USE OF COMPOUND MICROLENS ARRAYS AS A MAGNIFIER IN NEAR-EYE HEAD-UP DISPLAYS

by

Hongbae Sam Park

B.A.Sc., Simon Fraser University, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2015

© Hongbae Sam Park, 2015

Abstract

This thesis reports a new approach for making a very compact near-eye display (NED) using two microlens array (MLA) layers. The two MLAs work in conjunction as a magnifying lens (MLA magnifier). The purpose of the MLA magnifier is to help the eye accommodate to a display positioned within several centimeters of the eye, by generating a virtual image of the display at optical infinity. While techniques for similar purposes have recently been developed, such as waveguides [17, 18] and retinal scanning methods [21], a magnifying lens has been the most exploited avenue for generating a virtual image because of its rather simple, tried-and-true optical properties; near-eye display systems that incorporate a magnifying lens, whether a single element or a compound lens, have been well studied since the dawn of head-up displays. However, magnifying-lens-based optics is inherently hard to make compact, because as the focal length becomes smaller, the thickness of the lens becomes larger.

This thesis presents in detail a method for making an MLA magnifier that retains a thin profile of about 2 mm in thickness with a system focal length of about 6 mm. The total thickness of the MLA magnifier system is thus around 8 mm (excluding the thickness of the display) in a non-folded optics configuration, which is much more compact than other popular near-eye displays with folded optics, such as Google Glass or Recon Instruments' Snow HUD goggles.

Preface

The material presented in this thesis resulted from independent and original research undertaken by myself, with the guidance of Prof. Boris Stoeber and support from Recon Instruments Inc. In Chapter 1, Figures 1.1, 1.4, and 1.5 are used with permission from the original composers of the images. Figures 1.2 and 1.6 are public domain images, and Figure 1.3 is a derivative of an image originally licensed under the Creative Commons BY-SA 3.0 Unported license.

Chapters 2, 3, and 4 were written primarily by Hongbae Sam Park with feedback from Prof. Boris Stoeber. In Chapter 2, Figure 2.14 is used with permission from the original composer of the image. A version of Chapter 2 has been published in a conference paper in the proceedings of the IEEE MEMS 2015 conference, titled "Compact Near-Eye Display System Using a Superlens-Based Microlens Array Magnifier." Parts of the optimization study presented in Chapter 3 and part of Chapter 4 have also been published in the same conference paper. Based on the findings of the research presented here, a provisional U.S. patent application has been filed, of which I am the primary inventor. Except for Figures 1.2, 1.3 and 1.6, as well as where otherwise noted, this thesis is licensed under the Creative Commons BY-NC-ND 2.5 CA license.

Table of Contents

Abstract .......................................................................................................................................... ii
Preface ...........................................................................................................................................
iii Table of Contents ......................................................................................................................... iv List of Tables ............................................................................................................................... vii List of Figures ............................................................................................................................. viii List of Symbols ........................................................................................................................... xiii List of Abbreviations ................................................................................................................ xvii Glossary .................................................................................................................................... xviii Acknowledgements .................................................................................................................... xix Dedication .....................................................................................................................................xx Chapter 1: Introduction ................................................................................................................1 1.1 Background in Head-up Displays ................................................................................... 1 1.2 Background in Near-eye Displays .................................................................................. 4 1.3 Challenges in Designing NED Optics............................................................................. 6 1.4 Objectives ....................................................................................................................... 7 Chapter 2: Design of Microlens Array Magnifier ......................................................................8 2.1 Background in Superlens ................................................................................................ 8 2.2 Design ............................................................................................................................. 8 2.2.1 Analytical Model ........................................................................................................ 9 2.2.2 Light Throughput Analysis ....................................................................................... 17 2.2.2.1 Convex-convex MLA Magnifier ...................................................................... 17 v  2.2.2.2 Concave-convex MLA Magnifier ..................................................................... 24 2.2.3 Eyebox Formation ..................................................................................................... 25 2.2.3.1 Consideration for Thick Lenses ........................................................................ 28 2.2.4 Optimization ............................................................................................................. 29 2.2.4.1 Design Tradespace Analysis ............................................................................. 30 2.2.5 Modulation Transfer Function for Resolution Estimation ........................................ 34 2.2.6 Simulation ................................................................................................................. 36 2.2.6.1 Simulated Resolution (With Microlens Arrays) ............................................... 
42 Chapter 3: Prototyping................................................................................................................46 3.1 Fabrication of the Microlens Arrays ............................................................................. 46 3.1.1 Fabrication Techniques ............................................................................................. 46 3.1.2 Process Overview...................................................................................................... 46 3.1.2.1 Photolithography ............................................................................................... 47 3.1.2.2 Reflow ............................................................................................................... 49 3.1.2.3 Molding and Removal of the Cast MLA .......................................................... 51 3.1.3 Experiments .............................................................................................................. 52 3.2 Making of the Prototype ............................................................................................... 55 Chapter 4: Evaluation of Microlens Array Magnifier .............................................................58 4.1 Experimental Setup ....................................................................................................... 58 4.2 Measurement of the Resolution .................................................................................... 60 4.3 Measurement of the Angular FOV ............................................................................... 68 4.4 Eyebox .......................................................................................................................... 71 Chapter 5: Conclusions and Future Work ................................................................................77 vi  5.1 Concluding Remarks ..................................................................................................... 77 5.2 Future Work .................................................................................................................. 78 Bibliography .................................................................................................................................80 Appendices ....................................................................................................................................85 Appendix A ............................................................................................................................... 85 Appendix B ............................................................................................................................... 88 Appendix C ............................................................................................................................... 92  vii  List of Tables  Table 2.1: Combinations of the convex and concave MLA for magnifying condition. ............... 16 Table 2.2: MLA magnifier parameters for the best design option. ............................................... 34 Table 4.1: Horizontal spatial frequency and converted angular resolution of the test patterns. ... 65 Table 4.2: The size of the eyebox from theory, simulation, and measurements........................... 75 viii   List of Figures  Figure 1.1: Example of a simple HUD [1]. a) An electronic dot sight. b) A view through the dot sight. 
................................................................................................................................................ 1 Figure 1.2: HUD of a Boeing C-17 Globemaster aircraft [3]. Public domain image. .................... 2 Figure 1.3: Example of an in-car HUD [4]. .................................................................................... 3 Figure 1.4 Waveguide-based aircraft HUD introduced by BAE systems [5]. ................................ 4 Figure 1.5 A transparent OLED HUD for cars introduced by Kolon Neoview Inc. [6]. ................ 4 Figure 1.6: A helmet worn by F-35 fighter jet pilots. The image is projected on the inside of the visor [7]. Public domain image. ...................................................................................................... 5 Figure 2.1: Definition of the ray height and ray angle for the input and output rays to the MLA system. The refraction of the rays at the boundaries of the MLA system is arbitrary. ................... 9 Figure 2.2: Ray propagation through a decentered microlens. ..................................................... 11 Figure 2.3: Definition of the variables used in ray transfer analysis of the superlens. ................. 12 Figure 2.4: Light propagation through the N-th microlens on each MLA; the N-th microlens on the first MLA is assumed to be a pinhole to only consider the principal ray of the first N-th microlens. ...................................................................................................................................... 18 Figure 2.5: Light propagation through the MLA pair with an off-axis source. The microlenses on the first array are assumed to be pinholes. .................................................................................... 19 Figure 2.6: Light propagation through the MLA pair with an off-axis source. The microlenses on the first arrays have full apertures. ................................................................................................ 22 Figure 2.7: Ray diagram of the concave-convex system. ............................................................. 24 Figure 2.8: Eyebox size for a convex-convex MLA magnifier .................................................... 26 ix  Figure 2.9: Eyebox size for a concave-convex MLA magnifier ................................................... 27 Figure 2.10: Ray propagation through a concave and a convex microlens. Solid lines are the actual refracted rays, and dashed lines are imaginary rays which do not refract at the MLA surfaces. ........................................................................................................................................ 29 Figure 2.11: Design tradespaces spanned by the focal length of the first MLA f1, and F, the focal length of the MLA magnifier. a) Tradespace in terms of the exit angle of the collimated beams. b) Tradespace for the exit angle with unattainable design options indicated as an area at the bottom. c) Same tradespace as in b) marked with a line that indicates design options that correspond to the exit angle of -10°. d) Tradespace in terms of the eye relief, with eye relief that corresponds to 20 mm marked in red lines............................................................................................................ 31 Figure 2.12: Illustration of single microlens and its ROC and θ. ................................................. 
32 Figure 2.13: Illustration of the change in eye relief due to the difference in the collimated beam width. θout-max ................................................................................................................................. 33 Figure 2.14: Two stimuli with different spatial frequency having different contrast outcomes through the same optical component [51]. .................................................................................... 35 Figure 2.15: MTF response of a computer-modelled human eye (inset). ..................................... 36 Figure 2.16: Simulation of the MLA magnifier with light sources at different heights on the object plane. .................................................................................................................................. 37 Figure 2.17: Close-up view of the two MLA layers ..................................................................... 37 Figure 2.18: Ray-tracing simulation of the MLA magnifier with a model of the eye. a) Image showing all of the components. b) Image zoomed in on the retina. ............................................. 38 Figure 2.19: Ray-tracing simulation of the inter-lens gap. a) Image showing all of the components. b) Image zoomed in on the retina. ........................................................................... 40 x  Figure 2.20: The spot diagrams of the focused light rays on the retina, from the microlenses (left), and from the inter-lens gaps (right). ............................................................................................. 41 Figure 2.21: Simulated MTF response of the MLA magnifier, with inter-lens gaps blocked. The color-coded lines represent MTF responses of light sources at different locations on the object plane, whose coordinates on the object plane are noted in the labels above the plot. Lines labeled as either T or S respectively represents tangential or sagittal plane MTF responses. .................. 42 Figure 2.22: The tangential and the sagittal planes of a lens produce different focal points for off-axis sources. .................................................................................................................................. 43 Figure 2.23: Simulated MTF response of light coming from inter-lens gaps, with microlenses blocked. Although the same light sources are used as in Figure 2.22, the contrast reaches zero at much lower spatial frequency. Lines labeled as either T or S respectively represents tangential or sagittal plane MTF responses. The spatial frequency axis is half the scale of that in Figure 2.21........................................................................................................................................................ 44 Figure 2.24 Image simulation using Zemax. The input image (one of the Zemax-supplied test images) used is shown (top). What it would look like through the MLA magnifier (bottom). .... 45 Figure 3.1: Description of each process stages ............................................................................. 47 Figure 3.2: Sections of the MLA mask design. a) A complete array of 70x70 microlenses. The arrows indicate location of the thickness measurements after the reflow. b) Zoomed into one corner of the array. The red line shows the scanning path of the profilometer for measuring the height of the photoresist structure after reflow. ............................................................................ 
48 Figure 3.3: Photo of the photoresist islands before and after the reflow. a) The islands have a flat plateau before the reflow. b) After the reflow, the surface profile of the islands is round. .......... 49 Figure 3.4: Parameters of the photoresist cylinder and the spherical cap. .................................... 50 xi  Figure 3.5: The silicon wafer mold after the cast PDMS has been cut and peeled. ..................... 51 Figure 3.6: pictures of the MLA taken with a microscope. a) Perspective view of the array of spherical caps on the wafer. b) Looking down at the cast concave MLA. ................................... 52 Figure 3.7: The photoresist thickness vs. the spin coater speed in RPM. ..................................... 53 Figure 3.8: The top and bottom diameters of the mound, as measured in Solidworks. ................ 54 Figure 3.9: Pictures of the 3D-printed frame and the Moiré patterns caused by the rotational misalignment. a) 3D-printed frame and the convex MLA. b) – d) Progression of the rotational misalignment of the convex MLA in the counter-clockwise direction, from the most to the least misaligned. e) The convex MLA is now misaligned in the clockwise direction. The orientation of the Moiré patterns is now reversed. Thus the MLAs are thought to be in alignment when the Moiré pattern orientation is on the verge of being reversed. ........................................................ 56 Figure 4.1: The test setup placed on an optical table is shown. .................................................... 58 Figure 4.2: The microdisplay with a test image of Lenna displayed. a) The test image seen without the MLA magnifier, with camera focused at the display. b) The test image seen without the MLA magnifier, with camera focused at infinity. c) Test image seen through the MLA magnifier. The original test image used is also shown (inset). ..................................................... 59 Figure 4.3: 0.4 and 1.6 cycle/mm test patterns and their magnified images................................. 61 Figure 4.4: 6.4 cycle/mm test patterns with both vertical and horizontal lines and their magnified images. .......................................................................................................................................... 61 Figure 4.5: 8.0 cycle/mm test patterns with both vertical and horizontal lines and their magnified images. .......................................................................................................................................... 62 Figure 4.6: 12.8 cycle/mm test patterns with both vertical and horizontal lines and their magnified images. ......................................................................................................................... 62 xii  Figure 4.7: Portrayal of the angular FOV (FOV°) of a camera and its related parameters. The object is not necessarily at infinity................................................................................................ 63 Figure 4.8: MTF plot of the MLA magnifier from both simulation and measured contrast. ....... 68 Figure 4.9: The microdisplay size and the total image size in pixels (top). The angular FOVs of the microdisplay and the entire image are depicted, as well as the distance to the camera aperture da (bottom). .................................................................................................................................... 
69 Figure 4.10: The size of the entire picture and the magnified image of Lenna in number of pixels........................................................................................................................................................ 71 Figure 4.11: A view of the measurement setup from the rear. ..................................................... 72 Figure 4.12: The eye location at the extremes of the eyebox. ...................................................... 73 Figure 4.13: Image of the displayed frame at the extremes of the eyebox. .................................. 74 Figure 4.14: Illustration of the field curvature. ............................................................................. 75 Figure C.1: Ray diagram of the thick concave lens. The bold lines are the actual light rays, and the broken lines are the virtual rays. The direction of ray propagation is indicated with the arrows........................................................................................................................................................ 92 Figure C.2: Ray diagram of the convex microlens. ...................................................................... 94 xiii  List of Symbols  B: A subpixel of a display pixel that represents the color blue. BsRGB: The blue subpixel whose color is represented in the sRGB color space. B8bit: The blue subpixel with its color represented as a value in 8 bits. d: The gap between the first and the second microlens array. da: The distance from the microdisplay to the camera aperture. D: The diameter of a microlens. Df1: A variable that describes how spread out the cone of light is, created from the refraction of the first MLA, at the plane of the second MLA. f1: Focal length of microlenses on the first MLA. f2: Focal length of microlenses on the second MLA. F: The distance between the object and the first MLA from the object. F1: The distance between the object and the flat surface of the thick plano-concave microlens. F2: The distance between the object and the flat surface of the thick plano-convex microlens. F1’: The distance between the virtual image plane generated by the refraction at the flat surface of the thick plano-concave microlens. F2’: The distance between the virtual image plane generated by the refraction at the flat surface of the thick plano-convex microlens. G: A subpixel of a display pixel that represents the color green. GsRGB: The green subpixel whose color is represented in the sRGB color space. G8bit: The green subpixel with its color represented as a value in 8 bits. h': The height to the point of refraction measured from the optical axis on the principal plane. xiv  h1': Has the same definition as h’ but for the plano-concave microlens. h2': Has the same definition as h’ but for the plano-convex microlens. hin: the offset or height of the input light ray from the optical axis, normal to the axis. It is related to the object size. hin-max: the maximum height of the object, measured from the optical axis.  hout: the offset or height of the output light ray from the optical axis, normal to the axis. It is related to the image size. hN2: The height of the light ray originating from a point source on the object plane at a height h from the optical axis, measured above the optical axis at the plane of the second microlens array. 
hN2’: The height of the light ray originating from a point source on the object plane at a height h from the optical axis, measured below the optical axis at the plane of the second microlens array. ∆h: Offset or decentration of a microlens from the optical axis. NB: When considering a 1-dimensional microlens array, it is the total number of microlenses above the optical axis that take part in the collimation of light rays originating from an off-axis point on the object plane. NT: When considering a 1-dimensional microlens array, it is the total number of microlenses above the optical axis that take part in the collimation of light rays originating from an off-axis point on the object plane. M: The total number of microlenses that take part in the collimation of light through the microlenses, originating from a common point on the object plane. p1: Pitch of the microlenses on the second MLA. p2: Pitch of the microlenses on the second MLA. r: The radius of the base of the microlens mound before reflow.  xv  re: The eye relief. R: A subpixel of a display pixel that represents the color red.  Rs: The radius of curvature of a microlens after the reflow. RsRGB: The red subpixel whose color is represented in the sRGB color space. R8bit: The red subpixel with its color represented as a value in 8 bits. sRGB: A standard for representing the RGB color space. S: The saggital plane of the light passing through a lens. It is the plane that horizontally dissects the lens. t: The thickness of a microlens used in the thick-lens model. t1: The thickness of the substrate of the first MLA. t2: The thickness of the substrate of the second MLA. tp: The thickness of the deposited photoresist. ts: The height of the spherical cap (reflowed microlens), or the microlens sag.  T: The tangential plane of the light passing through a lens. It is the plane that vertically dissects the lens. v1: The image distance of the concave microlens, measured from the vertex of the curved surface. wmax: The distance from the optical axis to the center of the collimated beam. x1: The distance between the principal plane and the center of the concave microlens. x2: The distance between the principal plane and the center of the convex microlens. θF1: Angle between the light ray originating from a point a distance F away from the first surface of the microlens arrays, and the optical axis, for the concave microlens. θF2: Has the same definition as θF1, but for the convex microlens. xvi  θF1’: When the light ray originating from a point a distance F away is refracted by a medium, it is the angle between the optical axis and the refracted ray. θF2’: Has the same definition as θF1’, but for the convex microlens. θin: Angle of the input light ray to an optical system relative to the optical axis of the system θout: Angle of the output light ray to an optical system relative to the optical axis of the system θout-max: The maximum θout as permitted by hin-max.   xvii  List of Abbreviations  FOV: Field of view FOV°: Angular field of view LED: Light emitting diode HMD: Head-mounted display or helmet-mounted display HUD: Head-up display HWD: Head-worn display MLA: Microlens array MTF: Modulation transfer function NED: Near-eye display PDMS: polydimethylsiloxane ROC: Radius of curvature VR: Virtual reality   xviii  Glossary  Angular frequency: the size of a feature on the object represented by the angle between the lines subtended from the edges of the feature to a common point where the feature is observed. 
Concave-convex system: A two-layer MLA magnifier in which the first MLA has concave microlenses and the second MLA has convex microlenses. Convex-convex system: A two-layer MLA magnifier in which both MLAs have convex microlenses. Eyebox: A volume in front of the NED optics in which the eye can see the entire image without the image being clipped. Eye relief: The clearance between the NED optics and the lens of the human eye. F-number: The ratio between the focal length and the aperture diameter of a lens. Inter-lens gap: The area on a MLA that is not occupied by microlenses. Modulation transfer function: A measure to quantify the resolution capability of an optical system. Sag: the height of a lens from the flat base of the lens to the center of the curved surface of the lens. Superlens: A lens composed of multi-layers of microlens arrays that can achieve unconventional optical properties such as very low F-number.   xix  Acknowledgements  My foremost acknowledgements are owed to Dr. Boris Stoeber and Dr. Reynald Hoskinson for their enormous support and encouragement throughout this research. I‘d also like to extend my appreciation to Dr. Albert Leung and Recon Instruments Inc. for providing a huge opportunity, and Recon employees whom I have interacted and worked together, especially the R&D team (Darrell, Etienne, and Hamid), as they allowed me to grow in many meaningful senses. I am also very grateful to Mitacs and Recon Instruments Inc. for funding this work.   xx  Dedication           I dedicate this thesis to my parents and mentors. 1  Chapter 1: Introduction 1.1 Background in Head-up Displays A head-up display, or HUD in short, is actively being used across many industries of today as a platform for delivering various information to its user. HUDs take many physically and mechanically diverging forms but all of them share one common function; as evident from the term, HUD indicates a type of display that allows the user to obtain information without having to take away attention from the surroundings. In this sense, a TV is not a HUD because our primary vision is focused at the TV when we watch TV, not necessarily at the background scenery (e.g. the wall) behind the TV. On the other side, an electronic dot sight (as in Figure 1.1) on a firearm can be classified as a simple form of a HUD, because the user‘s primary vision would be concentrated at the background scenery through the sight. Using a bright light source such as a light-emitting diode (LED), it produces a marker in the shape of a dot, cross, or a bullseye that is aligned with the aim of a firearm. Then, collimating optics projects the marker onto the visual field of the user in order to minimize the focal discrepancy between the marker and the visual field.  a)  b)  Figure 1.1: Example of a simple HUD [1]. a) An electronic dot sight. b) A view through the dot sight. 2  HUDs first emerged from the aerospace industry shortly after the World War II era [25]. Initially, HUDs served the purpose as gun sights on military aircrafts to help pilots aim at moving targets while maneuvering the aircraft. Today, aircraft HUDs are linked with many onboard sensors and devices and used as instrument clusters that provide vital information to the pilot. Aircraft HUDs are commonly affixed on the top of the instruments panel near the windshield, such as shown in Figure 1.2.   Figure 1.2: HUD of a Boeing C-17 Globemaster aircraft [3]. Public domain image.  
The particular aircraft HUD in Figure 1.2 has a transparent image combiner that is angled towards the pilot in such a way that the image from a projector below is reflected from the inner surface of the combiner and aligned with the pilot's line of sight. The combiner is typically a plate beam splitter that can reflect or pass light by exploiting various properties of light such as wavelength and polarization.

Over time, HUDs have become more mainstream and have shown their strong suit in other industries such as the automotive industry. In automobiles, HUDs can deliver information about the dynamics of the car (e.g. speed), and more, to the driver without the driver having to divert their sight from the road. An example of a commercial automotive HUD is shown in Figure 1.3.

Figure 1.3: Example of an in-car HUD [4].

Automotive HUDs are typically composed of an image generator (i.e. a digital micromirror device projector combined with collimating optics) that projects the image onto a combiner placed on the instrument panel of the car, or onto the windshield, as is the case in Figure 1.3. The image reflected off the surface of the combiner or windshield is then seen by the driver. Other automotive HUDs project the image directly onto the inner surface of the windshield, in which case the curvature of the windshield must also be taken into consideration in the design of the HUD system. Other recently developed HUD systems make use of optical waveguides or transparent OLED displays, as shown in Figures 1.4 and 1.5. The advantage of such systems is that the image projector and associated collimating optics can be eliminated (in the case of waveguide-based HUDs), allowing a compact size and the creation of standalone displays that can be easily integrated into an existing environment.

Figure 1.4: Waveguide-based aircraft HUD introduced by BAE Systems [5].

Figure 1.5: A transparent OLED HUD for cars introduced by Kolon Neoview Inc. [6].

1.2 Background in Near-eye Displays

A near-eye display, or NED for short, is a display that is placed close to the eye. An NED is indeed a form of HUD, where in most cases the display is mounted to the user's head. By utilizing the user's head as an attachment point, the display can move with the head, making the display in a sense "embedded" in the user's field of view. This is an advantage over other, stationary HUDs in that the display information is always available at the user's discretion regardless of where the user is looking. This allows the user to keep focus in situations that require constant attention to the forward vision while continuously scanning the surroundings, such as driving on a winding road or skiing down a hill. Other terms such as head-mounted display (HMD) or head-worn display (HWD) are often used interchangeably to indicate HUDs of the same nature as an NED. However, HMDs and HWDs tend to describe displays in the form of a helmet (as in Figure 1.6) or headgear, whereas NEDs tend to take much more compact forms such as a pair of glasses, thus being "wearable," meaning they are unobtrusive and small enough to be used in day-to-day life. Examples of commercially available NEDs include Google Glass and Recon Instruments' Jet.

Figure 1.6: A helmet worn by F-35 fighter jet pilots. The image is projected on the inside of the visor [7]. Public domain image.
1.3 Challenges in Designing NED Optics

NED optics must be designed with several aspects in mind. First, it needs good resolving power, ideally capable of supporting the resolution of the eye, assuming the display resolution is equal or better. Second, a reasonable field of view (FOV) is desired. For example, virtual reality (VR) NEDs would ideally require a wide FOV that encompasses the entire FOV of the eye (in terms of an angle, 120° horizontally for binocular vision). The NED optics also needs to provide appropriate accommodation. For NEDs that occupy only part of the eye's FOV, the optics should generate a virtual image at optical infinity in order to minimize the focal discrepancy between the background scenery and the virtual image. For non-transparent VR NEDs, which cover the entire vision with the background scenery blocked, the accommodation needs to be adjustable to support various conditions of the eye such as myopic (near-sighted) and hyperopic (far-sighted) conditions, as well as the normal eye condition.

All of these aspects must be addressed while also keeping the NED optics compact. NEDs need to be as compact as possible for several reasons. From the form-factor standpoint, an NED that is bulky and juts out too far from the user's face can negatively impact its appearance. Compactness also correlates with weight reduction; low weight is desirable since NEDs are mounted to the head, and any added weight on the head can impede head movements and make the device uncomfortable for the user. Miniaturization of the optics can contribute to a reduction of the overall size of the NED, but not without compromising some of these requirements, especially the FOV. Take a refractive lens such as a singlet magnifying lens (magnifier), for example; magnifier-based optics is perhaps the most commonly used collimator in an NED due to its simplicity. We can reduce the overall size of the magnifier by first reducing its surface area, which can be achieved by simply using a lens with a smaller diameter. However, this reduces the FOV from the eye's perspective, as less of the object is seen through the smaller aperture. We can compensate for the lost FOV and also reduce the thickness of the magnifier system by reducing the focal length. However, reducing the focal length requires reducing the radius of curvature (ROC) of the magnifier, which introduces more severe lens aberrations and increases the sag (or lens height) of the magnifier. Suffice to say, refractive lenses, with their own limitations, have clear drawbacks in making a compact NED.
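To make the scale of this focal-length versus thickness tradeoff concrete, the short sketch below estimates the radius of curvature and sag of a plano-convex singlet for a few focal lengths, using the thin-lens lensmaker relation f = R/(n - 1) and the spherical-cap sag formula. It is only an illustrative calculation; the refractive index (acrylic-like, n = 1.49) and the 10 mm aperture are assumptions for the example, not parameters used elsewhere in this thesis.

```python
import math

def plano_convex_geometry(f_mm, aperture_mm, n=1.49):
    """Estimate ROC and sag of a plano-convex singlet (thin-lens approximation).

    One curved surface: f = R / (n - 1). Sag of a spherical cap of radius R
    spanning a full aperture D: sag = R - sqrt(R**2 - (D/2)**2).
    """
    R = f_mm * (n - 1.0)                       # required radius of curvature
    half_aperture = aperture_mm / 2.0
    if R <= half_aperture:                     # spherical surface cannot span the aperture
        return R, None
    sag = R - math.sqrt(R**2 - half_aperture**2)
    return R, sag

for f in (40.0, 20.0, 10.0):                   # shrinking the focal length
    R, sag = plano_convex_geometry(f, aperture_mm=10.0)
    sag_txt = "not attainable" if sag is None else f"{sag:.2f} mm"
    print(f"f = {f:4.0f} mm -> R = {R:5.2f} mm, sag = {sag_txt}")
```

Halving the focal length roughly doubles the sag, and below a certain focal length a spherical surface cannot even span the chosen aperture; this is the thickness penalty that motivates the MLA magnifier.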
1.4 Objectives

We propose an MLA-magnifier-based NED optics that can be very compact, by utilizing the unconventional properties of the MLA magnifier, such as a very low F-number. First, we study the optical underpinnings of the cascaded MLA layers and establish a theoretical model of an MLA magnifier in terms of the microlens parameters, such as the focal length and the pitch of the microlenses. Next, we optimize our MLA magnifier design to best conform to our target values for the FOV (20°) and the eye relief (20 mm). We use Recon Instruments' Snow HUD as a benchmark; thus the FOV and the eye relief of the Snow HUD are selected as our target requirements. Once the theoretical design is optimized, we simulate the design to verify our performance predictions. Lastly, we intend to demonstrate the theory of the MLA magnifier by producing a real-world prototype. One of the MLAs is sourced from a commercial manufacturer, and the theoretical model and simulation of the MLA magnifier are based on the microlens parameters of the commercial MLA. The other MLA is fabricated in-house. The theoretical model, simulation, and fabrication of the MLA are discussed in detail in the subsequent chapters.

Chapter 2: Design of Microlens Array Magnifier

2.1 Background in Superlens

Typically, a lens that can perform beyond conventional optical boundaries (such as the diffraction limit of the optical resolution, as in [8]) is referred to as a superlens. In this dissertation, the term superlens will strictly indicate compound optical structures composed of microlens arrays that can achieve unconventional optical properties. One superlens of this nature has been termed in prior publications by Hembd-Sölner [9], Duparré [10], and Stollberg [11] the Gabor superlens, which stems from Dennis Gabor's invention [13] in the mid-20th century. Gabor's invention describes an optical system consisting of multiple layers of convex microlenses ("lenticules," as they are described in the invention) which can collectively behave as a single lens by having microlenses with non-identical spacings (or pitches) and focal lengths. Gabor superlenses have been used to form image recording systems that benefit from their inherent compactness.

2.2 Design

The proposed MLA magnifier is based on the superlens concept. Contrary to the use of superlenses in previously reported works [10, 11, 12] as an optical component to focus light onto an image sensor, we intend to use the superlens to collimate light from a display in an NED system. We are also interested in making the superlens-based NED as compact as possible, and therefore want to keep the number of MLA layers in the NED as low as possible. Since the minimum number of MLA layers that can form a superlens is two, we will design our superlens with two MLA layers.

2.2.1 Analytical Model

Gabor [13] and Duparré [10] show that when two MLAs with different pitches are placed in close proximity, they can act as a single lens. According to the ray-tracing formalism introduced by Lindlein [15], the ray transfer analysis of an array of lenses can be carried out by treating the lens array as a collection of single lenses that are periodically decentered from the central optical axis of the MLA. The analytical model of the microlens arrays in this study follows a similar formalism, using ray transfer matrices. The ray transfer analysis can be performed in only one dimension of the MLA if we assume that the microlenses have the same periodicity in both dimensions of the MLA plane. The model of the MLAs that we would like to analyze consists of an object, the MLAs, and an image, as shown in Figure 2.1.

Figure 2.1: Definition of the ray height and ray angle for the input and output rays to the MLA system. The refraction of the rays at the boundaries of the MLA system is arbitrary.

In Figure 2.1, the propagation of a single light ray through the MLA system is shown, taking an arbitrary path through the system. We will let the optical axis be in the direction of
The light rays originate from a point at a height of hin from the optical axis and an arbitrary distance u before the MLA system. The rays entering the system make an angle of θin with respect to the optical axis, and the exiting rays converge at a point an arbitrary distance v away. From the optical axis‘ perspective, the MLA is essentially a multitude of decentered microlenses. The decentration of the microlenses introduces a shift to the ray height denoted as ∆h, and tilting of the ray angle ∆θ which is essentially the difference in angle between the decentered and non-decentered optical paths of the same microlens, at the exit. The change in the optical path due to the additional shift and tilt introduced by the microlens decentration is also shown in comparison to the optical path without decentration. In order to account for the addition of these shift variables, a 3×3 matrix for the ray transfer analysis needs to be used instead of the typical 2×2 matrix, as in [12, 15]. For simplicity, assume that we have already defined the MLA system matrix denoted as Msys. Then, we can express the relationship between the output ray height and angle to the input ray height and angle of the MLA system with the following ray transfer matrix equation,   𝑕𝑜𝑢𝑡𝜃𝑜𝑢𝑡1 = 𝑀𝑠𝑦𝑠  𝑕𝑖𝑛𝜃𝑖𝑛1 . (1) Msys can be obtained by cascading appropriate matrices that describe either free-space propagation or refraction introduced by each section of the MLA system.  The microlenses on an MLA are periodically spaced. From the optical axis‘ perspective, the microlenses on the MLA can be treated as a group of microlenses with linear offset from the optical axis. To study the refraction through the MLA, we first look at the refraction through a single decentered microlens, which can be studied in three steps. First, the decentration of an arbitrary microlens on the MLA needs to be considered. The arbitrary location of the microlens 11  with respect to the global optical axis is expressed by a shift variable ∆h, as shown in Figure 2.2. Let us first assume that there is an input ray coming in from a global origin, with an arbitrary θin. Since the refraction only happens locally at the decentered microlens, we first need to shift the input light ray by an offset of ∆h to align the ray with the local scope of the microlens. We can then introduce the local refraction at the microlens. Lastly, the exiting ray from the microlens is shifted again by –∆h to bring it back to the global scope.   Figure 2.2: Ray propagation through a decentered microlens.  These steps are reflected in eq. 2 where, starting from the right side, the three matrices represent each of the shift-refraction-shift steps:  𝑀𝑑𝑒𝑐𝑒𝑛𝑡𝑟𝑎𝑡𝑖𝑜𝑛 =  1 0 −∆𝑕0 1 00 0 1  1 0 0−1𝑓1 00 0 1  1 0 ∆𝑕0 1 00 0 1    =  1 0 0−1𝑓1 −∆𝑕𝑓0 0 1 . (2) Eq. 2 represents the refraction through a decentered microlens. We will assume the lens surfaces to be thin (i.e. thin-lens approximation) to simplify the refraction through the lenses for now. Global optical axis Local optical axis of the lens Entrance decentration, ∆𝑕 Exit  decentration, −∆𝑕 Microlens at the center of MLA Microlenses offsetted by ∆𝑕  12  This will reduce the algebraic efforts for solving the MLA system. We will later introduce thick-lens models to more closely approximately the actual refraction through the lenses. Using the transfer matrix of eq. 
Figure 2.3 is a more detailed look at the MLA magnifier, in which some of the variables associated with the two MLA layers and the microlenses on each MLA are shown.

Figure 2.3: Definition of the variables used in the ray transfer analysis of the superlens.

In Figure 2.3, d refers to the gap between the MLA layers, p1 and p2 refer to the pitch of the first and the second MLA respectively, and F is the distance between the object and the MLA magnifier. N1 and N2 indicate the N-th microlens on the first and the second MLA, respectively. Although not shown in Figure 2.3, f1 and f2 refer to the focal lengths of the first and the second MLA. The two-layer MLA magnifier has five ray transfer segments: starting from the object, the first segment is free-space propagation over a distance F. The second segment is the first MLA, which introduces decentered refraction. The third segment is another free-space propagation between the MLAs over a distance d. The fourth segment is the second MLA, whose decentered microlenses refract the light once more. The fifth segment is the final free-space propagation to the image plane a distance v away. With the refraction matrix for an MLA defined as in eq. (2), we can now derive the ray transfer matrix of the superlens system in Figure 2.3 by cascading the appropriate matrices that represent each segment, such that:

M_{sys} = \begin{bmatrix} 1 & v & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ -\frac{1}{f_2} & 1 & \frac{N_2 p_2}{f_2} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & d & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ -\frac{1}{f_1} & 1 & \frac{N_1 p_1}{f_1} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & F & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} M_{11} & M_{12} & \Delta h \\ M_{21} & M_{22} & \Delta\theta \\ 0 & 0 & 1 \end{bmatrix}.    (3)

The matrix entries in eq. (3) are abbreviated for simplicity. The actual entries are shown in equations (4)-(9):

M_{11} = \frac{d(v - f_2) - v f_2}{f_1 f_2} - \frac{v}{f_2} + 1    (4)

M_{12} = F\left(d\left(\frac{v}{f_1 f_2} - \frac{1}{f_1}\right) - \frac{v}{f_1} - \frac{v}{f_2} + 1\right) - d\left(\frac{v}{f_2} - 1\right) + v    (5)

\Delta h = \frac{N_1 p_1}{f_1}\left(d\left(\frac{v}{f_2} - 1\right) - v\right) - \frac{N_2 p_2 v}{f_2}    (6)

M_{21} = \frac{d}{f_1 f_2} - \frac{1}{f_1} - \frac{1}{f_2}    (7)

M_{22} = F\left(\frac{d}{f_1 f_2} - \frac{1}{f_1} - \frac{1}{f_2}\right) - \frac{d}{f_2} + 1    (8)

\Delta\theta = N_1 p_1\left(\frac{d}{f_1 f_2} - \frac{1}{f_1}\right) - \frac{N_2 p_2}{f_2}.    (9)

In examining the MLA magnifier system, we find that several conditions need to be imposed on equations (4)-(9) if the output light rays are to be collimated.

Let us first assume that v in Figure 2.3 is finite, i.e. the image is formed at a non-infinite distance, and thus the output rays are not collimated. Also, we assume the light rays originate from a Lambertian point source on the object plane (i.e. a pixel on the display). Then there are infinitely many light rays travelling in all forward directions, each with a different θin. For an imaging condition, all of these rays must converge at the same point on the image plane an arbitrary distance v away. This means that all of the rays that vary in θin must have the same hout at the image plane, from which we conclude that hout must be independent of θin. Also, from eq. (6), we notice that Δh is a function of the microlens location parameters N1 and N2, meaning that each microlens location on the array could shift the ray by a different amount Δh. However, this should not affect where the rays converge either, thus hout also needs to be independent of Δh.
According to equations (1) and (3), the independence of hout from θin and Δh implies that M12 θin + Δh = 0. However, θin is not unique for a given N1 and N2 in Δh, because rays with more than one θin can pass through the apertures of the N1-th and N2-th microlenses on the first and second MLA layers. The statement M12 θin + Δh = 0 must therefore be true regardless of θin, which can only mean that M12 = 0. Subsequently, Δh = 0, because the above statement must hold for all N1 and N2. Let us now assume that the distance v is at infinity. The previous statement about hout being independent of θin and Δh continues to be true, because the output rays still need to converge at a common point on the image plane, which is now just infinitely far away. An infinite v means that all of the output rays are parallel to each other, and thus all of the output rays have the same θout. Since light rays launched from a common point at hin on the object plane have different θin, θout must be independent of θin. Furthermore, the N1- and N2-dependent variable Δθ should not affect θout, because θout must be constant for any microlens location. Following the same logic as for the independence of hout from θin and Δh, we conclude that M22 = 0 and Δθ = 0. Then, equations (5), (6), (8), and (9) become:

M_{12} = F\left(d\left(\frac{v}{f_1 f_2} - \frac{1}{f_1}\right) - \frac{v}{f_1} - \frac{v}{f_2} + 1\right) - d\left(\frac{v}{f_2} - 1\right) + v = 0,    (10)

\Delta h = \frac{N_1 p_1}{f_1}\left(d\left(\frac{v}{f_2} - 1\right) - v\right) - \frac{N_2 p_2 v}{f_2} = 0,    (11)

M_{22} = F\left(\frac{d}{f_1 f_2} - \frac{1}{f_1} - \frac{1}{f_2}\right) - \frac{d}{f_2} + 1 = 0,    (12)

\Delta\theta = N_1 p_1\left(\frac{d}{f_1 f_2} - \frac{1}{f_1}\right) - \frac{N_2 p_2}{f_2} = 0.    (13)

Then, the exit ray height and angle, hout and θout, are functions of only the input ray height hin, such that

h_{out} = M_{11} h_{in},    (14)

\theta_{out} = M_{21} h_{in}.    (15)

From eq. (11) and eq. (13), we see that the conditions are met only when N1 and N2 are equal, so that they can be factored out of the equations. Thus,

N_1 = N_2 = N.    (16)

This means that the two MLAs have the same number of microlenses, and the light from the N-th microlens of the first MLA needs to pass through the N-th microlens of the second MLA. Now, letting v → ∞ for collimation of light and rearranging the equations, we get:

M_{12} \rightarrow F = \frac{(d - f_2) f_1}{d - f_1 - f_2}    (17)

\Delta h \rightarrow \frac{p_2}{p_1} = \frac{d - f_2}{f_1}    (18)

M_{22} \rightarrow F = \frac{(d - f_2) f_1}{d - f_1 - f_2}    (19)

\Delta\theta \rightarrow F = \frac{p_2}{p_2 - p_1} f_1    (20)

h_{out} = \left(\frac{d(v - f_2) - v f_2}{f_1 f_2} - \frac{v}{f_2} + 1\right) h_{in} \rightarrow \infty    (21)

\theta_{out\text{-}max} = \frac{d - f_1 - f_2}{f_1 f_2}\, h_{in\text{-}max}.    (22)

In addition, we make use of the fact that the inter-MLA gap d is the sum of the image distance of the first MLA and the focal length of the second MLA. Therefore,

d = \frac{F f_1}{F - f_1} + f_2.    (23)

Eq. (20) reveals an interesting aspect of the two-layer MLA system; it shows that F is a simple function of the ratio between p2 and p1. F, as already defined, is the distance between the object and the MLA magnifier. However, if the image distance v is at infinity, then F essentially becomes the focal length of the MLA magnifier, which is achieved simply by having a pair of MLAs with different pitches. Three combinations of convex and/or concave MLAs satisfy equations (17)-(23) with F positive, the condition for image magnification. The combinations and their conditions are listed in Table 2.1.

Table 2.1: Combinations of the convex and concave MLA for the magnifying condition.

          f1 (first MLA)      f2 (second MLA)      F     p2/p1   θout
Combo 1   > 0 (convex MLA)    < 0 (concave MLA)    > 0   > 1     < 0
Combo 2   > 0 (convex MLA)    > 0 (convex MLA)     > 0   > 1     > 0
Combo 3   < 0 (concave MLA)   > 0 (convex MLA)     > 0   < 1     < 0
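As an illustration of how conditions (17), (18), and (23) produce collimation, the sketch below (Python/NumPy) assembles the cascaded matrices of eq. (3), using the decentered-lens form of eq. (2) with Δh = Np, for an arbitrary convex-convex parameter set. The focal lengths, pitch, and object distance are placeholders chosen only for the demonstration, not the optimized design discussed later; the printed exit angle comes out the same for every θin and every microlens index N, as required for collimation.

```python
import numpy as np

def prop(t):
    """Free-space propagation over a distance t (augmented 3x3 ray matrix)."""
    return np.array([[1.0, t, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def decentered_lens(f, dh):
    """Thin lens of focal length f, decentered by dh, in the closed form of eq. (2)."""
    return np.array([[1.0, 0.0, 0.0],
                     [-1.0 / f, 1.0, -dh / f],
                     [0.0, 0.0, 1.0]])

# Placeholder convex-convex design (millimetres); NOT the prototype values of this thesis.
f1, f2, p1, F = 1.0, 1.0, 0.100, 6.0
d = F * f1 / (F - f1) + f2        # inter-array gap, eq. (23)
p2 = p1 * (d - f2) / f1           # pitch required for collimation, eq. (18)
print(f"d = {d:.3f} mm, p2 = {p2:.4f} mm, F from eq. (20): {p2 * f1 / (p2 - p1):.3f} mm")

h_in = 0.5                        # height of a display pixel above the optical axis (mm)
for N in (1, 5, 20):              # index of the microlens pair
    M_sys = (decentered_lens(f2, N * p2) @ prop(d)
             @ decentered_lens(f1, N * p1) @ prop(F))
    for th_in in np.deg2rad([-10.0, 0.0, 10.0]):
        h_out, th_out, _ = M_sys @ np.array([h_in, th_in, 1.0])
        print(f"N = {N:2d}, theta_in = {np.degrees(th_in):6.1f} deg -> "
              f"theta_out = {np.degrees(th_out):.4f} deg")
```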
Among the three combinations of convex and concave MLAs in Table 2.1, Combination 1 results in the smallest number of microlenses contributing to the light propagation, and therefore produces the smallest eyebox. This can be verified with the light throughput analysis of Section 2.2.2. Therefore, only Combination 2 and Combination 3, denoted the convex-convex and the concave-convex system respectively, will be of interest in the subsequent sections.

2.2.2 Light Throughput Analysis

2.2.2.1 Convex-convex MLA Magnifier

Not all of the MLA magnifier area takes part in forming an image of the object, because the configuration of the MLA magnifier intrinsically limits the light throughput. The effective area of the MLA magnifier that contributes to forming an image can be estimated in terms of the number of microlenses. To do this, we first study the light propagation through the microlenses from a geometrical standpoint. Ultimately, we want to compare the location of the light rays that pass through the N-th microlens on the first MLA, which we denote hN2, to the location of the N-th microlens on the second MLA, which is Np2. The light rays that pass through the N-th microlens of the first MLA must also pass through the N-th microlens on the second MLA for the MLA magnifier to work, as found in Section 2.2.1. Therefore, we can establish that if hN2 falls outside the location Np2, then the N-th microlens does not refract light rays as designed and does not contribute to forming an image.

For the sake of simplicity, we will assume that the apertures of the microlenses on the first MLA are small enough to be approximated as pinholes, so that the light passing through each microlens can be approximated by a single principal ray that passes through the center of the microlens. By doing this, we can ignore any refraction introduced by the first MLA and treat the light rays that reach the plane of the second MLA as straight lines from the object plane. This can be represented by placing an aperture stop with a pinhole in front of the microlens, as in Figure 2.4. Also, we will assume that the principal ray originates from a point source that lies on the optical axis a finite distance away, and that the MLAs are thin enough for the thin-lens model to be used. If the principal rays pass through the centers of the N-th microlenses of the first MLA, the rays have a height of exactly Np1 at the plane of the first MLA. The rays continue to travel straight toward the second MLA and reach the plane of the second MLA at a height indicated as hN2 in Figure 2.4. Using a simple trigonometric ratio, we can relate hN2 and Np1 such that

h_{N2} = \frac{N p_1}{F}(F + d),    (24)

where F and d represent the distances associated with the MLAs and the object, as shown in Figure 2.4.

Figure 2.4: Light propagation through the N-th microlens on each MLA; the N-th microlens on the first MLA is assumed to be a pinhole to only consider the principal ray of the first N-th microlens.

The center of the N-th lens on the second MLA is at a height of Np2 from the optical axis.
From eq. (18), we know the ratio between p1 and p2 for the collimating condition, and the height of the center of the N-th lens on the second MLA can be described in terms of F, f1, and p1 such that:

N p_2 = \frac{N p_1 F}{F - f_1}.    (25)

Again, in order for the principal ray to be properly refracted by the microlens on the second MLA, the principal ray must reach the plane of the second MLA within the aperture of the N-th microlens on the second MLA, such that

\frac{N p_1 F}{F - f_1} - \frac{D_2}{2} \le h_{N2} \le \frac{N p_1 F}{F - f_1} + \frac{D_2}{2},    (26)

where D2 is the aperture diameter of the microlenses on the second MLA. Eq. (26) shows that the number of microlenses that contribute to the image formation is always bounded by some value determined by F, f1, and D2.

We will now make the principal-ray model of Figure 2.4 more general by taking into account an off-axis source with an offset hin, as in Figure 2.5.

Figure 2.5: Light propagation through the MLA pair with an off-axis source. The microlenses on the first array are assumed to be pinholes.

We will continue to consider only the principal ray through the first MLA. We will also assume that the rays radiate in arbitrary directions on either side of the optical axis. In Figure 2.5, two light rays are shown, representing the topmost and the bottommost rays of the cone of light radiated from a point source. Since the propagation of the rays departing from an off-axis source is no longer symmetrical about the optical axis, we will consider the topmost ray and the bottommost ray separately. Let us consider the topmost ray first. We make a small change to the previous notation by calling the N-th microlens above the optical axis the NT-th microlens, and the one below the optical axis the NB-th microlens. We can geometrically solve for the height of the principal ray at the plane of the second MLA, denoted hN2 in Figure 2.5, in terms of F, d, and the offset hin:

h_{N2} = \frac{N_T p_1}{F}(F + d) - h_{in}\frac{d}{F}.    (27)

Factoring in the diameter of the microlens, the range of heights covered by the N-th microlens aperture is N_T p_2 \pm \frac{D_2}{2}. Substituting p2 with p1 using eq. (18), this range becomes

N_T p_2 \pm \frac{D_2}{2} = \frac{N_T p_1 F}{F - f_1} \pm \frac{D_2}{2}.    (28)

We want to know what happens to the difference in height between hN2 and NTp2 as the microlens index NT grows. For this, we subtract the derivatives of the two heights:

\frac{d}{dN_T}\left(\frac{N_T p_1}{F}(F + d) - h_{in}\frac{d}{F}\right) - \frac{d}{dN_T}\left(\frac{N_T p_1 F}{F - f_1} \pm \frac{D_2}{2}\right) = \frac{p_1}{F}(F + d) - \frac{p_1 F}{F - f_1}.    (29)

The outcome of eq. (29) is positive for F > f1 and f2 > 0, after substituting d in terms of F, f1, and f2. We established above that the principal ray from the NT-th microlens of the first MLA must reach the second MLA plane within the aperture of the NT-th microlens on the second MLA, as in eq. (26). The fact that the right side of eq. (27) has a larger derivative with respect to NT than the right side of eq. (28) implies that as NT increases, i.e. as the location of the microlens moves further away from the optical axis, hN2 starts to become larger than NTp2.
28 because the height of the principal ray of the NT-th microlens on the first MLA, at the plane of the second MLA, will always be larger than the height of the NT-th microlens of the second MLA, for all NT greater than zero. This allows us to have a unique solution for NT, and NT will be at the maximum when eq. 27 and eq. 28 (ignoring the negative sign) are set equal (i.e. when the principal ray of the NT-th microlens on the first MLA reaches the second MLA plane, it touches the top of the NT-th microlens on the second MLA), such that:  𝑁𝑇𝑝1𝐹 𝐹 + 𝑑 − 𝑕𝑖𝑛𝑑𝐹=𝑁𝑇𝑝1𝐹𝐹−𝑓1+𝐷22. (30) Same analogy can be applied to hN2’ and NBp2 of the bottommost ray, and the resulting equation that will give the maximum number of NB is:  𝑁𝐵𝑝1𝐹 𝐹 + 𝑑 + 𝑕𝑖𝑛𝑑𝐹=𝑁𝐵𝑝1𝐹𝐹−𝑓1+𝐷22, (31) and we note that NT ≠ NB. Then, the addition of NT and NB will give us the total number of microlenses which successfully refract the principal rays from the first MLA, such that:   𝑁𝑇𝑝1𝐹 𝐹 + 𝑑 − 𝑕𝑖𝑛𝑑𝐹 +  𝑁𝐵𝑝1𝐹 𝐹 + 𝑑 + 𝑕𝑖𝑛𝑑𝐹 =𝑁𝑇𝑝1𝐹𝐹−𝑓1+𝑁𝐵𝑝1𝐹𝐹−𝑓1+ 𝐷2, (32)   𝑝1 𝐹+𝑑 𝐹 𝑁𝑇 + 𝑁𝐵  =𝑝1𝐹𝐹−𝑓1 𝑁𝑇 + 𝑁𝐵 + 𝐷2. (33) If we denote the total number of microlenses that will refract the principal rays as M, defined as  𝑀 = 𝑁𝑇 + 𝑁𝐵, (34) then eq. 33 becomes:   𝑝1 𝐹+𝑑 𝐹𝑀 =𝑝1𝐹𝐹−𝑓1𝑀 + 𝐷2. (35) 22  Solving for M, eq. 35 becomes:  𝑀 =𝐷2𝐹 𝐹−𝑓1 𝑝1 𝑑𝐹−𝐹𝑓1−𝑑𝑓1 . (36) Apparently, M does not depend on the offset hin (could also be thought of as the height of an object), meaning that M will be constant regardless of the object height, which would allow us to conveniently estimate the size of the eyebox. Let us now consider a case that is more realistic than approximating the light propagation with principal rays, as in Figure 2.6.  Figure 2.6: Light propagation through the MLA pair with an off-axis source. The microlenses on the first arrays have full apertures.  In Figure 2.6, we now assume that the microlenses on the first MLA have full apertures and thus all of the lens area can refract light. Then, the light from the first MLA will converge and form an image on the plane a distance v1 away from the first MLA. As the same light continues to propagate over a distance equal to f2 to the plane of the second MLA, it is spread over a range d hN2 𝑁𝑇𝑝1𝐹𝐹 − 𝑓1 𝐷𝑓1  v1 f2 𝑁𝐵𝑝1𝐹𝐹 − 𝑓1 F D1 hin Principal rays hN2’ MLA magnifier optical axis 23  denoted as Df1 in Figure 2.6. We note that this range can be simply described as a geometric ratio of the distances f2 and v1 in relation to the aperture diameter of the microlenses on the first MLA D1 such that  𝐷𝑓1 =𝐷1𝑣1𝑓2. (37)  Then, the ray height at the second MLA plane hN2 should be modified to reflect this as  𝑕𝑁2 =𝑁𝑇𝑝1𝐹 𝐹 + 𝑑 − 𝑕𝑖𝑛𝑑𝐹+𝐷12𝑣1𝑓2 (38) We note that as long as there is an overlap between Df1 and the aperture of the NT-th microlens, the light will refract as designed. This condition is met when the bottom of the Df1 is lower than the top of the NT-th microlens aperture, such that  𝑁𝑇𝑝1𝐹 𝐹 + 𝑑 − 𝑕𝑖𝑛𝑑𝐹−  𝐷12𝑣1𝑓2 ≤𝑁𝑇𝑝1𝐹𝐹−𝑓1+𝐷22. (39) Or alternatively,  𝑁𝑇𝑝1𝐹 𝐹 + 𝑑 − 𝑕𝑖𝑛𝑑𝐹−  𝐷1 𝑑−𝑣1 2𝑣1≤𝑁𝑇𝑝1𝐹𝐹−𝑓1+𝐷22, (40) because  𝑑 = 𝑓2 + 𝑣1. (41) For the bottommost ray, the equation becomes  𝑁𝐵𝑝1𝐹 𝐹 + 𝑑 + 𝑕𝑖𝑛𝑑𝐹−  𝐷1 𝑑−𝑣1 2𝑣1≤𝑁𝐵𝑝1𝐹𝐹−𝑓1+𝐷22. (42) Likewise, the addition of NT and NB will give the total number of microlenses that will allow the proper refraction, and the total microlens count M for the system can be found as  𝑀𝑝1𝐹 𝐹 + 𝑑 −  𝐷1 𝑑−𝑣1 𝑣1≤𝑀𝑝1𝐹𝐹−𝑓1+ 𝐷2,  (43) and M can be expressed in the other terms as 24   𝑀 =𝐹 𝐷1 𝑑−𝑣1 +𝐷2𝑣1  𝐹−𝑓1 𝑣1𝑝1 𝑑𝐹−𝐹𝑓1−𝑑𝑓1 . 
(44) Note that now the maximum M is bigger by the constant term, 𝐹𝐷1 𝐹−𝑓1  𝑑−𝑣1 𝑣1𝑝1 𝑑𝐹−𝐹𝑓1−𝑑𝑓1 , in comparison to the approximation of M with pinholes in eq. 36.  2.2.2.2 Concave-convex MLA Magnifier The same analysis as in the previous section can be applied to the concave-convex system with microlenses with full apertures. In the concave-convex system the first MLA has concave microlenses, and the second MLA has convex microlenses. Figure 2.7 is the ray diagram showing light rays propagating through the concave-convex system with microlenses having full apertures.   Figure 2.7: Ray diagram of the concave-convex system.  As in eq. (38)–(43), the following relationship can be established for the concave-convex system: F hN2 𝑁𝑇𝑝2 v1 f2 D1 hin d Principal ray hN2’ 𝑁𝐵𝑝2 MLA magnifier optical axis Virtual image plane of the concave MLA 25   𝑁𝑝1𝐹 𝐹 + 𝑑 − 𝑕𝑖𝑛𝑑𝐹−𝐷12𝑣1𝑓2 ≤𝑁𝑝1𝐹𝐹−𝑓1+𝐷22. (45) Alternatively,  𝑁𝑝1𝐹 𝐹 + 𝑑  − 𝑕𝑖𝑛𝑑𝐹−  𝐷1 𝑑+𝑣1 2𝑣1≤𝑁𝑝1𝐹𝐹−𝑓1+𝐷22, (46) since d + v1 = f2. Then, expressing eq. 45 in terms of the maximum microlens count M for the concave-convex magnifier,  𝑀𝑝1𝐹 𝐹 + 𝑑 −  𝐷1 𝑑+𝑣1 𝑣1≤𝑀𝑝1𝐹𝐹−𝑓1+ 𝐷2.  (47) And M in terms of the other system parameters is:  𝑀 =𝐹 𝐷1 𝑑+𝑣1 +𝐷2𝑣1  𝐹−𝑓1 𝑣1𝑝1 𝑑𝐹−𝐹𝑓1−𝑑𝑓1 .  (48) We notice that the upper bound for M is higher for the concave-convex system in comparison to the convex-convex system, thus we can say that the concave-convex system will have a larger area of MLAs that contributes to a bigger eyebox.  2.2.3 Eyebox Formation With the approximation of the amount of light output defined in terms of the number of microlenses M that contribute to light propagation, we can now estimate the size of the eyebox for the MLA magnifier. The eyebox is a volume of space in front of the magnifier in which the eye can perceive the entire virtual image without cropping or partial loss of an area of the virtual image. Figure 2.8 shows visualization of the eyebox of a convex-convex MLA magnifier. In Figure 2.8, three beams of the collimated light are shown exiting on the right side from the convex-convex MLA magnifier. Beam 1 is formed by lighting originating from the topmost part of the object, say a pixel at the top of the display, with a height of hin-max from the optical axis of 26  the MLA magnifier. Beam 2 originates from the middle of the display, whereas beam 3 is originated from the bottommost part of the display. Beam 1 makes an exit angle θout-max with respect to the optical axis. Beam 2, originating from the middle of the display, is aligned with the  Figure 2.8: Eyebox size for a convex-convex MLA magnifier  optical axis and hence the exit angle is zero. Beam 3 has an exit angle of -θout-max. All other light beams can be thought of as having an exit angle less than θout-max in magnitude. The outlined area is the eyebox within which all of the collimated light beams overlap, meaning that the entire image is visible in this area. For simplicity, we assumed that the object (display) is 1-D, and Figure 2.8 shows only the x-y plane view of the MLA magnifier. If we are to consider a 2-D display, then the eyebox would be a volume instead of an area. Also, the visualization of the light propagation between the MLAs is omitted to avoid unnecessary sophistication. The dimensions of the eyebox area can be found geometrically. A represents the height to the top of the collimated light beam 1 measured from the optical axis. 
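Since the eyebox dimensions derived below are driven by the throughput count M, it can help to see the three bounds above as numbers. The short Octave sketch below simply evaluates eq. 36 (pinhole model), eq. 44 (convex-convex, full apertures) and eq. 48 (concave-convex, full apertures). Every parameter value in it is an illustrative placeholder rather than a final design value, and the same placeholder set is reused for all three expressions purely so that they can be compared.

    % Illustrative evaluation of the throughput bounds of eqs. 36, 44 and 48.
    % All values are placeholders chosen for illustration only (not final design values);
    % the same set is reused for all three expressions so they can be compared, even though
    % in the concave-convex design f1 would be negative and d + v1 = f2 instead of d = f2 + v1.
    F  = 5.4;     % mm, focal length of the MLA magnifier
    f1 = 0.34;    % mm, focal length of the first MLA
    f2 = 1.027;   % mm, focal length of the second MLA
    p1 = 0.26;    % mm, pitch of the first MLA
    D1 = 0.23;    % mm, microlens aperture diameter, first MLA
    D2 = 0.23;    % mm, microlens aperture diameter, second MLA
    v1 = 0.37;    % mm, assumed image distance of the first MLA
    d  = f2 + v1; % mm, MLA spacing for the convex-convex case (eq. 41)

    den = p1 * (d*F - F*f1 - d*f1);                                  % shared denominator term
    M_pinhole        = D2*F*(F - f1) / den;                          % eq. 36
    M_convex_convex  = F*(D1*(d - v1) + D2*v1)*(F - f1) / (v1*den);  % eq. 44
    M_concave_convex = F*(D1*(d + v1) + D2*v1)*(F - f1) / (v1*den);  % eq. 48
    printf('M: pinhole ~ %.1f, convex-convex ~ %.1f, concave-convex ~ %.1f\n', ...
           M_pinhole, M_convex_convex, M_concave_convex);

For any such placeholder set satisfying F > f1 and d > v1, the two full-aperture bounds exceed the pinhole bound and the concave-convex bound is the largest, consistent with the discussion above.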
A is a function of the microlens Mp2 A θout-max C B Eyebox Beam 1 Beam 2 Beam 3 hin-max Ray propagation x y z (normal to page) Convex MLAs Direction of ray propagation 27  count NT for the upper half of the MLA, which itself is a function of the object height hin; for an object with a maximum height of hin-max, A can be expressed as:  𝐴 = 𝑁𝑇 𝑕𝑖𝑛−𝑚𝑎𝑥  𝑝2. (49) Then the eyebox height B is:  𝐵 = 2 𝑀𝑝2 − 𝐴 = 2   𝑀𝑝2 − 𝑁𝑇 𝑕𝑖𝑛−𝑚𝑎𝑥  𝑝2  . (50) We shall note that if A is larger than Mp2, then the eyebox height B would have a negative value, meaning no eyebox will form. Thus A must be smaller than Mp2. The eyebox length C is found using a trigonometric approach, such that  𝐶 =𝑀𝑝2−𝑁𝑇 𝑕𝑖𝑛 −𝑚𝑎𝑥  𝑝2𝑡𝑎𝑛   𝜃𝑜𝑢𝑡 −𝑚𝑎𝑥  . (51)  Figure 2.9: Eyebox size for a concave-convex MLA magnifier  For the concave-convex system the dimensions of the eyebox can be similarly estimated.  Figure 2.9 shows visualization of the eyebox for a concave-convex system. With collimated beams now directed towards the optical axis due to the inverted exit angle, the overlapped area A  B  -θout-max hin-max x y z (normal to page) B C  Concave MLA Convex MLA Direction of ray propagation 28  takes the shape of a diamond. Since the light beams 1 and 3 cross, the eyebox height B is equal to the light beam height of Mp2. Then, the length of the eyebox C can be found as:   𝐶 = 2 ×𝑁𝑇 𝑕𝑖𝑛 −𝑚𝑎𝑥  𝑝2−𝑀𝑝22𝑡𝑎𝑛   −𝜃𝑜𝑢𝑡 −𝑚𝑎𝑥  . (52) We shall note that the absolute values of B and C of the concave-convex system are much larger than those of the convex-convex system, which indicates that the concave-convex system produces a bigger eyebox.  2.2.3.1 Consideration for Thick Lenses So far the ray analysis of the microlens has been carried out under the assumption that the microlens is infinitesimally thin. The refraction of light rays happens twice through a microlens, once as it enters the entrance surface and again as it departs from the exit surface of the microlens. On a thin microlens, the thickness is negligible and the height of the ray just as it leaves the exit surface of the microlens is assumed to be virtually identical to the height when it enters the first surface, and the refraction is thought of happening only once, which justifies the use of the thin lens equation. With a thick microlens, the vertical (normal to optical axis) propagation of the light ray is no longer negligible, meaning the entrance and the exit heights of the light ray are different. Therefore the focal length of a thick microlens is a combined result of the refraction at both entrance and exit surfaces. However, there exists a principal plane within the microlens at which the light can be approximated as refracting only once through the thick lens. Figure 2.10 is the ray diagram of a pair of thick concave and convex microlenses showing the refraction of light rays through them, and the principal plane of each microlens. The location of the principal planes can be found as a distance from the vertex of the curved surface of each 29  concave and convex microlens as discussed in detail in Appendix C. Once the principal planes are located, the effective focal length (EFL) of the thick microlens can be measured from the principal plane, which would be analogous to the focal length of a thin microlens. Then, we can substitute thin microlenses with thick microlenses by aligning the thick microlens principal planes with the existing planes of the thin microlenses.  Figure 2.10: Ray propagation through a concave and a convex microlens. 
Solid lines are the actual refracted rays, and dashed lines are imaginary rays which do not refract at the MLA surfaces.  2.2.4 Optimization We wish to optimize the microlens parameters so that the optical performance of the MLA magnifier will conform to our requirements in regard to the angular FOV and the eye relief. We will use Recon Instruments Inc.‘s HUD system as a benchmark, thus the performance parameters such as the angular FOV and the eye relief are chosen similar to Recon‘s system for comparison, which has an angular FOV of ~20° and an eye relief of about 20 mm. As well, we wish to optimize the MLA magnifier system to have a minimum thickness for maximum compactness.  Concave MLA Principal plane 1 (H1) Principal plane 2 (H2) Object (display) Convex MLA 30  Equations (17)-(23) span a system of nonlinear equations that can be rearranged to take the system performance parameters F, θout (related to the angular FOV), and hin (related to the object size) as input parameters; the system of equations outputs the remaining MLA parameters which are the focal lengths and the pitches of the MLAs, namely f1, f2, p1, and p2.  2.2.4.1 Design Tradespace Analysis The MLA parameters can be found from a numerical analysis of the system of equations (17)-(23). However, since some of the input parameters of the MLA system are already known, we can simply conduct an analysis of the design tradespace of the MLA system to easily determine the other unknown parameters. A design tradespace is a space spanned by variables in a  multi-variate system that shows a spectrum of possible design options, from which we can find a point of optimal trade-offs between the design variables. A similar approach is introduced in [16] to find an optimal design for a light-field based optics. The design tradespace analysis can be undertaken by first developing a tradespace spanned by F and f1, which identifies all possible design options for the MLA magnifier within the space, and eliminate certain design options that do not meet the necessary optical performance conditions such as the angular FOV. Using eq. 19, 22, and 23, we can express the exit angle θout-max in terms of the microlens focal lengths, such that:  𝜃𝑜𝑢𝑡 −𝑚𝑎𝑥 =𝑓1𝑓2 𝐹−𝑓1 𝑕𝑖𝑛−𝑚𝑎𝑥 . (53) We already know hin-max (half width of the microdisplay that we plan to use) and f2 (we plan to use a commercial MLA as the second MLA layer); the active pixels of the microdisplay has an area 8×7.3 mm2, but we will use a square area of 6×6 mm2 on the display as our object so that hin 31  is identical in both y-axis and z-axis directions. Also, we chose the object to be conservative in size such that the resulting magnified image would not be cropped by the aperture of the MLAs. We now plot the tradespace for the exit angle in relation to F and f1 only, as in Figure 2.11 a).   a)  b)  c)  d)  Figure 2.11: Design tradespaces spanned by the focal length of the first MLA f1, and F, the focal length of the MLA magnifier. a) Tradespace in terms of the exit angle of the collimated beams. b) Tradespace for the exit angle with unattainable design options indicated as an area at the bottom. c) Same tradespace as in b) marked with a line that indicates design options that correspond to the exit angle of -10°. d) Tradespace in terms of the eye relief, with eye relief that corresponds to 20 mm marked in red lines.  The plots in Figure 2.11 are generated using the Octave (Matlab-alike language) script in Appendix A. 
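A condensed version of that tradespace computation is sketched below in Octave; it is not the Appendix A script, only an illustration of the idea. It assumes hin-max = 3 mm (half of the 6 mm × 6 mm object area) and the focal length of the commercial second MLA, f2 = 1.027 mm as listed later in Table 2.2, and it sweeps F and f1 (taken negative here for a concave first MLA) to map the full angular FOV implied by eq. 53.

    % Condensed sketch of the exit-angle tradespace (eq. 53); this is not the Appendix A script.
    h_in_max = 3.0;       % mm, half width of the 6 mm x 6 mm object area on the display
    f2 = 1.027;           % mm, focal length of the commercial second (convex) MLA (Table 2.2)
    [F, f1] = meshgrid(linspace(3, 10, 300), linspace(-1.0, -0.05, 300));  % f1 < 0: concave first MLA
    theta_out_max = f1 ./ (f2 .* (F - f1)) .* h_in_max;   % eq. 53, small-angle exit angle in radians
    FOV_deg = 2 * abs(theta_out_max) * 180 / pi;          % full angular FOV in degrees
    contourf(F, f1, FOV_deg, 20); colorbar; hold on;
    contour(F, f1, FOV_deg, [20 20], 'k--');              % the 20-degree design line of Figure 2.11 c)
    xlabel('F (mm)'); ylabel('f_1 (mm)');

The constant-FOV contour corresponds to the dashed design line in Figure 2.11 c); screening the same grid against the sag limit and the eye-relief requirement discussed next reproduces the remaining panels.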
Some of the design options are not physically attainable due to the fact that a maximum exists for the microlens height (sag), which depends on the diameter and the ROC of Eye relief (mm) f 1 (mm) F (mm) f 1 (mm) F (mm) FOV° (Degrees) Unattainable options F (mm) FOV° (Degrees) f 1 (mm) f 1 (mm) F (mm) FOV° (Degrees) 32  the microlens. Figure 2.12 shows the dependency of the sag on the ROC and the diameter of the microlens. The sag is calculated from the ROC and θ such that:  sag = ROC− ROC × 𝑠𝑖𝑛𝜽. (54) The ROC and the diameter has the following relationship,  ROC × 𝑐𝑜𝑠𝜽 =𝐷2, (55) and using the trigonometric identity between sine and cosine, the sag is calculated in terms of the diameter and the ROC of the microlens as:   𝑠𝑎𝑔 = ROC ×   1 − 1 −𝐷24 ROC  2  . (56) Since the diameter of each microlens can be only as large as the pitch p of the MLA, the maximum sag will be obtained when D is equal to p.  Figure 2.12: Illustration of single microlens and its ROC and θ.  We can then relate the ROC to the focal length of the microlens by using eq. 79 in Appendix C, to associate this sag limitation in the tradespace. From eq. 56, it is apparent that in some instances the sag will, mathematically speaking, have a complex value when D is larger than 2×ROC. These instances represent the physically unattainable design options, and we shall reject ROC p (maximum diameter) sag ROC Microlens Substrate θ 33  such design options. This is reflected in the tradespace, as shown in Figure 2.11 b). We set 20° as the target FOV for the MLA magnifier since we want to compare it to Recon Instruments‘ Snow HUD goggles which have a similar FOV. Figure 2.11 c) shows the tradespace marked with a dashed line which corresponds to all design options that result in a FOV of 20°, and we wish to select a design option on this line. We realize that moving downward on the marked line will result in the FOV being constant and F decreasing, which in turn will make the system more compact.   Figure 2.13: Illustration of the change in eye relief due to the difference in the collimated beam width. θout-max  is identical for both cases.  However, we find that by moving down the marked line, the eye relief will be also affected even if the FOV is kept constant, because changing F affects the beam width which in turn will affect Width of collimated beam 1 Exit pupil Eye Plane of the exit pupil re wmax -θout-max Half of magnifier aperture Exit pupil re wmax -θout-max Width of collimated beam 1 34  re. The change in re introduced by the difference in the width of the collimated beam of light is illustrated in Figure 2.13. We see that re is related to θout-max in such way that    𝑟𝑒 =𝑤𝑚𝑎𝑥𝑡𝑎𝑛 𝜃𝑜𝑢𝑡 −𝑚𝑎𝑥  , (57) where wmax is the distance from the optical axis to the centre height of the collimated beam 1, measured at the plane of the convex MLA. Also, wmax is related to the magnifier aperture and the collimated beam width such that   𝑤𝑚𝑎𝑥 = 𝐻𝑎𝑙𝑓 𝑜𝑓 𝑚𝑎𝑔𝑛𝑖𝑓𝑖𝑒𝑟 𝑎𝑝𝑒𝑟𝑡𝑢𝑟𝑒 −  𝐶𝑜𝑙𝑙𝑖𝑚𝑎𝑡𝑒𝑑  𝑏𝑢𝑛𝑑𝑙𝑒  𝑤𝑖𝑑𝑡 𝑕2 . (58) The tradespace for re with the beam width taken into consideration is plotted in Figure 2.11 d). The eye relief requirement of 20 mm produces an ―envelop‖ bounded by the two solid lines along with a broken line that indicates the design options that result in a FOV of 20°. There exists a point that lies at the closest proximity from the broken line for the FOV and the solid lines that represent the eye relief requirements, where a design option that best satisfies both requirements can be obtained. 
This design option yields MLA parameters whose values are listed in Table 2.2.  Table 2.2: MLA magnifier parameters for the best design option. MLA parameters F f1 f2 p1 p2 Values (mm) 5.4 0.34 1.027 0.26 0.25   2.2.5 Modulation Transfer Function for Resolution Estimation The modulation transfer function (MTF) describes the ability of an optical system to resolve features of an object. The ability is assessed in terms of the contrast and the spatial frequency of 35  the image generated by the optical system. The spatial frequency is measured in cycles/mm or lp (line pairs)/mm and defines the detail density of the object. Contrast is measured as a normalized function between the maximum and minimum luminance of the object details defined such that  𝑐𝑜𝑛𝑡𝑟𝑎𝑠𝑡 =  𝑙𝑚𝑎𝑥 −𝑙𝑚𝑖𝑛𝑙𝑚𝑎𝑥 +𝑙𝑚𝑖𝑛, (59) where lmax and lmin are the maximum and minimum luminance. Optical systems introduce varying degrees of blurring which will degrade the sharpness of the image, due to the unavoidable aberrations. Thus, objects with higher spatial frequency are affected more by the aberrations and less likely to be resolved. Figure 2.14 showcases such event, where the change in contrast of the image between objects with varying spatial frequency is shown.  Figure 2.14: Two stimuli with different spatial frequency having different contrast outcomes through the same optical component [51].  The periodic gratings with black and white line pairs are the objects to the optical system. The images on the right shows the edges of each grating blurred by the optical system.  The grating at the bottom with higher spatial frequency (smaller features) is visibly less distinguishable than the 36  top grating. This suggests that the contrast in general is a function of the spatial frequency of the object. The MTF responses are thus plotted over a range of spatial frequency to show the specific dependency of the contrast on the spatial frequency of an optical system. Figure 2.15 is an example of an MTF response, which is plotted in Zemax using a computer model of the human eye (supplied by Zemax).   Figure 2.15: MTF response of a computer-modelled human eye (inset).  2.2.6 Simulation The MLA magnifier is simulated using the ray-tracing software Zemax (version 12) in order to estimate the resolution of the MLA magnifier from the simulated MTF as well as the approximate size of the eyebox by measuring the exit pupil of the system. The model is constructed based on the optimal microlens parameter values found in Section 2.2.4. The thickness of the second MLA is provided by the microlens vendor. The thickness of the first MLA is assumed to be 1 mm as we think this is a marginal thickness that allows easy handling and fabrication. Three beams of collimated light are shown in Figure 2.16, in different colours, Spatial frequency (cycle/mm) Contrast 37  each represent light coming from a different part of the display; from the centre, 1.5 mm above the centre, and 3 mm above the centre.  Figure 2.16: Simulation of the MLA magnifier with light sources at different heights on the object plane.  The rightmost plane is 20 mm away (same as the eye-relief) from the terminating surface of the MLA, which is the exit pupil plane where we want to place the eye. The MLA magnifier is simulated using a single wavelength light (555 nm). Figure 2.17 shows the close-up view of the simulated microlenses.   Figure 2.17: Close-up view of the two MLA layers  1st Concave MLA 2nd Convex MLA 38  The microlens arrays are generated using the C program in Appendix B. 
We then use a generalized model of an eye provided in Zemax to test the collimation of the light beams. The model of the eye used in the simulation is focused at infinity, thus if the exit rays from the MLA magnifier are indeed collimated, the rays will converge on the retina of the eye model. Figure 2.18 a) shows ray propagation through the MLA magnifier with the eye model.  a)  b)  Figure 2.18: Ray-tracing simulation of the MLA magnifier with a model of the eye. a) Image showing all of the components. b) Image zoomed in on the retina.  Retina Main focus Deficient rays Eye lens Retina Eye model MLA magnifier Display 39  Figure 2.18 b) is a close-up view of the retina, and we can see that the collimated beams focus on the retina as intended. The pupil of the eye model is selected as the aperture stop of the optical system, and the rays that do not pass through the pupil (vignetted rays) are deleted from the ray diagram. The pupil size is 4 mm. We note that there are also deficient rays that deviate from the main focal points. These deficient rays are introduced by lens aberrations and will degrade the image resolution as they increase the spot size of the focal points. Nonetheless, the simulated model represents the best case scenario as we do not take into account the rays that pass through the inter-lens gaps, i.e. the interspace between the microlenses. These rays do not refract correctly and eventually become stray light, which further degrades the image quality. The first concave MLA has a pitch of 0.26 mm and the microlenses have a diameter of 0.23 mm. The microlens diameter is slightly smaller than the pitch to prevent neighboring microlenses from sticking to each other during fabrication. This gives the MLA a fill factor of ~61%, which is the ratio between the area occupied by the microlenses and the total area of the MLA. The second MLA has a fill factor of about ~79%. Thus, both MLAs have a sizable inter-lens gap, and any light ray that passes through the gaps on one or both MLA layers will have negative effects on the image quality. Due to the limitations of using a sequential ray tracing mode in Zemax, the light through the microlenses cannot be analyzed at the same time as the light through the inter-lens gap, because the light coming through the inter-lens gap is outside the specified aperture of the microlenses. We can however block the microlenses and pretend that the inter-lens gap is now a flat aperture to emulatively analyze the effect of the light from the inter-lens gap. The light that passes through the inter-lens gaps on both MLA layers is assumed to act as stray light. The light that passes through the inter-lens gaps on one of the MLA layers but refracted by the microlenses on the other MLA layer is also assumed to act as stray light, but with a lower impact. 40  Figure 2.19 a) shows the light rays passing only through the microlenses reaching the retina, representing the best case. Figure 2.19 b) shows the light rays only from the inter-lens gap reaching the retina, representing the worst case. The inter-lens gaps are approximated by circular openings with the same area as the respective inter-lens gaps on each MLA layer. From Figure 2.19 b) we notice visually that the rays are more erratic 1and do not converge.  a)  b)  Figure 2.19: Ray-tracing simulation of the inter-lens gap. a) Image showing all of the components. b) Image zoomed in on the retina.  The degree of collimation can be quantified by measuring the diameter of the focal spots on the retina. 
This can be justified since the resolution of the MLA magnifier is much lower than the Retina Eye lens Pupil Retina Eye model MLA magnifier Display 41  diffraction limit. Using the spot diagram analysis feature in Zemax, the cross section of the focal spots can be drawn and the diameter of the spots can be estimated.  Figure 2.20 shows the spot diagrams for light coming through the microlenses and the inter-lens gaps. From the spot diagrams, we note that the RMS radius of the inter-lens gap focal spot is about 5 folds larger than the focal spot of the microlenses (79 µm vs. 390 µm). Note that the RMS radius is not a linear scale as it calculates the mean of the radii of all the sample points.    Figure 2.20: The spot diagrams of the focused light rays on the retina, from the microlenses (left), and from the inter-lens gaps (right).   Nonetheless, this indicates that the light from the inter-lens gaps is much less focused; in the actual MLA magnifier, this would be overlaid on the focal spot generated by the microlenses, and the resulting image from the microlenses will be perceived at the same time as the haziness of the light from the inter-lens gaps.  42  2.2.6.1 Simulated Resolution (With Microlens Arrays) We can evaluate the resolution of our MLA magnifier by inspecting the MTF response of the MLA magnifier model. We use the ‗geometric MTF analysis‘ function in Zemax as the resolving power of the MLA magnifier is not close to being diffraction limited. Figure 2.21 shows the plotted MTF response, with the inter-lens gaps blocked. The vertical axis (marked as modulation) indicates the contrast, and the horizontal axis is the spatial frequency. The maximum or cut-off spatial frequency is reached when the contrast becomes zero. In Figure 2.21, two MTF response lines are drawn for each of the off-axis light sources at 1.5 mm and 3 mm from the optical axis.  Figure 2.21: Simulated MTF response of the MLA magnifier, with inter-lens gaps blocked. The color-coded lines represent MTF responses of light sources at different locations on the object plane, whose coordinates on the object plane are noted in the labels above the plot. Lines labeled as either T or S respectively represents tangential or sagittal plane MTF responses.  The two responses are marked as either T or S, which respectively represent light passing through the tangential and the sagittal planes of the microlenses that produce dissimilar MTF Spatial frequency (cycle/mm) Contrast 43  responses due to the differences in the optical paths of each plane. The tangential and the sagittal planes of a microlens are illustrated in Figure 2.22.  Figure 2.22: The tangential and the sagittal planes of a lens produce different focal points for off-axis sources.  Figure 2.23 shows the MTF response of the light passing through the inter-lens gaps. The same sources are used at 0 mm, 1.5 mm, and 3 mm from the optical axis. We can see that the contrast for all of the light sources falls to zero at much lower spatial frequency between 1 and 1.6 cycles/mm. Note that the contrast rebounds after it first drops to zero, and the region after which this occurs is referred to as the spurious resolution, as indicated in Figure 2.23. Phase shift in wavefronts of the light originating from the object leads to the occurrence of spurious resolution, and it is not representative of the true resolution of the optical system [52-54]. 
Since the actual MLAs permit light transmission through both microlenses and the inter-lens gaps, the MTF response of the actual MLAs would be similar to the superpositioned responses of the two separate cases, because in reality both the microlenses and the inter-lens gaps pass light through at the same time. Sagittal plane Tangential plane Off-axis source Tangential focal plane Sagittal focal plane 44   Figure 2.23: Simulated MTF response of light coming from inter-lens gaps, with microlenses blocked. Although the same light sources are used as in Figure 2.22, the contrast reaches zero at much lower spatial frequency. Lines labeled as either T or S respectively represents tangential or sagittal plane MTF responses. The spatial frequency axis is half the scale of that in Figure 2.21.  Then, we can speculate that the combined image quality would be closer to the worse of the two responses, i.e. the stray light from the inter-lens gap would introduce blurriness and degrade the contrast of the microlenses. Using Zemax, we can also simulate what the displayed image would look like through the magnifier, as shown in Figure 2.24. We note that the high spatial frequency features such as the cheek folds on the baby‘s face are not resolved by the magnifier, but most of the facial parts can be identified.   Spatial frequency (cycle/mm) Contrast Spurious resolution 45    Figure 2.24 Image simulation using Zemax. The input image (one of the Zemax-supplied test images) used is shown (top). What it would look like through the MLA magnifier (bottom).   46  Chapter 3: Prototyping A prototype is made for the purpose of demonstrating the concept of the MLA magnifier and to compare the real-world optical parameters to those of the theoretical model.  3.1 Fabrication of the Microlens Arrays One of the MLAs of the MLA magnifier needs to be an array of concave microlenses. At the time the prototype was conceived, concave MLAs were not readily available to be purchased off-the-shelf, thus the concave MLA is made in-house.  3.1.1 Fabrication Techniques The concave MLAs are fabricated using microfabrication techniques. The three major techniques employed are photolithography, photoresist reflow, and polymer casting. We have looked into the viability of several other candidate techniques for producing the microlenses, such as embossing [27-29], ablation-etching [30-32], proximity printing [33], reflow-casting [34-37], as well as several other avenues [38-42]. We chose the photolithography and reflow-casting technique mainly because of its straight-forward implementation resulting from the well-studied process parameters, simplicity in modeling, good repeatability, and the prevalence of the materials used.  3.1.2 Process Overview Figure 3.1 shows the process steps and the cross-sections of the substrate after each process step. As can be seen, the MLA is fabricated in four steps, namely the photolithography, reflow, casting, and the curing/separation stages. These steps are discussed further in detail below. 47   Figure 3.1: Description of each process stages  3.1.2.1 Photolithography Photolithography is the imprinting of 3-dimensional patterns on a substrate (silicon wafers are a common choice) by first coating a photosensitive polymeric material (photoresist) on the substrate and selectively exposing the photosensitive material to light. The wavelength(s) of the light should be within the range of wavelengths that the photoresist is sensitive to, typically in the ultraviolet range. 
Photoresists come in two tones, either positive or negative, depending on their behavior upon exposure to light. Different chemical compositions make them react differently; positive photoresists, upon exposure, undergo a photochemical reaction through which the bonding of polymer is broken, making the exposed parts become more soluble in the developer. The polymer compounds in negative photoresists are ultimately crosslinked upon exposure, making the exposed parts become less soluble.  A mask is placed between the light source and the layer of photoresist, which can block the light from the source in desired areas to selectively expose the photoresist. The patterns on the mask practically control the shape that gets transferred onto the photoresist. Oftentimes considerations are made in regards to the resolution of the mask, photoresist thickness, and the gap between    Post-photolithography Post-reflow Polymer casting Cured polymer separated from the mold 48  mask and the photoresist which can all affect the fidelity of the pattern shapes transferred onto the photoresist. The thickness of the photoresist is controlled by adjusting the spin speed of the spin coater. Figure 3.2 shows sections of the mask design used to make the MLAs.  a)  b)  Figure 3.2: Sections of the MLA mask design. a) A complete array of 70x70 microlenses. The arrows indicate location of the thickness measurements after the reflow. b) Zoomed into one corner of the array. The red line shows the scanning path of the profilometer for measuring the height of the photoresist structure after reflow.   The mask design shown in Figure 3.2 is printed onto a clear transparency which becomes a light field mask, where dark areas block light, keeping the photoresist underneath unexposed. The dark circles in the MLA mask design are 230 μm in diameter and have a pitch of 260 μm, as determined in Section 2. In our making of the MLA, we use a 4-inch silicon wafer as substrate and SPR-220 7.0 positive tone photoresist manufactured by the Dow Chemical Company based in Marlborough, MA, as it is capable of being deposited in a thick layer (upwards of 30 μm according to the data sheet). After the exposure of the substrate with the transparency mask placed on top, we dissolve away the exposed photoresist with MF-26A positive tone developer, also manufactured by the  49  Dow Chemical Company. After the development, cylindrical islands of photoresist are formed, as shown in Figure 3.3 a).  a)  b)  Figure 3.3: Photo of the photoresist islands before and after the reflow. a) The islands have a flat plateau before the reflow. b) After the reflow, the surface profile of the islands is round.  3.1.2.2 Reflow In the reflow step, the processed wafer is put on a hot plate for 30 seconds, with its temperature set above the melting point of the photoresist (130°C). The photoresist cylinders are then reflowed; the top of the cylinders are rounded by the heat, forming domes. A thin layer of an adhesion promoter, HMDS, which had been spin-coated on the wafer prior to photoresist deposition to ensure that the base of the islands does not spread during reflow. The surface contour of the domes is spherical, because the contour is shaped primarily by the surface tension acting on the molten photoresist. These domes are oftentimes referred to as spherical caps by others. Figure 3.3 b) shows pictures of the photoresist islands after the reflow, taken with a digital camera attached on a compound microscope at 20× magnification. 
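As a quick cross-check, the mask geometry above already fixes the fill factor quoted for the first MLA in Section 2.2.6, since the areal coverage of circular lenses on a square grid depends only on the circle diameter and the pitch:

    % Fill factor of circular microlenses on a square grid (mask values from Figure 3.2)
    D = 0.230;  p = 0.260;              % mm, dark-circle diameter and MLA pitch
    fill_factor = pi * (D/2)^2 / p^2    % ~0.61, i.e. the ~61% quoted in Section 2.2.6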
Figure 3.4 depicts a model of a single photoresist island before and after the reflow step, and shows the geometric parameters of each island, where Rs is the ROC, ts is the height of the spherical cap, r is the base 50  radius of the cylinder, and tp is the height of the cylinder, which is the same as the deposited photoresist thickness. If we assume that the footing (the base diameter) of the cylindrical islands stays constant and the photoresist does not lose volume significantly (i.e. due to evaporation) during the reflow, we can make a simple correlation between the geometric parameters between cylinders and spherical caps such that  𝑣𝑜𝑙𝑢𝑚𝑒 𝑜𝑓 𝑐𝑦𝑙𝑖𝑛𝑑𝑒𝑟 = 𝑣𝑜𝑙𝑢𝑚𝑒 𝑜𝑓 𝑠𝑝𝑕𝑒𝑟𝑖𝑐𝑎𝑙 𝑐𝑎𝑝, (60)   Figure 3.4: Parameters of the photoresist cylinder and the spherical cap.   𝜋𝑟2𝑡𝑝 =  𝜋 𝑅𝑠2 − 𝑦2 𝑅𝑠𝑅𝑠−𝑡𝑠𝑑𝑦, (61) and eq. 61 can be rearranged to find the necessary thickness of photoresist deposition tp in terms of Rs and ts, which can be calculated if the diameter of the spherical cap is known. This shows that we can control Rs with the thickness of the photoresist deposition during photolithography Reflow Rs 2r tp ts Photoresist cylinder Spherical cap 51  and the diameter of the islands, which is controlled by the diameter of the dark circles on the mask.  3.1.2.3 Molding and Removal of the Cast MLA The wafer with the islands of spherical caps is used as a mold for casting. PDMS is used as the casting material for it has good mechanical and optical characteristics (e.g. rigidity and transparency). The wafer is prepared for casting by first cleaning the surface with an air gun and aluminum foil ―fences‖ are made around the MLA area on the wafer mold to contain the PDMS resin, as shown in Figure 3.5.    Figure 3.5: The silicon wafer mold after the cast PDMS has been cut and peeled.  The casting of PDMS is performed following the standardized procedure which consists of mixing PDMS, degassing, and curing. The PDMS resin mixed with the curing agent at 5:1 ratio Aluminum foil fence Microlens Array Silicon wafer mold PDMS 52  is used to yield ~1 mm thin MLAs that are mechanically robust enough to be handled. After curing for 2 hours at 60°C, the hardened PDMS is cut and peeled off from the wafer. Figure 3.6 b) shows the PDMS cast concave microlenses.  a)  b)  Figure 3.6: pictures of the MLA taken with a microscope. a) Perspective view of the array of spherical caps on the wafer. b) Looking down at the cast concave MLA.  3.1.3 Experiments The deposition thickness of the SPR-220 7.0 that we require is close to 31 µm, which is a bit more than the maximum thickness of 30 µm specified in the data sheet. Thus experiments are initiated to determine the correlation between the spin speed of the spin coater and the thickness of the photoresist to achieve the desired thickness at lower spin speed. The thickness of the deposited photoresist as a function of the spin coater speed is measured and plotted in Figure 3.7. The thickness is measured with an Alpha Step 200 profilometer, and measurements are taken at different points of the wafer and the measured thicknesses are averaged. It is found from experiments that the sidewall of the photoresist islands after development is not completely vertical, but sloped, and the photoresist islands are not perfectly cylindrical. This is due to 1 mm 1 mm 53  process shortcomings such as light diffraction at the edges of the circular openings on the mask affecting the exposure profile in the photoresist.   Figure 3.7: The photoresist thickness vs. the spin coater speed in RPM.  
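The required deposition thickness can also be estimated directly from the volume-conservation argument of eqs. 60–61. The Octave sketch below uses the closed-form volume of a spherical cap together with the idealized cylinder geometry; the ROC and base diameter are the design targets quoted in the following paragraphs, so the number it returns is the idealized estimate that the sloped-sidewall correction discussed next then refines.

    % Sketch: photoresist deposition thickness t_p from volume conservation (eqs. 60-61),
    % assuming ideal cylindrical islands; ROC and base diameter are the design targets quoted below.
    Rs = 0.134;                            % mm, target ROC of the reflowed spherical cap
    r  = 0.215 / 2;                        % mm, base radius of the island
    ts = Rs - sqrt(Rs^2 - r^2);            % cap height, same geometry as eq. 56
    V_cap = pi * ts^2 * (3*Rs - ts) / 3;   % closed form of the integral in eq. 61
    tp = V_cap / (pi * r^2);               % cylinder height of equal volume
    printf('cap height ts = %.1f um, required thickness tp = %.1f um\n', ts*1e3, tp*1e3);

This idealized estimate comes out near 29 µm; accounting for the sloped sidewall of the real islands raises the target to the 30.3 µm used for the spin-coating experiments described below.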
Also, during the photoresist development, the top of the photoresist is subjected to the eroding of the developer for a longer amount of time than the bottom of the photoresist. Thus the top of the cylinder sidewall will recess more towards the centre of the cylinder than the bottom, contributing to the sloping of the sidewall and the resulting shape of the structure becomes more like a circular mound with a flat top, with rounded top edges. We estimate the degree of inclination of the sidewall by first taking a picture of the microlenses, and then by measuring the diameter of the top and the base of the mound using CAD software (Solidworks 2010). Figure 3.8 shows the measurements. The measurement unit of the circle diameters of the mounds in Figure 3.8 is arbitrary and not reflective of the actual diameters of the structure.  2223242526272829303132440 460 480 500 520 540Photoresist Thickness (μm)Spin Speed (RPM)54   Figure 3.8: The top and bottom diameters of the mound, as measured in Solidworks.  However, the actual slope of the sidewall can be calculated from the relative ratio between the base and the top diameters, since we know the height of each mound. The slope of the sidewall of the microlenses in Figure 3.8 is calculated to be ~84°. From experiments, the slope of the sidewall is determined to be almost constant for mound heights of 25.5, 28, and 29.5 µm. Therefore we assume that the sidewall slope is constant over a range of mound heights in the vicinity of those numbers, and we can represent the volume of the mound as a function of its height tp and the base diameter r such that  Mound volume =  𝜋𝑡𝑝0 𝑟 −𝑦𝑡𝑎𝑛  84°  2𝑑𝑦 = 𝜋  𝑟2𝑡𝑝 −𝑟𝑡𝑝2𝑡𝑎𝑛  84° +𝑡𝑝33× 𝑡𝑎𝑛  84°  . (62) tp r The height of the mound drawn is not to scale 55  The above equation can be rearranged for tp, the mound height. The ROC of the spherical cap R that we require is 0.134 mm, which translates to a concave microlens focal length f1 of  -0.335 mm for a microlens diameter r of 0.215 mm. Assuming the spherical cap volume is equal to the mound volume, the mound height tp would need to be 30.3 μm. The spin coater speed that results in a photoresist thickness of 30.3 μm is found from experiments to be 455 RPM. After reflow, the height of the spherical caps is verified with a mechanical profilometer (DEKTAK 150). It is difficult to perform the profile measurement exactly at the vertices of the spherical caps. The spherical caps are therefore scanned several times close to their vertices and the maximum value is assumed to be a good approximation of their height. To find the variance in the height of the photoresist, the height is measured at the four corners of the spherical cap array, the locations of which are indicated in Figure 3.2 a). Again due to the difficulty in measuring exactly at the vertices of the spherical caps, the array perimeter is chosen instead for measurement, as shown in Figure 3.2 b). Assuming the variance of the diameter between the spherical caps is negligible and the thickness varies linearly between the corners, the maximum height variance of the photoresist over the area occupied by the spherical caps is estimated to be ±1.4% from the center of the array area.  3.2 Making of the Prototype The fabricated concave MLA is paired with the commercial convex MLA. The MLAs are held together by a 3D-printed frame. Due to the absence of measures to precisely control the positions of the MLAs relative to each other, the translational and rotational alignments are performed manually. 
The rotational alignment is much more important than the translational alignment because any rotational misalignment would affect the pitch of the microlenses relative to the  56  y and z-axis of the MLA plane and change the focal length of the MLA magnifier. Even if the rotational alignment is performed manually, we are able to achieve a good alignment; the rotational alignment error is easily detected because any rotational misalignment between the MLAs forms Moiré pattern-like images of the microlenses, as shown in Figure 3.9 b) to e).   Figure 3.9: Pictures of the 3D-printed frame and the Moiré patterns caused by the rotational misalignment. a) 3D-printed frame and the convex MLA. b) – d) Progression of the rotational misalignment of the convex MLA in the counter-clockwise direction, from the most to the least misaligned. e) The convex MLA is now misaligned in the clockwise direction. The orientation of the Moiré patterns is now reversed. Thus the MLAs are thought to be in alignment when the Moiré pattern orientation is on the verge of being reversed.  a) b) c) d) e) 3D-printed frame fitted to a circular lens adopter Moiré patterns 57  Once the MLAs are aligned beyond a reasonable satisfaction (i.e. when there is the least amount of the Moiré patterns), they are fixed in place by carefully placing a transparent adhesive tape to the flat side of the concave MLA and the frame. The tape used is much thinner than the thickness of the MLAs that it is assumed to add no significant effect to the optical characteristics of the MLAs. 58  Chapter 4: Evaluation of Microlens Array Magnifier The real-world performance of the MLA magnifier in regard to the resolution and the eyebox is compared with the theoretical estimates.   4.1 Experimental Setup In order to accomplish this, a Sony microdisplay that has 1044×768 pixels over an area of 8.1×6.0 mm2, and a Canon S120 camera with 13 megapixel resolution and manually adjustable focus, aperture and exposure are used to test the MLA magnifier. Test objects are displayed on the microdisplay, and pictures of the images seen through the MLA magnifier are taken with the camera. The components are set up on an optical table, as shown in Figure 4.1.   Figure 4.1: The test setup placed on an optical table is shown.  Microdisplay Dovetail Rail MLA magnifier (retracted) Camera x-axis  y-axis  z-axis  Telescopic post 59  The microdisplay, the MLA magnifier, and the camera are placed in series in the x-axis direction. The microdisplay and the MLA magnifier are affixed to telescopic posts to allow translation and rotation in and about the y-axis. The posts are mounted on a dovetail rail, to also allow them to glide in the x-axis direction. The camera is mounted on a sliding plate which can move in the z-axis direction and rotate about the y-axis. Due to the absence of more precise tools such as an XYZ stage, the alignment among the components is manually performed. The positioning accuracy of the measurements is around 1 mm.     Figure 4.2: The microdisplay with a test image of Lenna displayed. a) The test image seen without the MLA magnifier, with camera focused at the display. b) The test image seen without the MLA magnifier, with camera focused at infinity. c) Test image seen through the MLA magnifier. The original test image used is also shown (inset). 
MLA magnifier is in line with the display MLA magnifier is retracted a) b) c) 60  A test image is displayed and seen through the magnifier to verify that the eye can accommodate on the virtual image generated by the magnifier as in Figure 4.2, where pictures of the displayed image with and without the MLA magnifier are shown, taken with the camera. The camera focus is set manually to infinity to mimic the relaxed state of the eye focused at far distances. From visual inspection of the image, we note that the bottom left corner of the image is more stretched than the other corners, somewhat like a pin-cushion distortion. This is believed to be due to the combination of the less-than-ideal test conditions such as the fabricated concave MLA being not perfectly flat (the concave MLA is thicker on one side by about 100 µm than the opposite side), and also the microdisplay, the MLAs, and the camera not being perfectly aligned in the z and y-axis directions. This will be characterized in depth in future studies once the ability to precisely control the alignment is obtained.  4.2 Measurement of the Resolution The MTF response of the MLA magnifier can be measured by using multiple test patterns consisting of black and white line pairs with different spatial frequency, and quantifying the composition of the line pairs in the resulting images. The black and white line pairs are displayed on a microdisplay with a known pixel pitch, thus the width of each line can be represented in the corresponding number of pixels. Figures 4.3 to 4.6 show the test patterns used with different spatial frequency and the resulting images.    61  Line Width in # of pixels Test Pattern Image Seen through the MLA Magnifier 160   40   Figure 4.3: 0.4 and 1.6 cycle/mm test patterns and their magnified images.  Line Width in # of pixels Test Pattern Image Seen through the MLA Magnifier 12   12   Figure 4.4: 6.4 cycle/mm test patterns with both vertical and horizontal lines and their magnified images. 62  Line Width in # of pixels Test Pattern Image Seen through the MLA Magnifier 10   10   Figure 4.5: 8.0 cycle/mm test patterns with both vertical and horizontal lines and their magnified images.  Line Width in # of pixels Test Pattern Image Seen through the MLA Magnifier 5   5   Figure 4.6: 12.8 cycle/mm test patterns with both vertical and horizontal lines and their magnified images. 63  The pictures of the test images seen through the magnifier are taken with a Canon S120 camera which has an angular FOV of 72.3° horizontally and 57.4° vertically, at its lowest focal length setting of 5.2 mm. The FOV of a camera is the size of the recorded images in terms of the angle subtended from the lens aperture, as in Figure 4.7.  Figure 4.7: Portrayal of the angular FOV (FOV°) of a camera and its related parameters. The object is not necessarily at infinity.  The angular field of view  FOV° = 2 × tan−1  Image  sensor  size2× Camera  focal  length  (63) is calculated based on the camera specifications provided on the Canon website [44]. Also the camera has an image sensor with 4000×3000 photodiodes, meaning the pictures it takes are made up of 4000×3000 pixels, evenly spread across its FOV.  We want to know whether the resolving power of the camera is greater than that of the MLA magnifier, so that the camera does not become a limiting factor in the evaluation of the resolution of the MLA magnifier. 
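Before comparing the two, it is worth noting where the spatial-frequency labels of the test patterns come from: they follow directly from the microdisplay pixel pitch (8.1 mm across 1044 pixels, roughly 7.8 µm per pixel), since one cycle is one black plus one white line. A short Octave check of the conversion:

    % Spatial frequency of each test pattern from its line width in display pixels
    pixel_pitch = 8.1 / 1044;                       % mm per pixel (horizontal direction)
    line_width_px = [160 40 20 15 12 10 8 5];       % line widths used in Figures 4.3-4.6
    freq = 1 ./ (2 * line_width_px * pixel_pitch)   % cycles/mm; reproduces 0.4 ... 12.8 (to rounding)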
Since the camera is focused at infinity, we cannot represent the Focal length of camera Image sensor Optical axis Principal rays from the extreme points of the object Object Camera lens aperture approximated as a pinhole FOV° 64  spatial resolution of the camera in terms of cycles/mm, however we can represent the resolving power of the camera in terms of the angular resolution using a unit of arcmin, which is 1/60 of a degree. With the camera‘s angular field of view and the number of pixels known, the angular resolution of the photodiodes is calculated as:  Angular resolution of a photodiode =  tan−1  tan  horizontal  FOV°2 half  # of  horizontal  photodiodes ×60 arcmin1 degree   = tan−1  tan  72.3°2 2000 × 60 = 1.26 arcmin. (64) The same calculation can be arranged to find the angular resolution of the camera in the vertical direction of the image sensor, which is the same 1.26 arcmin. The angular resolution calculated from eq. 64 would be true for photodiodes at the center of the image sensor. Due to the tangential component, the photodiodes far from the center of the image sensor will have decreased angular resolution. We will however assume that all photodiodes have a constant angular resolution, as the size of each photodiode is many orders of magnitude smaller in comparison to the camera focal length that the linear approximation of the tangent is justified. To facilitate the comparison of resolution between the black and white test patterns and the camera, the spatial frequency of the test patterns is converted to angular resolution. In the images taken by the camera, the test patterns take an area of about 540×400 pixels out of the 4000×3000 pixel canvas. We also know the number of lines used in each test pattern, thus we can convert the spatial frequency to angular resolution using:  A. R. of test patterns in camera image =  total  # of  test  pattern  pixels  in  camera  image# of  test  pattern  lines× 1.26 arcmin. (65) This can be performed in either of the horizontal or vertical directions.  65  The converted values for each test pattern in both cycles/mm and angular resolution in the horizontal direction of the test patterns are listed in Table 4.1. As seen in Figure 4.6, the test pattern with the highest spatial frequency of 12.8 cycles/mm  (5 pixel-wide line pairs) fail to be resolved by the MLA magnifier. The angular resolution of this test pattern as seen by the camera is still greater by a factor of 3.4 than the minimum angular resolution of the camera of 1.26 arcmin. Thus, we conclude that the camera would be able to resolve the test pattern, had it been resolved by the MLA magnifier, and would not be a limiting factor.  Table 4.1: Horizontal spatial frequency and converted angular resolution of the test patterns. Test Pattern Line Width (# of pixels) # of Lines in 800x600 Frame of the Test Images Spatial Frequency (cycles/mm) Angular resolution of the test patterns in camera image (arcmin) 160 5 0.4 135.2 40 20 1.6 34.0 20 40 3.2 17.0 15 53.3 4.3 12.8 12 66.7 5.3 10.2 10 80 6.4 8.5 8 100 8.0 6.8 5 160 12.8 4.3  We can now measure the contrast of the test pattern images taken with the camera. The camera records the luminance of a scene as a linear function of the amount of photons it receives. However, when the raw imags taken by the camera is converted as an image format intended for viewing such as JPEG, the linearity in recorded luminance is shifted (gamma-encoding) such that it negates the non-linearity in expressing luminance in display monitors. 
This process of  66  re-linearizing gamma is referred to as gamma correction. Gamma describes the non-linearity in luminance as a numerical value. To find the actual luminance as recorded by the image sensor of the camera, we need to convert the gamma-encoded RGB values back to the linear values, a process referred to as gamma decoding. We note that the RGB values in the camera image are represented in the sRGB space, which is one of the standards for representing colors. As such, we will use sRGB-specific decoding initiatives [43]. First, each of the RGB values of each pixel as stored in the image needs to be decoded. Each of the RGB values is stored as an 8-bit value, and we can normalize the RGB values by dividing them with 255. We denote the normalized RGB values as R8bit-norm, G8bit-norm, and B8bit-norm, and if they are greater than 0.03928 (which corresponds to 10/255), the gamma-decoded RGB values in the sRGB space RsRGB, GsRGB, and BsRGB can be calculated as:   𝑅𝑠𝑅𝐺𝐵 ,𝐺𝑠𝑅𝐺𝐵 ,𝐵𝑠𝑅𝐺𝐵  =   𝑅8𝑏𝑖𝑡 −𝑛𝑜𝑟𝑚 , 𝐺8𝑏𝑖𝑡 −𝑛𝑜𝑟𝑚 , 𝐵8𝑏𝑖𝑡 −𝑛𝑜𝑟𝑚  +0.0551.055 2.4. (66) Eq. 66 is used to decode RGB values encoded with a gamma of 1/2.2, which is typical in sRGB space. If the R8bit-norm, G8bit-norm, and B8bit-norm values are less than or equal to 0.03928, then RsRGB, GsRGB, and BsRGB becomes   𝑅𝑠𝑅𝐺𝐵 ,𝐺𝑠𝑅𝐺𝐵 ,𝐵𝑠𝑅𝐺𝐵  =  𝑅8𝑏𝑖𝑡 ,𝐺8𝑏𝑖𝑡 ,𝐵8𝑏𝑖𝑡  ÷ 12.92. (67) The use of the conditional conversion is to more closely fit the linear luminance response. With RsRGB, GsRGB, and BsRGB values calculated, we can now convert them into the relative luminance value. The contribution of each of the RGB colors to total luminance depends on the sensitiveness of the light sensing body to the intensity of light as a function of wavelength (described as the luminosity function). The sRGB color space uses the luminosity function of the human eye to describe the relative luminance. Since each of the RGB contributes to luminance 67  differently, a weighted equation for relative luminance such as  Relative luminance = 0.2126𝑅𝑠𝑅𝐺𝐵 + 0.7152𝐺𝑠𝑅𝐺𝐵 + 0.0722𝐵𝑠𝑅𝐺𝐵  (68) is used to convert the RGB values into a relative luminance value [43]. We are interested in knowing the maximum resolution of the MLA magnifier, thus the center of the test pattern is aligned with the center of the camera image. Then the RsRGB, GsRGB, and BsRGB values are obtained as close from the centre of the camera image as possible to minimize the chance of our resolution measurements being affected by the lens aberrations. The relative luminance values of RGB values sampled at a neighboring pair of black and white lines are calculated using eq. 68, and the contrast between them is calculated using eq. 59. The resulting contrast for each test pattern seen through the magnifier is plotted in Figure 4.8, along with the simulated contrast of a point source on the optical axis (corresponding to the lines tagged as 0.0000, 0.0000 mm in Figures 2.21 and 2.23). From the plotted MTF response, we see that the contrast of the MLA magnifier as measured from the test patterns drops to nearly zero at 8 cycles/mm, which is similar to the simulated contrast response. We note that the contrast of the horizontal test patterns (representing the resolution in y-axis direction) is lower than the vertical test patterns (representing the resolution in z-axis direction), even though the pitch of the microlenses for both MLAs is identical in both y and z-axis directions. 
This is the effect of astigmatism, which could have resulted from the greater degree of misalignment among the test components in the y-direction than the z-axis direction. We also observe that the measured contrast over the entire range of spatial frequency is closer to the MTF response of the light coming from the inter-lens gaps. This is expected from the simulation as it is seen that the light from the inter-lens gaps generates much bigger non-converging focal spots than those generated by the microlenses, indicating a possible blurriness. 68   Figure 4.8: MTF plot of the MLA magnifier from both simulation and measured contrast.  We can visually confirm that when looking into the MLA magnifier, there is a perceivable degree of hazing that smoothens the contrast variations of the test patterns over the entire area of the image. It will be verified in future studies with another MLA with the inter-lens gaps blocked or darkened, that whether the haziness will disappear if the light transmission through the inter-lens gaps is prohibited.  4.3 Measurement of the Angular FOV We want to measure the angular FOV of the image from the MLA magnifier to compare with the theoretical design. The angular FOV as we designed is 20° at 20 mm away. Thus, in order to measure the FOV of the magnified image, the aperture of the camera lens (which corresponds to the pupil of the eye) needs to be placed at exactly 20 mm from the MLA magnifier.  00.10.20.30.40.50.60.70.80.910 2 4 6 8 10 12 14Normalized ContrastSpatial Frequency (cycles/mm)Vertical Test PatternsHorizontal Test PatternsSimulated MTF (Best Case)Simulated MTF (Worst Case)69   Figure 4.9: The microdisplay size and the total image size in pixels (top). The angular FOVs of the microdisplay and the entire image are depicted, as well as the distance to the camera aperture da (bottom).  However, it is unknown to us where the lens aperture resides in the camera housing. As an alternative, we can estimate the location of the camera aperture from the information we already have at our disposal, such as the angular FOV of the camera images and the dimensions of the microdisplay. The Canon S120 camera takes pictures with a horizontal angular FOV of 72.3° and 57.4° vertically. Figure 4.9 shows the microdisplay size in pixels, as well as the angular FOVs of the microdisplay and the camera image, at the camera aperture. Since we know the relative size of the camera image and the microdisplay, the vertical and the horizontal angular 4000 Pixels 3000 Pixels 957 Pixels 833 Pixels 72.4° 56.4° Horizontal FOV° of the microdisplay  Vertical FOV° of the microdisplay da Camera image Area of the microdisplay in the image 70  FOVs (FOV°s) subtended from the camera aperture to the microdisplay can be simply calculated using trigonometry such as:  Horizontal FOV° = 2 × tan−1  width  of  microdisplay  in  imageWidth  of  camera  image× tan  Camera  horizontal  FOV°2     = 2 × tan−1  957 Pixels4000 Pixels× tan  72.3°2  = 19.9°, (69)  Vertical FOV° = 2 × tan−1  Height  of  microdisplay  in  imageHeight  of  camera  image× tan  Camera  ve rtical  FOV°2     = 2 × tan−1  833 Pixels3000 Pixels× tan  57.4°2  = 17.3°. (70) Also, we know the actual dimensions of the display (15.7 mm × 13.6 mm including the frame), the distance to camera aperture da can be calculated from either the horizontal or the vertical angular FOV such that  𝑑𝑎 =15.7 mm2× tan  19.9°2 = 44.8 mm. 
With the distance to the camera aperture now known, we can calculate the angular FOV much in the same way as above. Figure 4.10 shows the size of the magnified image in number of pixels. From the MLA magnifier, the distance to the camera aperture is 35 mm, because the gap between the microdisplay and the exit surface of the MLA magnifier is about 10 mm. Following the same procedure as in eq. 69 and eq. 70, the horizontal and the vertical angular FOVs of the magnified image of Lenna are both calculated to be 10.8°. Therefore, the area of the magnified image as perceived by the camera is 8.5 mm × 8.5 mm because

\[ \text{Magnified image width and height} = 2 \times 44.8\ \text{mm} \times \tan\frac{10.8°}{2} = 8.5\ \text{mm}. \]  (72)

With the size of the magnified image estimated, we can now calculate the angular FOV of the image at 20 mm from the MLA magnifier such that

\[ \text{Angular FOV of the magnified image} = 2\tan^{-1}\!\left(\frac{8.5\ \text{mm}}{2 \times 20\ \text{mm}}\right) = 24°, \]  (73)

which is larger than the designed angular FOV of 20°. We note from eq. 22 that the maximum exit angle θout-max (half of the angular FOV) is dependent on the microlens parameters d, f1, and f2, and is also linearly dependent on the object size (twice hin-max). Therefore, we suspect that the concave MLA being thicker than designed, and/or the focal length of the concave microlenses f1 being shorter, could make the fractional coefficient term of eq. 22 larger, thereby making the angular FOV larger.

Figure 4.10: The size of the entire picture (4000 × 3000 pixels) and the magnified image of Lenna (519 × 517 pixels).

Note that the image of Lenna displayed on the microdisplay has a size of 768 × 768 pixels, which corresponds to an area of 6.0 × 6.0 mm² on the microdisplay, the same size as the object used in the simulation.
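Continuing the sketch above, eqs. 72 and 73 amount to two more lines of Octave (illustrative only, using the values quoted in the text).

fov_img = 10.8;                                 % measured FOV of the magnified image, degrees
img_size = 2 * 44.8 * tand(fov_img/2);          % eq. 72, ~8.5 mm
fov_at_20mm = 2 * atand(img_size / (2 * 20));   % eq. 73, ~24 degrees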
4.4 Eyebox

The size of the eyebox is measured using the test setup in Figure 4.1. Another view of the test setup is shown in Figure 4.11. The microdisplay, the MLA magnifier, and the camera are first aligned with respect to each other, and then only the camera is allowed to move in the y-axis and the z-axis directions.

Figure 4.11: A view of the measurement setup from the rear, showing the microdisplay, the MLA magnifier, the camera mount support, the y-axis and z-axis translation stages, and a marker for measuring the distance moved.

The eyebox is defined as a volume within which the eye is able to see the entire virtual image without it being cropped; therefore, we can detect the boundaries of the eyebox when the edges of the virtual image start to be clipped. By measuring the distance the eye moves from the position where the clipping occurs to the opposite side (along the same axis) until the image on that side starts to be clipped, we can estimate the eyebox size along that axis. The aperture diameter of the eye needs to be taken into account in measuring the eyebox size because, when the image is at the verge of being clipped, most of the eye is actually outside of the eyebox. Figure 4.12 depicts this circumstance.

Figure 4.12: The eye location at the extremes of the eyebox, showing the eyebox, the collimated rays from the MLA magnifier, the eye aperture (pupil), the distance of the eye movement, and the eyebox size.

Taking into account the eye diameter, the size of the eyebox as measured from the distance moved by the eye is simply

\[ \text{Eyebox size} = \text{Distance moved by eye} - \text{Eye aperture}. \]  (74)

A square frame is displayed as a test object, and a camera is used to take pictures in place of the eye. The maximum size of the eyebox (at the exit pupil of the MLA magnifier) as we estimated in Section 2.2.3 is identical to the beam width of the collimated rays, which is assumed to be constant regardless of the object height, and consequently of the size of the square frame. The F-number of the camera is

\[ F\text{-number} = \frac{\text{focal length of the lens}}{\text{diameter of the lens aperture}} = 1.8, \]  (75)

and since the focal length is 5.2 mm, the aperture diameter of the camera is found to be 2.89 mm. To prevent chromatic aberration from possibly introducing a distortion in the frame size, a single color, green, is used for the frame (pixels on the display have R, G, and B values of 0, 255, and 0). The frame is 3 × 3 mm² in size, which corresponds to 385 × 385 pixels on the microdisplay. The camera is moved in the y-axis and z-axis directions until the frame edge on the side of the movement disappears from the view, which indicates that the camera aperture is completely outside the eyebox. The magnified images of the test frame at each boundary of the eyebox are shown in Figure 4.13.

Figure 4.13: Images of the displayed green frame with the camera at the center of the eyebox and with the camera moved to the extremes of the eyebox; light from the inter-lens gaps is also visible.

The eyebox size as estimated from theory, simulation, and the measurements is tabulated in Table 4.2. For the measurements, the position of the camera aperture in the camera housing is estimated by looking through the lens, such that we can position the camera aperture ~20 mm away from the MLA magnifier.

Table 4.2: The size of the eyebox from theory, simulation, and measurements.

  Theoretical:    10.0 mm (at 20 mm from the MLA magnifier)
  Simulation:     6 mm (measured at the exit pupil of the MLA magnifier, ~20 mm from the MLA magnifier)
  Measurements:   6.6 mm (horizontal), 5.6 mm (vertical) (position of the camera aperture close to 20 mm)

The theoretical eyebox size as predicted in Section 2.2.3 results from ideal conditions, with lens refraction modeled under the small-angle and thin-lens approximations and assumed free of Seidel aberrations, especially field curvature. However, in reality, the light rays launched from off-axis sources with larger input angles focus short of the ideal image plane, thereby bending the image plane as shown in Figure 4.14.

Figure 4.14: Illustration of the field curvature, showing the lens, the object plane, the ideal image plane, and the actual curved image plane.

The implication of this for the MLA magnifier is as follows: the curved image plane of the first, concave MLA starts to deviate from the focal plane of the second, convex MLA as the input angle of the light rays increases. This deviation would cause the microlenses further away from the light source to stop collimating light, thereby reducing M, the total number of microlenses that take part in propagation of the collimated beam. Thus, in the simulation, which takes the aberrations into account, the size of the simulated eyebox is less than the theoretical size. The measured eyebox size is similar to the simulated size, but the horizontal and vertical sizes differ. We suspect that this is due to misalignment among the test components, as well as to measurement inaccuracy resulting from the inability to precisely maneuver the translational stages of the test setup.
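As a quick numerical illustration of eqs. 74 and 75 in Octave (the camera travel below is a placeholder value, not one of the measurements):

f_lens   = 5.2;                       % camera focal length, mm
f_number = 1.8;
aperture = f_lens / f_number;         % eq. 75: aperture diameter, ~2.89 mm

travel = 9.5;                         % hypothetical distance moved by the camera, mm
eyebox = travel - aperture;           % eq. 74: eyebox size along that axis, mm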
Chapter 5: Conclusions and Future Work

5.1 Concluding Remarks

MLAs have been widely used in the design of optical systems. The 2-D lateral arrangement of microlenses gives MLAs unique optical properties that can be utilized in many different applications; these include concentrating light onto the photodiodes of imaging sensors and onto photovoltaic cells [45, 46], thereby increasing the efficiency and sensitivity of the transducers, as well as use in stereoscopic vision systems [47], as an I/O coupler in optical networks [48], and as a component in wavefront sensors [49, 50]. Indeed, a single MLA can be very versatile; nevertheless, when used in multiple layers, MLAs gain even more unusual properties that no other refractive optic has, such as a very low F-number and the ability to be made very slim, with a thickness that is almost invariant to the focal length. By capitalizing on these unusual properties, the use of MLAs as a magnifier in NED optics has been studied. NEDs need to be compact for reasons of form factor and weight. As well, the virtual image of the display should be projected far away so as to minimize the disparity in eye accommodation between the background scenery and the virtual image (with the exception of VR displays that cover the entire FOV of the eye). This requires the near-eye display optics to be compact and able to collimate the light rays from the display or another form of image source, while providing a reasonable FOV.

We have demonstrated that the MLA can indeed be used as a magnifier in an NED with a usable FOV, eye relief, and eyebox. We first theoretically modelled an MLA magnifier with a 20° angular FOV and 20 mm eye relief, benchmarked against Recon's Snow HUD goggles. Based on the theory, we built a prototype of the MLA magnifier using microfabrication processes, and the prototype is shown to have achieved an optical performance close to the theoretical figures, with a 24° angular FOV at 20 mm and an eyebox size similar to the size estimated from the ray-tracing simulation. Also, by using the MLA magnifier, it is shown that objects very close to the eye can be brought into focus while maintaining a compact profile of ~10 mm, measured from the display to the exit surface of the convex MLA and including the thickness of the microdisplay used. This shows that the MLA magnifier can indeed be used as a collimator in NEDs, although the current resolution of the prototype MLA magnifier leaves much to be desired.

5.2 Future Work

The primary objectives of the research so far have been to prove the concept of the MLA-based magnifying lens and to make it as compact as possible. Thus the optimization of the optical performance has been set aside, and the design of the MLA magnifier has also been limited by the lack of alternatives in choosing the second MLA layer. Therefore, in the future, we wish to gain the ability to make MLAs with longer focal lengths in-house, to use as the second MLA layer. We will investigate other prospective microfabrication techniques, such as the imprinting of shallow features using diffraction of light [33] and grey-scale lithography, which could allow us to fabricate microlenses with lower sag height.
The ability to make our own second MLA layer will also allow us to treat the focal length of the second MLA f2 as a variable which will generate a much larger pool of design options, from which we can perhaps find a design optimized for maximum resolution exceeding the requirement for an NED (i.e. the resolution of the eye), by doing a similar tradespace analysis as in Section 2.2.4.1. We also wish to study and improve on the theory of MLA, and characterize the optical properties of the MLA magnifier more in depth with regards to its resolving power, 79  aberration characteristics, and the fill factor of the microlenses. Ultimately, we would like to make the optical performance of the MLA magnifier comparable to other NED optics, and one day incorporate the MLA magnifier in commercial products. 80  Bibliography [1] The Firearm Blog. ―Meprolight TRU-DOT RDS Battery Powered Reflex Sight.‖ Internet: http://www.thefirearmblog.com/blog/2014/07/08/meprolight-tru-dot-rds-battery-powered-reflex-sight, July 8, 2014 [Mar 10, 2015]. [2] The Firearm Blog. ―Meprolight M5 Reflex Sight – SHOT Show Optic Preview.‖ Internet: http://www.thefirearmblog.com/blog/2014/01/29/meprolight-m5-reflex-sight-shot-show-optic-preview, Jan 29, 2014 [Mar 10, 2015]. [3] Jared Trimarchi. ―A view of the flight line can be seen from a U.S. Air Force C-17 Globemaster III aircraft assigned to the 816th Expeditionary Airlift Squadron (EAS) at Al Udeid Air Base, Qatar, Jan. 9, 2014.‖ Internet: http://www.defenseimagery.mil/imageRetrieve.action?guid=4e697ed5c27a948210215724c4cc6ddf3de4085a&t=2, Jan 9, 2014 [Mar 1, 2015]. As a work of the U.S. federal government, the image is in public domain. [4] A derivative of the original image uploaded by Tlwt, titled ―BMW F11 head up display at night,‖ available at: http://commons.wikimedia.org/wiki/File:BMW_F11_head_up_display_at_night.jpg, used under the Creative Commons BY-SA 3.0 Unported license. This derivative image is relicensed under the Creative Commons BY-NC-SA 3.0 Unported license by Hongbae Sam Park. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/3.0/. [5] Optics.org. ―BAE display heads for commercial aircraft.‖ Internet: http://optics.org/article/36187, Oct 10, 2008 [Nov 15, 2014]. [6] Akihabaranews.com Inc. ―T-OLED will change who you will interact with DATA.‖ Internet: http://en.akihabaranews.com/43794/displays/t-oled-will-change-who-you-will-interact-with-data, Apr 15, 2010 [Nov 14, 2014]. [7] Marines magazine. ―F-35 Lighting II Helmet Mounted Display System.‖ Internet: http://marinesmagazine.dodlive.mil/2010/03/23/f-35-lighting-ii-helmet-mounted-display-system, Mar 30, 2015 [Mar 23, 2010]. As a work of the U.S. federal government, the work is in public domain. 81  [8] H. Lee, Y. Xiong, N. Fang, W. Srituravanich, S. Durant, M. Ambati, C. Sun, and X. Zhang, ―Realization of optical superlens imaging below the diffraction limit,‖ New Journal of Physics, vol. 7, pp. 255–255, Dec. 2005. [9] C. Hembd-Sölner, R. F. Stevens, and M. C. Hutley, ―Imaging properties of the Gabor superlens,‖ Journal of Optics A: Pure and Applied Optics, vol. 1, no. 1, p. 94, 1999. [10] J. W. Duparré and F. C. Wippermann, ―Micro-optical artificial compound eyes,‖ Bioinspiration & Biomimetics, vol. 1, no. 1, pp. R1–R16, Mar. 2006. [11] K. Stollberg, A. Brückner, J. Duparré, P. Dannberg, A. Bräuer, and A. Tünnermann, ―The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects,‖ Optics express, vol. 17, no. 
18, pp. 15747–15759, 2009. [12] J. W. Duparre, P. Schreiber, P. Dannberg, T. Scharf, P. Pelli, R. Voelkel, H.-P. Herzig, and A. Braeuer, ―Artificial compound eyes: different concepts and their application for ultraflat image acquisition sensors,‖ 2004, pp. 89–100. [13] D. Gabor. ―Optical System Composed of Lenticules.‖ U.S. Patent 2 351 034 A, Jun. 13, 1944. [14] D. Gabor. ―System of Projecting Pictures in Stereoscopic Relief.‖ U.S. Patent 2 351 033 A, 1944, Jun. 13, 1944. [15] N. Lindlein, ―Simulation of micro-optical systems including microlens arrays,‖ Journal of Optics A: Pure and Applied Optics, vol. 4, no. 4, p. S1, 2002. [16] D. Lanman and D. Luebke, ―Near-eye light field displays,‖ ACM Transactions on Graphics, vol. 32, no. 6, pp. 1–10, Nov. 2013. [17] Sony Corporation. ―Optical device and image display apparatus.‖ U.S. Patent 7 502 168 B2, Mar 10, 2009. [18] Lumus Ltd. ―Light guide optical device.‖ U.S. Patent 7 457 040 B2, Nov 25, 2008. [19] J. Rolland and O. Cakmakci, ―Head-worn displays: the future through new eyes,‖ Opt. Photon. News, vol. 20, no. 4, pp. 20–27, 2009. [20] O. Cakmakci and J. Rolland, ―Head-Worn Displays: A Review,‖ Journal of Display Technology, vol. 2, no. 3, pp. 199–216, Sep. 2006. [21] Motorola Inc. ―Direct retinal scan display with planar imager.‖ U.S. Patent 5 369 415 A, Nov 29, 1994. 82  [22] V. Shaoulov, R. Martins, and J. P. Rolland, ―Compact microlenslet-array-based magnifier,‖ Optics letters, vol. 29, no. 7, pp. 709–711, 2004. [23] H. Yang, C.-K.Chao, M.-K.Wei, and C.-P. Lin, ―High fill-factor microlens array mold insert fabrication using a thermal reflow process,‖ Journal of Micromechanics and Microengineering, vol. 14, no. 8, pp. 1197–1204, Aug. 2004. [24] M.-H. Wu and G. M. Whitesides, ―Fabrication of two-dimensional arrays of microlenses and their applications in photolithography,‖ Journal of micromechanics and microengineering, vol. 12, no. 6, p. 747, 2002. [25] A. Ingman, ―The Head Up Display Concept,‖ Lund University School of Aviation, Maret, 2005. [26] Optikos Corporation. ―How to measure MTF and other Properties of Lenses.‖ Wakefield, MA. July 16, 1999. [27] C. Y. Chang, S. Y. Yang, and J. L. Sheh, ―A roller embossing process for rapid fabrication of microlens arrays on glass substrates,‖ Microsystem Technologies, vol. 12, no. 8, pp. 754–759, Feb. 2006. [28] J. Schulze, W. Ehrfeld, H. Loewe, A. Michel, and A. Picard, ―Contactless embossing of microlenses: a new technology for manufacturing refractive microlenses,‖ in Lasers and Optics in Manufacturing III, 1997, pp. 89–98. [29]S. Zio, I. Frese, H. Kasprzak, and S. Kufner, ―Contactless embossing of microlenses—a parameter study,‖ Optical Engineering, vol. 42, no. 5, pp. 1451–1455, 2003. [30] Z. Deng, F. Chen, Q. Yang, H. Liu, H. Bian, G. Du, Y. Hu, J. Si, X. Meng, and X. Hou, ―A facile method to fabricate close-packed concave microlens array on cylindrical glass,‖ Journal of Micromechanics and Microengineering, vol. 22, no. 11, p. 115026, Nov. 2012. [31] F. Chen, H. Liu, Q. Yang, X. Wang, C. Hou, H. Bian, W. Liang, J. Si, and X. Hou, ―Maskless fabrication of concave microlens arrays on silica glasses by a femtosecond-laser-enhanced local wet etching method,‖ Optics express, vol. 18, no. 19, pp. 20334–20343, 2010. [32] C. S. Lim, M. H. Hong, A. S. Kumar, M. Rahman, and X. D. Liu, ―Fabrication of concave micro lens array using laser patterning and isotropic etching,‖ International Journal of Machine Tools and Manufacture, vol. 46, no. 5, pp. 552–558, 2006.  83  [33] T.-H. Lin, H. 
Yang, and C.-K. Chao, ―Concave microlens array mold fabrication in photoresist using UV proximity printing,‖ Microsystem Technologies, vol. 13, no. 11–12, pp. 1537–1543, Oct. 2006. [34] C.-P. Lin, H. Yang, and C.-K. Chao, ―Hexagonal microlens array modeling and fabrication using a thermal reflow process,‖ Journal of micromechanics and microengineering, vol. 13, no. 5, p. 775, 2003. [35] H. Yang, C.-K. Chao, M.-K. Wei, and C.-P. Lin, ―High fill-factor microlens array mold insert fabrication using a thermal reflow process,‖ Journal of Micromechanics and Microengineering, vol. 14, no. 8, pp. 1197–1204, Aug. 2004. [36] E. Roy, B. Voisin, J.-F. Gravel, R. Peytavi, D. Boudreau, and T. Veres, ―Microlens array fabrication by enhanced thermal reflow process: Towards efficient collection of fluorescence light from microarrays,‖ Microelectronic Engineering, vol. 86, no. 11, pp. 2255–2261, Nov. 2009. [37] S.-Y. Hung, ―Optimal design using thermal reflow and caulking for fabrication of gapless microlens array mold inserts,‖ Optical Engineering, vol. 46, no. 4, p. 043402, Apr. 2007. [38] G. J. Woodgate and J. Harrold, ―A new architecture for high resolution autostereoscopic 2D/3D displays using free-standing liquid crystal microlenses,‖ SID Int,‖ in Symp. Digest Tech. Papers, 2005, vol. 36, pp. 378–381. [39] Y. Li, X. Yi, and J. Hao, ―Design and fabrication of 128x128 diffractive microlens arrays on Si for PtSi focal plane arrays,‖ in Photonics China‘98, 1998, pp. 132–137. [40] X. Yu, Z. Wang, and Y. Han, ―Microlenses fabricated by discontinuous dewetting and soft lithography,‖ Microelectronic Engineering, vol. 85, no. 9, pp. 1878–1881, Sep. 2008. [41] H. Wu, T. W. Odom, and G. M. Whitesides, ―Reduction photolithography using microlens arrays: applications in gray scale photolithography,‖ Analytical chemistry, vol. 74, no. 14, pp. 3267–3273, 2002. [42] Y. Lu and S. Chen, ―Direct write of microlens array using digital projection photopolymerization,‖ Applied Physics Letters, vol. 92, no. 4, p. 041109, 2008. [43] World Wide Web Consortium. ―A Standard Default Color Space for the Internet - sRGB.‖ Internet: http://www.w3.org/Graphics/Color/sRGB, Nov 5, 1996 [Jan 14, 2015]. 84  [44] Canon Canada Inc. ―PowerShot S120 Specifications.‖ Internet: http://www.canon.ca/inetCA/en/products/method/gp/pid/28393#_030, Feb 3, 2015 [Feb 3, 2015]. [45] W. C. Sweatt, B. H. Jared, G. N. Nielson, M. Okandan, A. Filatov, M. B. Sinclair, J. L. Cruz-Campa, and A. L. Lentine, ―Micro-optics for high-efficiency optical performance and simplified tracking for concentrated photovoltaics (CPV),‖ 2010, pp. 765210–765210–8. [46] A. El Gamal and H. Eltoukhy, ―CMOS image sensors,‖ Circuits and Devices Magazine, IEEE, vol. 21, no. 3, pp. 6–20, 2005. [47] A. Nakai, K. Matsumoto, and I. Shimoyama, ―A stereoscopic display with a vibrating microlens array,‖ in Micro Electro Mechanical Systems, 2002. The Fifteenth IEEE International Conference on, 2002, pp. 524–527. [48] S. Tang, R. T. Chen, D. J. Gerold, M. M. Li, C. Zhao, S. Natarajan, and J. Lin, ―Design limitations of highly parallel free-space optical interconnects based on arrays of vertical-cavity surface-emitting laser diodes, microlenses, and photodetectors,‖ in OE/LASE‘94, 1994, pp. 323–333. [49] H. W. Choi, E. Gu, C. Liu, C. Griffin, J. M. Girkin, I. M. Watson, and M. D. Dawson, ―Fabrication of natural diamond microlenses by plasma etching,‖ Journal of Vacuum Science & Technology B, vol. 23, no. 1, pp. 130–132, 2005. [50] Stanford Nanofabrication Facilities. 
―Resist Modules.‖ Internet: https://snf.stanford.edu/SNF/processes/process-modules/photolithography/resist-modules, Jul 7, 2010 [Jun 11, 2014]. [51] Nikon Instruments Inc. ―Modulation Transfer Function.‖ Internet: http://www.microscopyu.com/articles/optics/mtfintro.html, Aug 3, 2001 [December 30, 2014]. [52] Alexander Hornberg. Handbook of Machine Vision. Darmstadt, Germany: WILEY-VCH, 2006, pp. 6-7. [53] Mark Nicholson. ―How to Optimize on MTF.‖ Internet: https://www.zemax.com/support/knowledgebase/how-to-optimize-on-mtf, May 7, 2007 [April 12, 2015]. [54] Matt Young. Optics and Lasers: Including Fibers and Optical Waveguides. Heidelberg, Germany: Springer-Verlag, 1992, pp. 188-189. 85  Appendices Appendix A   Beginning of the Octave Script filter_mask=exit_angle_filter; filter_mask(! isnan(filter_mask)) = 1;  exit_angle_temp = zeros(size(exit_angle)); for n = 1:rows(exit_angle)  for m = 1:columns(exit_angle)   if (abs(exit_angle(n,m))<abs(target_exit_angle*1.005))    if (abs(exit_angle(n,m))>abs(target_exit_angle*0.995))     exit_angle_temp(n,m) = NaN;    endif   endif  endfor endfor   for n = 1:rows(exit_angle_temp)  for m = 1:columns(exit_angle_temp)   if (isnan(exit_angle_temp(n,m)))    exit_angle(n,m) = NaN;   endif  endfor endfor  figure; % subplot (1, 2, 1) pcolor(F_not,-1*f1,exit_angle); shading interp; t = colorbar;   % set(pcolor_temp,'facealpha',0); set(get(t,'ylabel'),'string','Degrees','fontsize',16); title("exit_angle",'fontsize',16); xlabel('F (mm)'); ylabel('f1 (mm)'); % axis([1 10],[0.1 1]) % axis([1 10]); xlim([1 10]); ylim([0.1 1]); % hold on; % plot(F_not_line,-f1_set);  figure; % subplot (1, 2, 2) colormap('default'); pcolor(F_not,-1*f1,exit_angle_filter); shading interp; t = colorbar;  set(get(t,'ylabel'),'string','Degrees','fontsize',16); title("exit_angle_filtered",'fontsize',16); xlabel('F (mm)'); ylabel('f1 (mm)');  light_efficiency = pitch_ratio.*p1./light_cone_diameter; cone_center_height_normalized = (F_not)./(F_not-sag1).*(F_not*nl+d2)./(F_not*nl); % output_lens_number = (light_cone_diameter/2+p2/2)./(cone_center_height_normalized*p1-p2);  function negative_optimization4 clear; clc;  %nl is the refractive index of the lens nl=1.4; n2=1.46; %p1 is period of lens 1 p1=0.26; f2_set = 1.027; v1_set = -0.33; display_halfheight = 3.2; MLA_halfaperture = 6; eye_relief = 20; target_exit_angle = 10; target_degree_height = (atan(display_halfheight/eye_relief)*180/pi)/display_halfheight %F_not is the back focal length of the superlens assuming lenses are perfect and thin F_not = meshgrid(1:0.02:10); F_not_line = (1:0.02:10); %f1 is the focal length of the first lens array % f1 = linspace(-1,-0.1,19)' f1 = flipud(meshgrid(-0.1:-0.002:-1.0)'); f1_line = (-0.1:-0.002:-1.0); %d1 is the thickness of the 1st lens d1 = 1; %d2 is the thickness of the 2nd lens d2 = 1;  %r1 is the radius of lens 1 r1 = (1 - nl)*f1;  %sag1 sag of lens 1 sag1 = abs(abs(r1) - cos(asin(p1./(2*abs(r1)))).*abs(r1));  temp = asin(p1./(2*abs(r1))); temp1 = temp; temp1(real(temp1) > 1) = 0;  for n = 1:rows(temp)  for m = 1:columns(temp)   if (temp(n,m)>1)    sag1(n,m) = r1(n,m)-cos(asin(0.5))*r1(n,m);   endif  endfor endfor    %v1 is the image distance from center of lens 1 sag v1 = abs(F_not.*f1./(F_not-f1)); %h1 is the principle plane of the 1st lens h1 = 0; %h2 is the principle plane of the 2nd lens h2 = 0;  86  figure; % subplot (1, 2, 1) pcolor(F_not,-1*f1,v1); shading interp; t = colorbar;  set(get(t,'ylabel'),'string','Degrees','fontsize',16); title("v1",'fontsize',16); xlabel('F (mm)'); ylabel('f1 
(mm)'); xlim([1 10]); ylim([0.1 1]);    %F is the back focal length with thickness of the lens accounted for F = (F_not-(d1-h1))/nl;    %u is the object distance for the negative refractive surface of lens array 1, F*nl+d u = F_not; % u = F*nl+d1;  %v is the negative image distance from the negative lens of lens array 1 % v = R.*u./(u*(1-nl)-R*nl) v = f1.*u./(u-f1);  % R1 is the radius of the first lens R1 = u.*v*(1-nl)./(v*nl+u); % R1 = f1*(1-nl);  % Pixel_angle is the viewing angle/ angle of the cone of light launched from a pixel on the display pixel_angle=120;  disp("asin test"); asin(1/8); cos(7.18); cos(0.12533); p1./(2*R1); temp = asin(p1./(2*R1)); temp1 = temp; temp1(real(temp1) > 1) = 0;  sag1 = R1-cos(asin(p1./(2*R1))).*R1; %only look at attainable sag1 sag_temp=sag1;   for n = 1:rows(temp)  for m = 1:columns(temp)   if (!isreal(sag_temp(n,m)))    sag1(n,m) = NaN;   % else    % if (temp(n,m)>1)     % sag1(n,m) = R1(n,m)-cos(asin(0.5))*R1(n,m);    % endif   endif  endfor endfor  h2 = d2/nl;  f2 = h2+abs(v)+abs(sag1); back_focal_length_lens2 = abs(v)+abs(sag1); % light_cone_diameter = (nl+d2./(-1*v))/(nl)*p1 output_lens_number = F_not.*(p1*(d2+v1)+p2.*v1).*(F_not-f1)./(p1*v1.*(d2*F_not-F_not.*f1-d2*f1));  max_diameter = -1*f1*0.6; % beam_width = output_lens_number.*max_diameter; beam_width = output_lens_number.*p2; filtered_beam_width = beam_width; filtered_beam_width(filtered_beam_width < 4) = NaN; beam_width_mask = filtered_beam_width; beam_width_mask(! isnan(beam_width_mask)) = 1; filter_mask;  w_max = abs(MLA_halfaperture-beam_width/2); new_exit_angle = -1*atan(w_max/eye_relief)*360/(2*pi);  figure; % subplot (1, 2, 1) pcolor(F_not,-1*f1,new_exit_angle); shading interp; t = colorbar;  set(get(t,'ylabel'),'string','Degrees','fontsize',16); title("new_exit_angle",'fontsize',16); xlabel('F (mm)'); ylabel('f1 (mm)'); xlim([1 10]); ylim([0.1 1]);  figure; % subplot (1, 2, 2) colormap('default'); pcolor(F_not,-1*f1,beam_width); shading interp; t = colorbar;  set(get(t,'ylabel'),'string','mm','fontsize',16); title("Collimated beam width",'fontsize',16); xlabel('F (mm)'); ylabel('f1 (mm)'); xlim([1 10]); ylim([0.1 1]);  w_max = MLA_halfaperture-beam_width/2; eye_relief_new = w_max./tan(exit_angle*2*pi/360);  for n = 1:rows(eye_relief_new)  for m = 1:columns(eye_relief_new)   if (abs(eye_relief_new(n,m))<abs(eye_relief*1.005))    if (abs(eye_relief_new(n,m))>abs(eye_relief*0.995))     eye_relief_new(n,m) = NaN;    endif   endif  endfor endfor  figure; % subplot (1, 2, 1) pcolor(F_not,-1*f1,eye_relief_new); shading interp; t = colorbar;  set(get(t,'ylabel'),'string','mm','fontsize',16); title("eye_relief_r",'fontsize',16); xlabel('F (mm)'); ylabel('f1 (mm)'); xlim([1 10]); ylim([0.1 1]); exit_angle = new_exit_angle; 87  light_cone_diameter = (nl+d2./(back_focal_length_lens2))/(nl)*p1; pitch_ratio=1./(1-f1./F_not); p2=pitch_ratio*p1; exit_angle = 1./(f2.*(F_not./f1-1))*360/(2*pi)*display_halfheight; p2_temp=p2;  for n = 1:rows(p2)  for m = 1:columns(p2)   if (abs(p2(n,m))<abs(0.25*1.005))    if (abs(p2(n,m))>abs(0.25*0.995))     p2_temp(n,m) = NaN;    endif   endif  endfor endfor  figure; % subplot (1, 2, 2) colormap('default'); pcolor(F_not,-1*f1,p2_temp); shading interp; t = colorbar;  set(get(t,'ylabel'),'string','Degrees','fontsize',16); title("p2",'fontsize',16); xlabel('F (mm)'); ylabel('f1 (mm)');  figure; % subplot (1, 2, 1) pcolor(F_not,-1*f1,exit_angle); shading interp; t = colorbar;  set(get(t,'ylabel'),'string','Degrees','fontsize',16); title("exit_angle_1",'fontsize',16); 
xlabel('F (mm)'); ylabel('f1 (mm)'); xlim([1 10]); ylim([0.1 1]);  exit_angle_filter=exit_angle; % exit_angle_filter((exit_angle_filter < -6)) = NaN; exit_angle_filter((exit_angle_filter < (-31))) = NaN; exit_angle_filter; % exit_angle_filter((exit_angle_filter > -5)) = NaN; exit_angle_filter((exit_angle_filter > (-30))) = NaN; exit_angle_filter; disp("asin test2");  figure; f1_set= v1_set*F_not_line./(v1_set+F_not_line); plot(F_not_line,-f1_set) figure; % subplot (1, 2, 1) pcolor(F_not,-1*f1,pitch_ratio); shading interp; t = colorbar;  set(get(t,'ylabel'),'string','Degrees','fontsize',16); title("pitch_ratio",'fontsize',16); xlabel('F (mm)'); ylabel('f1 (mm)'); xlim([1 10]); ylim([0.1 1]); target_exit_angle = -10; %--------------------------------------------------------- exit_angle_filter=exit_angle; % exit_angle_filter((exit_angle_filter < -6)) = NaN; exit_angle_filter((exit_angle_filter > -10.5)) = NaN; exit_angle_filter; % exit_angle_filter((exit_angle_filter > -5)) = NaN; exit_angle_filter((exit_angle_filter < -9.5)) = NaN; exit_angle_filter; disp("asin test2");  figure; f1_set= v1_set*F_not_line./(v1_set+F_not_line); plot(F_not_line,-f1_set);  figure; % subplot (1, 2, 1) pcolor(F_not,-1*f1,pitch_ratio); shading interp; t = colorbar;  set(get(t,'ylabel'),'string','Degrees','fontsize',16); title("pitch_ratio",'fontsize',16); xlabel('F'); ylabel('f1'); xlim([1 10]); ylim([0.1 1]);  filter_mask=exit_angle_filter; filter_mask(! isnan(filter_mask)) = 1;  exit_angle_temp = zeros(size(exit_angle)); for n = 1:rows(exit_angle)  for m = 1:columns(exit_angle)   if (abs(exit_angle(n,m))<abs(target_exit_angle*1.005))    if (abs(exit_angle(n,m))>abs(target_exit_angle*0.995))     exit_angle_temp(n,m) = NaN;    endif   endif  endfor endfor new_exit_angle_temp = new_exit_angle; for n = 1:rows(sag1)  for m = 1:columns(sag1)   if (!isreal(sag_temp(n,m)))    new_exit_angle_temp(n,m) = NaN;   endif  endfor endfor  for n = 1:rows(exit_angle_temp)  for m = 1:columns(exit_angle_temp)   if (isnan(exit_angle_temp(n,m)))    new_exit_angle_temp(n,m) = NaN;   endif  endfor endfor  clc; clear all; endfunction End of the Script  88  Appendix B   Beginning of the C Program      sag1_temp1 = (100.0 / (FD->cv * FD->cv) - 100.0 * (y * y)) / 100.0;    if(FD->cv > 0.0){       UD->sag1 = (100.0 / fabs(FD->cv) - 100.0 * sqrt(sag1_temp1)) / 100.0;      }else{       UD->sag1 = -((100.0 / fabs(FD->cv) - 100.0 * sqrt(sag1_temp1)) / 100.0);      }      //UD->sag1 = 1.0/100.0;     }    }else{     if ((fabs(100.0 * y)) >= (100.0 * (ld / 2.0))){      sag1_temp1 = (100.0 / (FD->cv * FD->cv) - 100.0 * (ld * ld) / 4.0) / 100.0;      if(FD->cv > 0){       UD->sag1 = (100.0 / fabs(FD->cv) - 100.0 * sqrt(sag1_temp1)) / 100.0;      }else{       UD->sag1 = -((100.0 / fabs(FD->cv) - 100.0 * sqrt(sag1_temp1)) / 100.0);      }     }else{      sag1_temp1 = (100.0 / (FD->cv * FD->cv) - 100.0 * (y * y)) / 100.0;      if(FD->cv > 0){       UD->sag1 = (100.0 / fabs(FD->cv) - 100.0 * sqrt(sag1_temp1)) / 100.0;      }else{       UD->sag1 = -((100.0 / fabs(FD->cv) - 100.0 * sqrt(sag1_temp1)) / 100.0);      }     }    }    }   /* forget supporting a hyper hemisphere! */   UD->sag2 = UD->sag1;   break;       case 4:   /* ZEMAX wants a paraxial ray trace to this surface */   /* x, y, z, and the optical path are unaffected, at least for this surface type */   /* for paraxial ray tracing, the return z coordinate should always be zero. 
*/   /* paraxial surfaces are always planes with the following normals */   /* for a lens array, only consider the single lens on axis */   UD->ln =  0.0;   UD->mn =  0.0;   UD->nn = -1.0;   power = (FD->n2 - FD->n1)*FD->cv;   if ((UD->n) != 0.0)           { #include <windows.h> #include <math.h> #include <string.h> #include "usersurf.h" /* Modified by Sam Park, Oct 8, 2013  in the main switch, switch(FD->type), cases 3 and 5 are modified, to have the ability to separately control the lens diameter and the aperture. The lens parameters are renamed (Width -> Period) and a new parameter (Lens diameter) is added. */ /* Written by Kenneth E. Moore Oct 11, 1996  This DLL models an arbitrary number of lens elements in a lens array. The individual lenses are generally conic aspheres. The user provides the number of elements in x and y, the size in x and y of each element, and the radius, conic, and glass. This surface breaks up the beam into numerous separate beams, and so most ZEMAX features will fail to work with this surface. However, the spot diagrams, image analysis, etc, all work okay. Modified GetCellCenter 9-25-01 KEM to accept rays at edge of lenses Modified GetCellCenter 2-07-06 KEM to bound cell values to valid range, useful for very fast lenslets */  int __declspec(dllexport) APIENTRY UserDefinedSurface(USER_DATA *UD, FIXED_DATA *FD); int GetCellCenter(int nx, int ny, double wx, double wy, double x, double y, double *cx, double *cy); /* a generic Snells law refraction routine */ int Refract(double thisn, double nextn, double *l, double *m, double *n, double ln, double mn, double nn);  BOOL WINAPI DllMain (HANDLE hInst, ULONG ul_reason_for_call, LPVOID lpReserved)  {    return TRUE;    }  /* this DLL models a lens array surface type */ int  __declspec(dllexport) APIENTRY UserDefinedSurface(USER_DATA *UD, FIXED_DATA *FD)  {    int i, nx, ny, error, error_x, error_y, miss_flag;    double p2, alpha, power, a, b, c, rad, casp, t, zc;    double wx, wy, cx, cy, x, y, z;    double new_x, new_y;    double ld = 0;  /*  double monitor_cv;*/    double sag1_temp1, sag1_temp2;    //int cv_flag;    switch(FD->type)     {       case 0: 89         /* ZEMAX is requesting general information about the surface */          switch(FD->numb)           {             case 0:              /* ZEMAX wants to know the name of the surface */            /* do not exceed 12 characters */            strcpy(UD->string,"Temp Lens Array");                break;             case 1:              /* ZEMAX wants to know if this surface is rotationally symmetric */                /* it is not, so return a null string */              UD->string[0] = '\0';                break;             case 2:              /* ZEMAX wants to know if this surface is a gradient index media */                /* it is not, so return a null string */              UD->string[0] = '\0';              break;             }          break;       case 1:        /* ZEMAX is requesting the names of the parameter columns */          /* the value FD->numb will indicate which value ZEMAX wants. */          /* they are all "Unused" for this surface type */          /* returning a null string indicates that the parameter is unused. 
*/          switch(FD->numb)           {             case 1:              strcpy(UD->string, "Number X");                break;             case 2:              strcpy(UD->string, "Number Y");                break;             case 3:              strcpy(UD->string, "Period X");                break;             case 4:              strcpy(UD->string, "Period Y");                break;    case 5:              strcpy(UD->string, "Lens Diameter");                break;              default:              UD->string[0] = '\0';              break;             }        break;       case 2:        /* ZEMAX is requesting the names of the extra data columns */          /* the value FD->numb will indicate which value ZEMAX wants. */          /* they are all "Unused" for this surface type */          /* returning a null string indicates that the extradata value is unused. */          switch(FD->numb)           {             default:              UD->string[0] = '\0';              break;             }             (UD->l) = (UD->l)/(UD->n);             (UD->m) = (UD->m)/(UD->n);             (UD->l) = (FD->n1*(UD->l) - (UD->x)*power)/(FD->n2);             (UD->m) = (FD->n1*(UD->m) - (UD->y)*power)/(FD->n2);             /* normalize */             (UD->n) = sqrt(1/(1 + (UD->l)*(UD->l) + (UD->m)*(UD->m) ) );             /* de-paraxialize */             (UD->l) = (UD->l)*(UD->n);             (UD->m) = (UD->m)*(UD->n);             }          break;       case 5:   /* ZEMAX wants a real ray trace to this surface */    /* clear the multiple intercept test flag */   miss_flag = 0;    if (0)   {    outofbounds:;    UD->ln =  0.0;    UD->mn =  0.0;    UD->nn = -1.0;    if (Refract(FD->n1, FD->n2, &UD->l, &UD->m, &UD->n, UD->ln, UD->mn, UD->nn)) return(-FD->surf);    return(0);   }   /* okay, not a plane. */   nx = (int)FD->param[1];   ny = (int)FD->param[2];    wx = FD->param[3];   wy = FD->param[4];      x = UD->x;   y = UD->y;    new_x=x;   new_y=y;          /* make sure nx and ny are both odd, otherwise the chief ray is a problem... */   if (!nx&1)nx++; */     /* go back to the tangent plane */     t = -t;     (x) = (UD->l) * t + (x);     (y) = (UD->m) * t + (y);     x = x + cx;     y = y + cy;     cx = new_cx;     cy = new_cy;     goto try_again;    }   }   zc = (z) * FD->cv;   rad = zc * FD->k * (zc * (FD->k + 1) - 2) + 1;   casp = FD->cv / sqrt(rad);   if(FD->cv == 0.0){    UD->ln = 0.0;    UD->mn = 0.0;    UD->nn = -1.0;   }else{    //original    UD->ln = (x) * casp; 90         break;      case 3:   /* ZEMAX wants to know the sag of the surface */   /* if there is an alternate sag, return it as well */   /* otherwise, set the alternate sag identical to the sag */   /* The sag is sag1, alternate is sag2. */   UD->sag1 = 0.0;   UD->sag2 = 0.0;   /* if a plane, just return */ //- default value   //if (FD->cv == 0) return(0);    //if above returns 0, then below is skipped so aperture stops don't work at radius of infinity   // which is FD->cv = 1/r = 0 so removing above if statement might make it work?    /* figure out the center coordinates of which "cell" we are in */   nx = (int)FD->param[1];   ny = (int)FD->param[2];   wx = FD->param[3];   wy = FD->param[4];   // assign ld value of field 5 which is lens diameter. lens assumed to be perfect circle (Sam Oct 2 2013)   ld = FD->param[5];         x = UD->x;         y = UD->y;         /* make sure nx and ny are both odd, otherwise the chief ray is a problem... 
*/         if (!nx&1)nx++;         if (!ny&1)ny++;         if (wx <= 0.0 || wy <= 0.0) return(-1);         error = GetCellCenter(nx, ny, wx, wy, UD->x, UD->y, &cx, &cy);          if (error) return(0);          /* offset the coordinates */          x -= cx;    y -= cy;    //original          //p2 = x*x + y*y;    //new    p2 = ld*ld/4.0 + ld*ld/4.0;          alpha = 1.0 - (1.0+FD->k)*FD->cv*FD->cv*p2;     //new UD->sag1 as per geometric sag finding    // two conditions where radius of curvature 1/cv is smaller than lens radius    // and radius of curvature is bigger than lens radius but lens diameter is smaller than lens period   if (!ny&1)ny++;   if (wx <= 0.0 || wy <= 0.0) return(-1);    error = GetCellCenter(nx, ny, wx, wy, UD->x, UD->y, &cx, &cy);    if (error) goto outofbounds;   new_x -= cx;   new_y -= cy;   if(fabs(new_y) > (FD->param[5] / 2.0)) return(FD->surf);   //if(FD->cv != 0.0){   try_again:;   /* offset the coordinates */   x -= cx;   y -= cy;    UD->mn = (y) * casp;    UD->nn = ((z) - ((1/FD->cv) - (z) * FD->k)) * casp;   }   /* restore coordinates */         UD->x = x + cx;         UD->y = y + cy;         UD->z = z;         if (Refract(FD->n1, FD->n2, &UD->l, &UD->m, &UD->n, UD->ln, UD->mn, UD->nn)) return(-FD->surf);    break;       case 6:        /* ZEMAX wants the index, dn/dx, dn/dy, and dn/dz at the given x, y, z. */          /* This is only required for gradient index surfaces, so return dummy values */          UD->index = FD->n2;          UD->dndx = 0.0;          UD->dndy = 0.0;          UD->dndz = 0.0;        break;       case 7:        /* ZEMAX wants the "safe" data. */          /* this is used by ZEMAX to set the initial values for all parameters and extra data */          /* when the user first changes to this surface type. 
*/          /* this is the only time the DLL should modify the data in the FIXED_DATA FD structure */          FD->param[1] = 50;          FD->param[2] = 50;          if (FD->cv == 0.0) FD->param[3] = 1.0;          else FD->param[3] = 0.5/FD->cv;          FD->param[4] = FD->param[3];          for (i = 5; i <= 8; i++) FD->param[i] = 0.0;          for (i = 1; i <= 200; i++) FD->xdata[i] = 0.0;          break;       }    return 0;    }  int Refract(double thisn, double nextn, double *l, double *m, double *n, double ln, double mn, double nn) {  double nr, cosi, cosi2, rad, cosr, gamma;  if (thisn != nextn)  {   nr = thisn / nextn;   cosi = fabs((*l) * ln + (*m) * mn + (*n) * nn);   cosi2 = cosi * cosi;   if (cosi2 > 1) cosi2 = 1;   rad = 1 - ((1 - cosi2) * (nr * nr));   if (rad < 0) return(-1);   cosr = sqrt(rad);   gamma = nr * cosi - cosr;   (*l) = (nr * (*l)) + (gamma * ln);   (*m) = (nr * (*m)) + (gamma * mn);   (*n) = (nr * (*n)) + (gamma * nn);  }  return 0; }  int GetCellCenter(int nx, int ny, double wx, double wy, double x, double y, double *cx, double *cy) { double hwx, hwy, tx, ty; double tnx, tny; int mx, my; *cx = 0.0; 91    z = 0.0;   if(FD->cv !=0.0){    a = (UD->n) * (UD->n) * FD->k + 1;    b = ((UD->n)/FD->cv) - (x) * (UD->l) - (y) * (UD->m);    c = (x) * (x) + (y) * (y);    rad = b * b - a * c;    if (rad < 0) return(FD->surf);  /* ray missed this surface */    //if ((rad < 0)||( y > 0.1)) return(FD->surf);  /* ray missed this surface */    if (FD->cv > 0) t = c / (b + sqrt(rad));    else           t = c / (b - sqrt(rad));   }else{     t = 0;    UD->x = x + cx;    UD->y = y + cy;    UD->z = z;    return(0);   }   //setting x=x y=y z=z below will make the lens paraxial   (x) = (UD->l) * t + (x);   (y) = (UD->m) * t + (y);   (z) = (UD->n) * t + (z);    //Original    UD->path = t;   //}          /* okay, if the ray makes a steep angle of intercept,          we may actually have hit the wrong element. Check it again! */          if (miss_flag == 0){             double new_cx, new_cy;    /* avoid infinite loop */    miss_flag = 1;     /* restore global coordinates prior to test */    UD->x = x + cx;    UD->y = y + cy;    error = GetCellCenter(nx, ny, wx, wy, UD->x, UD->y, &new_cx, &new_cy);    if (error) goto outofbounds;    if (new_cx != cx || new_cy != cy){     /* we hit the wrong one!   
if(FD->cv == 0.0){    if(fabs(100.0 * y) <= 100*(ld / 2.0)){     UD->sag1=0.0;    }else{     UD->sag1=0.0;    }   }else{    if (fabs(100.0 / FD->cv) <= (100.0 * ld / 2.0)){     if ((fabs(100.0 * y)) >= (100.0 * (1.0/FD->cv))){      if(FD->cv > 0){  UD->sag1 = (100.0/(FD->cv))/100.0;      }else{  UD->sag1 = -(100.0/(FD->cv))/100.0;      }     }else{  *cy = 0.0; tnx = 0; tny = 0; hwx = 0.5 * wx; hwy = 0.5 * wy; mx = (nx-1)/2; // the maximum legal cell number (plus or minus) my = (ny-1)/2;  /* do cx */ if ((100.0 * fabs(x)) > (100.0 * hwx))  {    tx = x;    tnx = 0;  if ((100.0 * x) > 0)  {   while((100.0 * tx) > (100.0 * hwx))   {    tnx++;    tx -= wx;   }  }else{       while((100.0 * tx) < -(100.0 * hwx))        {          tnx--;          tx += wx;          }       }    *cx = tnx*wx;    }  /* do cy */ if ((100.0 * fabs(y)) > (100.0 * hwy))  {    ty = y;    tny = 0;    if ((100.0 * y) > 0)     {       while((100.0 * ty) > (100.0 * hwy))        {          tny++;          ty -= wy;          }       }else{       while((100.0 * ty) < -(100.0 * hwy))        {          tny--;          ty += wy;          }       }    *cy = tny*wy;     } // Modified 2-07-06 KEM to bound cell values to valid range, useful for very fast lenslets // new test to bound cell values to valid range while (tnx > mx) tnx--; while (tnx <-mx) tnx++; while (tny > my) tny--; while (tny <-my) tny++; *cx = tnx*wx; *cy = tny*wy; /* are there this many cells? */ if (fabs(tnx) > (nx-1)/2 || fabs(tny) > (ny-1)/2) return -1; else return 0; } End of the C Program 92  Appendix C   The refraction of light through the concave microlens, originating from a point source is shown in Figure C.1. The actual light ray is represented in bold lines. We can define the position of the principal plane by using the vertex of the curved microlens surface as a reference plane. Then we only need to find x which is the distance to the principal plane from the microlens surface.  Figure C.1: Ray diagram of the thick concave lens. The bold lines are the actual light rays, and the broken lines are the virtual rays. The direction of ray propagation is indicated with the arrows.  The effective focal length can be calculated in the following steps. We will use the small angle approximation and assume the thickness of the MLA is much bigger than the sag of the microlenses. From Snell‘s law,  𝐹1′ = 𝐹1𝑛𝑙  (76) because   𝑛𝑙 sin𝜃𝐹1′ = 𝑛𝑚 sin𝜃𝐹1 ≈ 𝑛𝑙𝜃𝐹1′ ≈ 𝑛𝑚𝜃𝐹1 , (77) where nl and nm are the refractive indices of the microlens and the surrounding medium, and EFL F1 v1 t1 𝐹1′  H1 sag x1 𝜃𝐹1  𝜃𝐹1′  𝜃𝑣1  𝑕1′  h1 Virtual image plane Image plane resulting from refraction at the flat surface 93   𝐹1′𝜃𝐹1′ = 𝐹1𝜃𝐹1 = 𝑕1. (78) Eq. 77 is obtained assuming paraxial conditions, which will simplify the derivation of the principal plane. Since the medium surrounding the microlens will be air, nm is 1. nl is the refractive index of the microlens material. Assuming the radius of curvature R of the curved surface is known, we can find the focal length fR of the curved surface since  𝑓𝑅 =𝑅𝑛𝑚𝑛𝑚−𝑛𝑙. (79) The image distance v1 can then be calculated from the thin lens equation and by treating the sum of 𝐹1′  and the concave microlens thickness t1 as the object distance from the curved surface. Therefore  1𝐹1′+𝑡1+1𝑣1=1𝑓𝑅. (80) Combining equations (79) and (80) and rearranging for v1,  𝑣1 =𝑅𝑛𝑚  𝐹1′+𝑡1  𝑛𝑚−𝑛𝑙  𝐹1′+𝑡1 −𝑅𝑛𝑚. (81) We also note that  𝜃𝑣1𝜃𝐹1′=𝐹1′+𝑡1𝑣1, (82) and from eq. 77,  𝐹1+𝑥1𝑣1+𝑥1=𝜃𝑣1𝜃𝐹1= 𝐹1′+𝑡1 𝑛𝑚𝑣1𝑛𝑙. 
(83)

We can now solve for the distance x1 to the principal plane from the centre of the curved surface by rearranging eq. 83 for x1:

\[ x_1 = \frac{v_1\left(F_1 n_l - (F_1' + t_1)\,n_m\right)}{(F_1' + t_1)\,n_m - v_1 n_l}. \]  (84)

Figure C.2: Ray diagram of the convex microlens, showing the object plane, the image plane resulting from refraction at the flat surface, the thickness t2, the sag, the principal plane H2, and the distances F2, F2', and x2.

The refraction of light through the convex microlens, originating from a point source, is shown in Figure C.2. The distance x2 to the principal plane from the curved surface of the second, convex microlens can be found similarly by observing the geometrical properties of the refracted rays. The distance to the image plane of the flat surface, F2' (measured from the flat side of the microlens), and the angle it makes with respect to the optical axis, θF2', have the trigonometric relationship

\[ \frac{h_2'}{F_2' + t_2} = \tan\theta_{F_2'} \approx \theta_{F_2'} \]  (85)

under the small-angle approximation. Likewise, F2 and its angle θF2 with respect to h2' have the relationship

\[ \frac{h_2'}{F_2 + t_2 - x_2} = \tan\theta_{F_2} \approx \theta_{F_2}. \]  (86)

From equations (85) and (86), x2 can be found as

\[ x_2 = \frac{n_l\,(F_2' + t_2)}{n_m} - (F_2' + t_2). \]  (87)

In order for the convex microlens to collimate light, we want

\[ F_2 = v_1 + \text{sag of the concave microlens}, \]  (88)

such that the object plane of the convex microlens coincides with the image plane of the concave microlens.
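As a numerical companion to this derivation, the relations above can be evaluated directly in GNU Octave. The sketch below simply transcribes eqs. 76, 79, 81, 84, and 87; all input values are placeholders chosen for illustration, not the parameters of the fabricated MLAs.

n_l = 1.4;  n_m = 1.0;                 % refractive indices of the microlens and of air
F1  = 0.5;  t1 = 1.0;  R = 0.13;       % placeholder object distance, thickness, and radius, mm

F1p = F1 * n_l;                        % eq. 76: image shift caused by the flat entrance surface
fR  = R * n_m / (n_m - n_l);           % eq. 79: focal length of the curved (concave) surface
v1  = R * n_m * (F1p + t1) / ((n_m - n_l) * (F1p + t1) - R * n_m);          % eq. 81
x1  = v1 * (F1 * n_l - (F1p + t1) * n_m) / ((F1p + t1) * n_m - v1 * n_l);   % eq. 84

F2p = 0.9;  t2 = 1.0;                  % placeholder values for the convex microlens, mm
x2  = n_l * (F2p + t2) / n_m - (F2p + t2);   % eq. 87: principal-plane offset of the convex lens
% eq. 88: the convex microlens collimates when F2 equals v1 plus the sag of the concave microlens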
