Design and Application of Combined Multiphoton Microscopy and Optical Coherence Tomography System

by

Yifeng Zhou

B.A.Sc., University of Science and Technology of China, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (VANCOUVER)

September, 2012

© Yifeng Zhou, 2012

Abstract

Optical coherence tomography (OCT) is a non-invasive optical tomographic technique based on the principle of interferometry. It can rapidly capture micrometer-resolution, three-dimensional images of tissue over a millimeter-scale field of view. Multiphoton microscopy (MPM) is an emerging imaging modality based on the excitation of nonlinear signals from fluorescent molecules and the induction of second harmonic generation (SHG). It is capable of en-face imaging with sub-micron resolution. Although OCT and MPM are essential imaging tools for disease diagnosis, each has shortcomings, such as the lower resolution of OCT and the limited penetration depth of MPM. The purpose of this study is to design a multimodal imaging system that combines MPM and OCT on a single platform, so that the two modalities can complement each other and overcome each other's shortcomings.

The design consists of two parts: hardware and software. For hardware, the two modalities are integrated on a single platform, sharing the laser source, the scanning mirrors and the sample arm. In addition, the OCT subsystem has a reference arm for interference and a custom-built spectrometer for signal detection, whereas the MPM subsystem uses two photomultiplier tubes (PMTs) for photon detection. For software, two user interfaces are specially designed to control beam scanning and data acquisition for MPM and OCT, respectively.

The performance of this multimodal system is demonstrated by imaging biological samples.
The results indicate that our system is capable of multiscale imaging of multilayered tissues with clearly resolved structures. One important application of the multimodal system is measuring the refractive index (RI) and thickness of biological tissues. This capability is demonstrated on fish cornea. The results show that our system is capable of imaging as well as quantitative characterization of the RI and thickness of multilayered biological tissues. This system can potentially be a powerful tool for disease detection and surgical treatment.

Preface

Chapter 3 is based on previous work conducted in the UBC Biophotonics laboratory by Dr. Tang and Kenny K.H. Chan. I was responsible for designing the spectrometer of the spectral domain OCT (SDOCT) system and improving the performance of the integrated MPM/OCT system.

In Chapter 4, the data processing algorithms for the SDOCT user interface were developed by Kenny K.H. Chan, including the background subtraction, numerical dispersion compensation, non-uniform fast Fourier transform and multi-core processing. I was responsible for the MPM and OCT user interfaces and image post-processing. The pseudo-color mapping function was developed by Samuel Davies, a former summer student.

In Chapter 6, the microfluidic chips used in the experiments were made by Dr. Lin Feng. I was responsible for conducting the experiments and processing the results.

Portions of Chapter 7 are composed of material that has been prepared for publication: Yifeng Zhou, Kenny K.H. Chan, Tom Lai, and Shuo Tang, "Measurement of refractive index and thickness of biological tissues with multiphoton microscopy and optical coherence tomography," to be submitted. I conducted all the experiments and wrote the manuscript.

The use of animal tissues in this thesis is for the purpose of demonstrating the imaging ability and applications of the developed MPM/OCT system.
No additional studies were performed on the animal tissues and no live animals were kept in the lab. The fish cornea was acquired from local supermarkets. The mouse skin was from the BC Cancer Research Center. The laboratory animal certification number is A10-0338.

Table of contents

Abstract
Preface
Table of contents
List of tables
List of figures
List of abbreviations
Acknowledgements
Chapter 1 Introduction and background
  1.1 Brief history of MPM and OCT
  1.2 Problem statement and motivation
  1.3 Organization of the thesis
Chapter 2 Principles of MPM and OCT
  2.1 Principle of MPM
    2.1.1 TPEF
    2.1.2 SHG
    2.1.3 MPM resolution
  2.2 Principle of OCT
    2.2.1 Michelson interferometer
    2.2.2 OCT
    2.2.3 OCT resolution
    2.2.4 Image depth
    2.2.5 Dispersion effect
Chapter 3 Hardware
  3.1 Light source
  3.2 Pre-compensation unit
  3.3 Sample arm and reference arm
  3.4 MPM photon detection
  3.5 OCT spectrometer
  3.6 Scanning control and data acquisition
Chapter 4 Software design
  4.1 MPM
    4.1.1 User interface
    4.1.2 MPM image post-processing
  4.2 OCT user interface
    4.2.1 Framework of the OCT user interface
    4.2.2 Flow chart
    4.2.3 Scanner waveforms and synchronization
    4.2.4 The delay shift in SDOCT
    4.2.5 OCT real-time data processing
Chapter 5 System characterization
  5.1 MPM
    5.1.1 Lateral resolution
    5.1.2 Axial resolution
    5.1.3 Frame rate
  5.2 OCT
    5.2.1 Axial resolution
    5.2.2 Imaging depth
    5.2.3 SNR and sensitivity fall-off
    5.2.4 Lateral resolution
    5.2.5 Frame rate
  5.3 Field of view of MPM/OCT
  5.4 Summary of the overall performance of the MPM/OCT system
Chapter 6 System capability and image demonstration
  6.1 Scalable multimodality imaging
  6.2 Cells and collagen interaction
  6.3 OCM
Chapter 7 Application in refractive index and thickness measurement
  7.1 Introduction
  7.2 Principles
  7.3 Experiment and results
    7.3.1 System validation
    7.3.2 Measurement of biological tissues
  7.4 Discussion
Chapter 8 Conclusions and future work
  8.1 Conclusions
  8.2 Future work
Bibliography
List of tables

Table 5.1: The measurement of SNR, sensitivity fall-off, and axial resolution of OCT
Table 5.2: FOV vs. input voltages
Table 5.3: Summary of the MPM/OCT system performance
Table 7.1: The measurement of RI of standard samples
Table 7.2: RI and thickness of fish cornea

List of figures

Figure 1.1: Penetration depth and resolution of typical imaging modalities
Figure 2.1: Schematics of a typical MPM system
Figure 2.2: Jablonski diagrams of OPEF and TPEF
Figure 2.3: Illustration of the process of SHG
Figure 2.4: Illustration of the MPM resolution
Figure 2.5: Optical layout of the Michelson interferometer
Figure 2.6: Typical optical setup of an SDOCT system
Figure 2.7: Reflection of the laser beam in a sample with multiple layers
Figure 2.8: SDOCT signal reconstruction via FT
Figure 2.9: The source spectrum and its Fourier transform
Figure 3.1: Schematics of the combined MPM/OCT system
Figure 3.2: Illustration of the pre-compensation unit
Figure 3.3: Schematics of the sample arm
Figure 3.4: Schematics of the reference arm
Figure 3.5: Schematics of the spectrometer and first-order diffraction
Figure 3.6: Schematics of the data acquisition and scanning control units
Figure 4.1: Diagram of the MPM user interface
Figure 4.2: Flow chart of the MPM user interface operation
Figure 4.3: Illustration of the MPM image acquisition procedure and data processing
Figure 4.4: Scanner control waveforms and associated trigger signals for MPM
Figure 4.5: Illustration of the movement of the focal spot in one loop of MPM data acquisition
Figure 4.6: Illustration of the delay shift of MPM
Figure 4.7: Simulation of the delay shift with MATLAB
Figure 4.8: Illustration of the improvement of the image by pseudo coloring
Figure 4.9: The techniques of cross-section and 3-D reconstruction used in MPM image processing
Figure 4.10: Illustration of the advantages of image merging
Figure 4.11: Diagram of the OCT user interface
Figure 4.12: Enlarged view of the spectrum of a mirror
Figure 4.13: Flow chart of the OCT user interface operation
Figure 4.14: Illustration of the OCT image acquisition procedure and data processing
Figure 4.15: Scanner control waveforms and associated trigger signals for OCT
Figure 4.16: Illustration of the delay shift in OCT imaging
Figure 5.1: Schematic of the phantom for MPM resolution measurement
Figure 5.2: Illustration of the MPM lateral resolution measurement with 0.1 µm beads
Figure 5.3: Illustration of the MPM axial resolution measurement with 0.5 µm beads
Figure 5.4: Measurement of axial resolution, imaging depth, SNR and sensitivity fall-off
Figure 5.5: Illustration of the OCT lateral resolution measurement with a wafer
Figure 5.6: Illustration of the OCT lateral resolution measurement with a wafer using the 4× objective lens
Figure 5.7: Illustration of the lateral field of view measurement for the 40× objective lens
Figure 5.8: Linear fitting of the measurement
Figure 6.1: MPM/OCT images of onion
Figure 6.2: MPM/OCT images of mouse ear skin
Figure 6.3: The interaction between breast cancer cells and collagen on microfluidic chips
Figure 6.4: Microfluidic chips with collagen gel, no cells
Figure 6.5: OCT/OCM images of an onion
Figure 6.6: OCT/OCM images of fish cornea acquired with the 4× objective lens
Figure 7.1: Illustration of the relationship between the optical pathlength and geometric thickness of a sample in OCT imaging
Figure 7.2: Illustration of the multi-layer refractions in the sample arm
Figure 7.3: The multi-layer refraction of the laser beam in the sample arm
Figure 7.4: MPM/OCT images of the phantom
Figure 7.5: Illustration of the structure of the human eye and cornea
Figure 7.6: MPM/OCT images of fish cornea
Figure 7.7: Endothelium of the fish cornea
List of abbreviations

MPM: Multiphoton microscopy
OCT: Optical coherence tomography
OCM: Optical coherence microscopy
SDOCT: Spectral domain optical coherence tomography
RI: Refractive index
OPEF: One-photon excitation fluorescence
TPEF: Two-photon excitation fluorescence
SHG: Second harmonic generation
MRI: Magnetic resonance imaging
CT: Computed tomography
FT: Fourier transform
NUFFT: Non-uniform Fourier transform
DC: Direct current
CCT: Cross-correlation term
FWHM: Full width at half maximum
FOV: Field of view
SNR: Signal to noise ratio
NA: Numerical aperture
DAQ: Data acquisition
DP: Data processing
PMT: Photomultiplier tube
NDF: Neutral density filter
SPT: Sampling time
CCD: Charge coupled device
FB: Frame grabber
Nd:YAG: Neodymium yttrium aluminium garnet
Ti:sapphire: Titanium-sapphire
SF: Sensitivity fall-off
AR: Axial resolution
PL: Pixel location of the peak in the OCT cross-sectional image
OPL: Optical pathlength
NADPH: Nicotinamide adenine dinucleotide phosphate

Acknowledgements

I would like to thank my supervisor, Dr. Shuo Tang, for her continual guidance. I am grateful to have been offered the opportunity to work in her lab and given ownership of the project to develop an integrated MPM/OCT imaging system.

I would also like to thank the past and present members of the Biophotonics lab for their expertise, assistance and support.

Finally, I would like to thank my family and friends for their unconditional support through the course of my studies.

Yifeng Zhou
University of British Columbia
August 2012

Chapter 1 Introduction and background

Medical imaging is widely acclaimed as a hallmark of modern medicine. It is an indispensable tool used to create images of the human body for clinical purposes, such as disease diagnosis, treatment planning, and surgical guidance [1-2].
So far, many imaging technologies have been developed for clinical applications, such as magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, optical coherence tomography (OCT), multiphoton microscopy (MPM) and confocal microscopy. Several criteria, including acquisition time, imaging depth, resolution, safety and sample intrusiveness, have been proposed to evaluate the multitude of medical imaging modalities. Imaging depth and resolution are regarded as two of the most important criteria. Figure 1.1 provides an overview of the penetration depth and resolution of typical imaging modalities.

Figure 1.1: Penetration depth and resolution of typical imaging modalities.

Traditional imaging modalities, such as ultrasound, CT and MRI, can investigate structures in the human body at the organ level, with penetration depths ranging from tens of millimeters to centimeters [3-5]. However, these techniques offer only coarse resolution. To study biological samples at the tissue and cellular levels, higher resolution is needed. Microscopy using multiphoton and confocal techniques can acquire images of tissues with high resolution [6]. The resolution can generally reach one micrometer, but the penetration depth is limited to around several hundred micrometers due to the severe scattering in biological samples. Compared to a confocal microscope, MPM uses a light source with a longer wavelength (near-infrared light) and does not require a pinhole to exclude out-of-focus fluorescence, which results in less scattering and deeper tissue penetration [7-8]. Compared to microscopy techniques, OCT provides relatively deeper imaging depth but lower resolution. It can obtain images of tissues with a resolution of several micrometers, while the penetration depth can reach a few millimeters [9].
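As a rough summary, the depth-versus-resolution tradeoff of Figure 1.1 can be tabulated programmatically. The numbers below are order-of-magnitude approximations read from the figure, not measured specifications:

```python
# Approximate penetration depth and resolution of the imaging modalities
# compared in Figure 1.1 (order-of-magnitude values, in millimeters).
modalities = {
    # name: (penetration depth, resolution)
    "MRI":          (1000.0, 1.0),
    "CT":           (100.0, 0.3),
    "Ultrasound":   (10.0, 0.15),
    "OCT":          (2.0, 0.005),
    "Confocal/MPM": (0.3, 0.001),
}

# Sorting by depth also sorts inversely by resolution, which is exactly
# the tradeoff a combined MPM/OCT system aims to bridge.
by_depth = sorted(modalities, key=lambda m: modalities[m][0], reverse=True)
for name in by_depth:
    depth, res = modalities[name]
    print(f"{name:>12}: depth ~{depth} mm, resolution ~{res} mm")
```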
The differences in penetration depth and resolution mean that different imaging modalities provide different spatial information. To overcome the penetration depth versus resolution tradeoff and to make full use of the information obtained by these imaging techniques, a system combining MPM and OCT is proposed in this study. By integrating the two imaging modalities, the drawbacks of MPM and OCT can be overcome and the capabilities of both can be fully exploited. This integration makes the system a strong candidate for high-resolution tissue imaging and could find many clinical applications.

1.1 Brief history of MPM and OCT

In this work, MPM consists of two-photon excitation fluorescence (TPEF) and second harmonic generation (SHG). The concept of TPEF was first described by Maria Goeppert-Mayer in 1931 [10], and TPEF was first observed in 1961 in a CaF2:Eu2+ crystal under laser excitation by Wolfgang Kaiser [11]. The first two-photon excitation microscope was developed by Winfried Denk, who combined the concept of two-photon absorption with the use of a Ti:sapphire laser with a pulse width of approximately 100 femtoseconds and a repetition rate of about 80 MHz [12].

Second harmonic generation was first demonstrated by Peter Franken et al. in 1961 [13]. In 1974, Hellwarth and Christensen first combined SHG and microscopy by imaging SHG signals from polycrystalline ZnSe [14]. In 1986, Freund published the first biological SHG images, which revealed the orientation of collagen fibers in rat tail tendon [15]. By using a short-pulse laser such as a femtosecond laser, SHG microscopy can achieve high axial and lateral resolution comparable to that of confocal microscopy without having to use pinholes. SHG microscopy is particularly suited to studying collagen, which is found in most load-bearing tissues such as human cornea and skin.
Compared to fluorescence microscopy, SHG does not involve excitation of molecules to real electronic states, so the molecules do not suffer the effects of phototoxicity or photobleaching [16]. This advantage makes it particularly useful for studying live cells and tissues.

The first OCT system was developed by Huang et al. in 1991 [17]. The ex-vivo images of the human retina and coronary arteries with a resolution of 15 μm acquired by this early-stage OCT system demonstrated the capability of OCT to image transparent as well as highly scattering materials. The first in-vivo OCT images, displaying the human retina, were published by Fercher et al. [18] and Swanson et al. [19] in 1993. In 1996, Carl Zeiss Meditec developed the first commercial ophthalmic OCT instrument [20]. Today, OCT with micrometer resolution and cross-sectional imaging capability has become a powerful biomedical imaging technique. It is particularly suited to ophthalmic applications. By using a longer wavelength near 1300 nm, the applications of OCT have been expanded into other medical areas including gastroenterology, hepatology, gynaecology, pulmonology, urology and cardiology.

1.2 Problem statement and motivation

Different imaging modalities, such as OCT and MPM, detect different contrast mechanisms, such as scattering, TPEF and SHG, which provide complementary sets of information about biological tissues. MPM and OCT also provide complementary resolution and penetration depth. The focus of this thesis is to develop a multimodal MPM/OCT system for clinical applications.

For the hardware design, the integrated system uses the same laser source and sample arm for both MPM and OCT. The MPM part had already been built by Dr. Tang, so only a reference arm and a spectrometer are needed for the OCT part. The spectrometer should be carefully designed so that the axial depth-dependent sensitivity fall-off [21-22] is reduced and the source-limited resolution can be achieved.
Since MPM and OCT use different data acquisition mechanisms, specific software programs are designed to control the two modalities separately. For MPM, an X scanner and a Y scanner are used for en-face image acquisition, and an additional Z scanner is used for depth scanning to achieve three-dimensional (3-D) image acquisition. The movements of these three scanners need to be synchronized with the two-channel signal detection, TPEF and SHG. For OCT, the depth profile is acquired by Fourier transforming the interference fringes detected by the one-dimensional (1-D) CCD camera. Together with an X scanner, cross-sectional images can be acquired, and with an additional Y scanner, 3-D images can be acquired. The control of the two scanners also needs to be synchronized with a frame grabber, which is used for OCT data acquisition. Compared to MPM, OCT requires much more complicated real-time data processing. The non-uniform Fourier transform (NUFFT) algorithm [22] developed by a former Master's student is applied in the OCT program.

Furthermore, this multimodal system is applied to biomedical applications. One application is to measure the refractive index (RI) of biological tissues. The RI of biological tissue is a key parameter for characterizing light-tissue interactions [23-24]. Knowledge of the tissue RI plays an important role in many biomedical applications. For example, in optical diagnostics, malignant tissues can be distinguished from normal tissues by measuring and comparing their RI [25-26]. In laser eye treatments, the RI and thickness of the cornea are important indicators of corneal state, such as corneal hydration and intraocular pressure, before and after laser surgery [27-30]. In this study, cornea is imaged extensively. Other samples such as onion, mouse skin and cells in microfluidic chips are also imaged to demonstrate the capability of the system.
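The Fourier-transform step at the heart of SDOCT depth-profile reconstruction can be illustrated with a toy simulation. This is a minimal sketch assuming a single reflector, ideal uniformly spaced wavenumber sampling, and a plain FFT; the actual program additionally uses numerical dispersion compensation and the NUFFT mentioned above, and all parameter values here are illustrative:

```python
import numpy as np

# Simulate the spectrum recorded by the 1-D camera for a single reflector
# and recover its depth with a Fourier transform.
n = 2048                                       # camera pixels (illustrative)
k = np.linspace(2*np.pi/0.9, 2*np.pi/0.7, n)   # wavenumber (rad/um), ~700-900 nm band
z0 = 150.0                                     # reflector path-length mismatch (um)
r = 0.1                                        # reflector amplitude reflectivity

source = np.exp(-(((k - k.mean()) / (0.1 * k.mean())) ** 2))  # source envelope
spectrum = source * (1 + r**2 + 2*r*np.cos(2*k*z0))           # interference fringes

fringes = spectrum - source * (1 + r**2)       # background (reference) subtraction
ascan = np.abs(np.fft.fft(fringes))            # depth profile (A-scan)
depth = np.fft.fftfreq(n, d=k[1] - k[0]) * np.pi  # bin-to-depth mapping (um)
peak = np.argmax(ascan[: n // 2])
print(f"recovered reflector depth: {depth[peak]:.1f} um")
```

The fringe frequency in k is proportional to the reflector depth, which is why a single FFT turns the spectrum into an A-scan; in a real spectrometer the pixels are uniform in wavelength rather than wavenumber, which is what the NUFFT corrects for.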
1.3 Organization of the thesis

Chapter 2: The theory of MPM and OCT, starting from the fundamentals of TPEF, SHG and interferometry, will be examined. System specifications such as resolution, imaging depth, and dispersion effects will be discussed.

Chapter 3: The layout of the multimodal MPM/OCT system and its main hardware components, such as the light source, the spectrometer, the sample arm, the photon detectors and the data acquisition unit, will be presented.

Chapter 4: The system control software as well as MPM/OCT data processing will be introduced.

Chapter 5: Performance characterization of the system, including resolution, lateral imaging range, sensitivity fall-off and imaging speed, will be presented.

Chapter 6: The imaging capability of the MPM/OCT system will be evaluated and demonstrated on biological samples.

Chapter 7: A novel method of RI and thickness measurement based on the multimodal MPM/OCT system will be demonstrated. The precision of the system in RI and thickness measurement is first evaluated with three standard samples: water, air and cover glass. Afterwards, experiments on fish cornea will be presented to show the system's ability to determine the RI and thickness of biological tissues.

Chapter 8: The final chapter will conclude this thesis and discuss prospective future work.

Chapter 2 Principles of MPM and OCT

A solid background in the theory of MPM and OCT is necessary for the design and optimization of the combined system. This chapter gives a brief description of the principles of MPM and OCT, including TPEF, SHG and interferometry. In addition, a set of typical parameters used to characterize and gauge the performance of the two imaging mechanisms, such as axial resolution, lateral resolution and imaging range, will also be discussed.
2.1 Principle of MPM

MPM is a form of laser-scanning microscopy that uses pulsed near-infrared light to excite fluorescence and SHG only within a thin, raster-scanned focal plane and nowhere else. Since its emergence around two decades ago, MPM has found a niche in the world of biological imaging as one of the important noninvasive means of imaging tissues [31]. A typical schematic of an MPM setup [32] is shown in Figure 2.1.

The laser beam is produced by a pulsed laser and passes through a dispersion pre-compensation unit [33]. Afterwards, it is raster scanned by two computer-driven galvanometer mirrors in an XY en-face mode. The scanned laser beam is expanded by two lenses to fill the back aperture of the objective. The expanded beam is then delivered and focused within the sample by the objective lens. Two types of signals, TPEF and SHG, are generated simultaneously in the sample. Both are collected by the same objective lens in the backward direction and then separated from the excitation light by a dichroic mirror. After that, they are separated from each other by a second dichroic mirror and detected by two photomultiplier tubes, respectively. By raster scanning the two galvanometer mirrors, an en-face image can be generated in the computer. For Z scanning, the objective lens is mounted on a piezo scanner, which translates the objective lens up and down to acquire stacks of en-face images in three dimensions.

Figure 2.1: Schematics of a typical MPM system. PMT: photomultiplier tube. The red lines with arrows represent the laser beam from the source. The purple line with an arrow represents the beam filtered by dichroic mirror A, which is a combination of TPEF and SHG. The green line with an arrow denotes the TPEF signal. The blue line with an arrow denotes the SHG signal. The black lines are electrical wires.
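The raster-scanning scheme described above can be sketched as a pair of drive waveforms: a fast sweep on the X galvanometer repeated once per line, and a slow staircase on the Y galvanometer stepping once per line. The pixel counts and voltage amplitude below are illustrative placeholders, not the actual scanner settings:

```python
import numpy as np

# Sketch of raster-scan drive waveforms for the two galvanometer mirrors.
n_x, n_y = 256, 256        # pixels per line, lines per frame (illustrative)
amp = 1.0                  # drive amplitude (V), illustrative

x_line = np.linspace(-amp, amp, n_x)                  # one fast-axis sweep
x_wave = np.tile(x_line, n_y)                         # repeated every line
y_wave = np.repeat(np.linspace(-amp, amp, n_y), n_x)  # slow-axis staircase

# Each sample pair (x_wave[i], y_wave[i]) addresses one pixel of the
# en-face image; a piezo Z step between frames builds the 3-D stack.
assert x_wave.shape == y_wave.shape == (n_x * n_y,)
```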
2.1.1 TPEF

Fluorescence is a process of photon emission by a molecule or fluorophore, which consists of three stages: excitation, internal conversion and emission. Figure 2.2 shows the energy diagrams of one-photon excitation fluorescence (OPEF) and TPEF. TPEF differs from OPEF in the first stage of the process. In TPEF, a fluorophore transitions from its ground state to an excited state by simultaneously absorbing two photons. Specifically, the fluorophore is excited by one photon to a virtual intermediate state and then excited by a second photon to the excited state (within 15 fs) [34-35]. The two photons absorbed in TPEF have approximately half the energy and double the wavelength of the photon required for OPEF.

Figure 2.2: Jablonski diagrams [36] of OPEF and TPEF. (a) OPEF: the blue arrow denotes the process of excitation, the black arrow denotes internal conversion and the green arrow denotes fluorescence emission. (b) TPEF: the red arrows denote the process of excitation.

The probability of two-photon absorption occurring in a fluorophore is extremely low. Thus, a high photon flux density is required for TPEF. Although TPEF was predicted by Maria Goppert-Mayer in 1931, the first investigations of this phenomenon only became possible with the invention of lasers. Thanks to the advent of pulsed lasers, the rate of TPEF is increased by around 100,000-fold compared to a continuous-wave laser operated at the same average power level [10].

Because two photons are absorbed at the same time during the excitation of the fluorophore, the probability of TPEF increases quadratically with the excitation intensity [37]. Thus, TPEF is much more likely to happen where the laser beam is tightly focused than where it is more diffuse. As a result, the excitation is restricted to the small focal volume, which serves to reject the out-of-focus signal.
This is the key advantage of the TPEF microscope compared to OPEF microscopes such as the confocal fluorescence microscope, which needs a pinhole to reject out-of-focus fluorescence.

2.1.2 SHG

SHG is another important type of nonlinear optical process responsible for forming images in MPM. In general, the nonlinear polarization of a material can be expressed as [38],

P = χ^(1) E + χ^(2) E^2 + χ^(3) E^3 + ...   (2.1)

Here P is the induced polarization, χ^(n) is the nth-order nonlinear susceptibility, and E is the electric field vector. The first term denotes normal absorption and reflection of light. The second term includes SHG. The third term includes third harmonic generation (THG) and TPEF [38].

Unlike TPEF, SHG is a parametric process which does not involve photon absorption and emission. As shown in Figure 2.3, the sample is illuminated by an intense laser and generates a nonlinear polarization, which in turn produces a coherent wave at exactly twice the incident frequency [38], the SHG. SHG occurs only when the electric field of the excitation light is strong enough. Moreover, because it is emitted coherently, SHG requires the molecules to be non-centrosymmetric.

Figure 2.3: Illustration of the process of SHG. The interaction between the photons and the nonlinear material transforms the initial photons into new ones with twice the energy, and therefore twice the frequency, of the original ones.

The primary tissue constituent responsible for SHG is collagen, which is the most abundant extracellular structural protein in the vertebrate body, making up around 6% of muscle tendons [39]. Collagen, in the form of elongated fibrils, is mainly found in bone, cartilage, skin, blood vessels, tendon and cornea [40]. Detailed characterization of collagen is of great importance since structural modifications of the collagen fibrils are usually associated with various physiological processes such as diabetes, aging and cancer [41].
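Both TPEF and SHG scale quadratically with the excitation intensity, which is what confines the signal to the focal volume. A minimal numerical sketch of this localization, using illustrative Gaussian-beam parameters (not the thesis optics):

```python
import numpy as np

# Sketch: fraction of nonlinear (intensity-squared) signal generated near the
# focus of a Gaussian beam. Beam waist and scan range are assumed values.
w0 = 0.5e-6                # beam waist (m), assumed
lam = 800e-9               # excitation wavelength (m)
zr = np.pi * w0**2 / lam   # Rayleigh range of the Gaussian beam

z = np.linspace(-50e-6, 50e-6, 100001)   # axial positions (m)
w = w0 * np.sqrt(1 + (z / zr)**2)        # beam radius vs z
I = 1.0 / w**2                           # on-axis intensity at constant power

# Per-plane signal: one-photon ~ integral of I over the plane (constant with z),
# two-photon ~ integral of I^2 over the plane ~ 1/w^2 (peaked at the focus).
one_photon = I * w**2
two_photon = I**2 * w**2

frac_2p = two_photon[np.abs(z) < zr].sum() / two_photon.sum()
frac_1p = one_photon[np.abs(z) < zr].sum() / one_photon.sum()
print(frac_2p, frac_1p)  # most two-photon signal comes from within one Rayleigh range
```

The one-photon signal is generated equally in every plane (hence the need for a confocal pinhole), while the quadratic process concentrates its signal within roughly one Rayleigh range of the focus.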
There are several advantages of MPM compared with traditional one-photon excitation fluorescence microscopy (including confocal fluorescence microscopy). Firstly, TPEF and SHG are restricted to a tiny focal volume, which eliminates the confocal aperture that would otherwise severely attenuate the signal intensity. Secondly, this localized excitation also greatly reduces phototoxicity because photodamage is largely confined to the small focal volume in the sample. Thirdly, by using a laser at longer (near-infrared) wavelengths, deeper penetration depths are achieved as a result of less scattering within the sample. Furthermore, by combining TPEF and SHG, MPM can provide more biochemically specific contrasts, revealing more details of the sample.

2.1.3 MPM resolution

Resolving power is one of the most important characteristics of an imaging system. For MPM, the resolving power can be described by the lateral and axial resolutions, which are defined as the minimum spacing at which two points can be distinguished by the imaging system. Figure 2.4 illustrates the resolution of MPM. The theoretical estimate of the lateral resolution of MPM can be derived from the Rayleigh criterion [42],

Δr = 0.61 λ / NA   (2.2)

where Δr is the lateral resolution of MPM, λ is the center wavelength of the illuminating source, and NA is the numerical aperture of the objective lens used for imaging.

The axial resolution of MPM is referred to as the focus depth of the illuminating beam, as shown in Figure 2.4 (b). For a high-NA objective, the axial resolution can be estimated as ~3-4 times the lateral resolution.

Figure 2.4: Illustration of the MPM resolution. (a) Focal volume formed by the illuminating Gaussian beam. (b) Beam width and focus depth of the illuminating beam. (c) Focal spot of the illuminating beam. (d) Intensity profile of the focal spot along the x axis.
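Equation 2.2 can be evaluated directly for the source and objective used later in this thesis (800 nm center wavelength, 0.8-NA objective); a small sketch, with the ~3-4x axial rule of thumb applied:

```python
# Sketch: Rayleigh-criterion lateral resolution (Equation 2.2) and the
# ~3-4x estimate for the axial resolution quoted above.
def mpm_lateral_resolution(wavelength_nm, na):
    """Rayleigh-criterion lateral resolution, 0.61 * lambda / NA, in nm."""
    return 0.61 * wavelength_nm / na

# System values from this thesis: 800 nm Ti:sapphire source, 0.8-NA objective.
lateral = mpm_lateral_resolution(800, 0.8)
axial_low, axial_high = 3 * lateral, 4 * lateral
print(f"lateral ~ {lateral:.0f} nm, axial ~ {axial_low:.0f}-{axial_high:.0f} nm")
```

This gives a sub-micron lateral resolution, consistent with the high-resolution en-face imaging role of MPM in the combined system.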
2.2 Principle of OCT

2.2.1 Michelson interferometer

OCT is an interference technique. The most typical optical configuration for interferometry is the Michelson interferometer. Figure 2.5 shows a simplified version of the Michelson interferometer, which consists of a laser source, a beam splitter, two mirrors and a detector. The laser beam emitted from the source is split by the beam splitter into the two arms of the interferometer. The reflected beams from the sample and reference arms are recombined by the splitter and then projected onto the detector, which records the interference pattern created by the two superimposed beams.

Figure 2.5: Optical layout of the Michelson interferometer. The sample mirror is fixed while the reference mirror is movable along the optical axis. ls: the length of the sample arm. lr: the length of the reference arm.

The detected signal is given by [22, 43]:

I(k) = E_r^2 + E_s^2 + 2 E_r E_s cos(2 k Δl)   (2.3)

where I is the intensity of the detected signal, k is the angular wavenumber, Δl is the path length mismatch between the two arms, and E_r and E_s are the amplitudes of the reflected electric fields from the reference and sample mirrors, respectively.

In this expression, the first two terms are DC terms, which can be removed during data processing. The remaining term, called the cross-correlation term (CCT), contains the interference information between the two reflected beams. The CCT has a frequency determined by the path length difference Δl. A change in Δl will increase or decrease the modulation frequency of the CCT. By analyzing the frequency change of the CCT, the path length mismatch Δl can be determined. This allows reflectivity changes from different depths within a sample of interest to be determined.

2.2.2 OCT

OCT is developed from the Michelson interferometer based on the principle of low coherence interferometry.
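The statement that the CCT modulation frequency tracks Δl can be checked numerically; a sketch with illustrative values (the wavenumber range and field amplitudes are assumptions, not system parameters):

```python
import numpy as np

# Sketch: the cross-correlation term of Equation 2.3, 2*Er*Es*cos(2*k*dl),
# oscillates faster in k as the path-length mismatch dl grows.
k = np.linspace(7.5e6, 8.2e6, 4096)   # wavenumber axis (1/m), illustrative

def cct(dl, Er=1.0, Es=0.1):
    """Cross-correlation term for a path-length mismatch dl (m)."""
    return 2 * Er * Es * np.cos(2 * k * dl)

def cycles(signal):
    # count oscillation periods via zero crossings
    return np.sum(np.diff(np.sign(signal)) != 0) / 2

c1 = cycles(cct(100e-6))
c2 = cycles(cct(200e-6))
print(c1, c2)  # doubling dl roughly doubles the fringe frequency
```

Counting the fringe cycles over a fixed k range is exactly the frequency analysis by which Δl, and hence depth, is recovered.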
A typical spectral domain OCT (SDOCT) setup consists of an interferometer, a low-coherence broad-bandwidth light source and a highly sensitive spectrometer. As shown in Figure 2.6, light is split into, and recombined from, the reference and sample arms. The interference wave is then diffracted by a diffraction grating and redirected to different locations on a CCD camera according to wavelength. The intensity of the interference fringes detected by the CCD camera can be expressed as follows [22, 44],

I(k) = S(k) [ R_r + Σ_i R_si + 2 Σ_i sqrt(R_r R_si) cos(2 k l_i) + Σ_{i≠j} sqrt(R_si R_sj) cos(2 k l_ij) ]   (2.4)

Here S(k) is the spectral intensity distribution of the light source, R_r is the reflectivity of the reference mirror, R_si and R_sj are the reflectivities of the ith and jth layers within the sample (as shown in Figure 2.7), l_ij is the path length mismatch between layers i and j, and l_i is the path length difference between the ith layer and the reference mirror.

The first and second terms in the square bracket of Equation 2.4 are non-interferometric and contribute to a DC term, which can be removed by background subtraction in post-processing. The last term is due to autocorrelation between different layers of the sample, which is usually very small and can be neglected. The third term contains the interference information between the sample and the reference mirror and can be used to determine the reflectivity of different layers within the sample.

Figure 2.6: Typical optical setup of the SDOCT system. CCD: charge-coupled device. The spectrometer consists of a grating, a focusing lens and a CCD camera.

Figure 2.7: Reflection of the laser beam in a sample of multiple layers. The laser beam is reflected multiple times at the interfaces of adjacent layers. The interference of the beams reflected from different layers gives rise to the autocorrelation term in Equation 2.4.

The depth profile of the sample can be retrieved from the detected interference fringe by performing a rescaling and an inverse Fourier transform (FT).
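The rescaling and inverse-FT reconstruction just described can be sketched end-to-end for a single simulated reflector. All values below are illustrative; the thesis does this processing in MATLAB, and a Python sketch is shown here:

```python
import numpy as np

# Sketch of the SDOCT reconstruction pipeline: simulate a fringe sampled
# evenly in wavelength, rescale it to k, inverse-FFT to obtain the A-line.
N = 1024
lam = np.linspace(740e-9, 860e-9, N)       # wavelength axis (m), illustrative
k = 2 * np.pi / lam                        # nonuniform k axis (decreases with lam)
lam0, dlam = 800e-9, 60e-9
S = np.exp(-(((lam - lam0) / dlam) ** 2))  # Gaussian source spectrum

depth = 150e-6                                   # single reflector depth (m)
fringe = S * (1 + 0.2 * np.cos(2 * k * depth))   # simplified Eq. 2.4, one layer

# Rescale: interpolate the fringe onto a uniform k grid before the FFT.
k_uniform = np.linspace(k.min(), k.max(), N)
fringe_k = np.interp(k_uniform, k[::-1], fringe[::-1])

aline = np.abs(np.fft.ifft(fringe_k - fringe_k.mean()))
dk = k_uniform.max() - k_uniform.min()
z = np.arange(N) * np.pi / dk              # depth axis: bin m maps to m*pi/dk

peak_idx = 10 + np.argmax(aline[10 : N // 2])   # skip residual DC/envelope bins
peak_z = z[peak_idx]
print(peak_z)  # peak lands near the simulated 150 um reflector
```

The A-line peak recovers the reflector depth, illustrating why the λ-to-k rescaling step matters: without it the fringe is chirped and the FT peak broadens.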
As shown in Figure 2.8, the detected signal is rescaled from the λ domain to the k domain and then transformed by inverse FT from the k domain to the z domain, resulting in the following equation (neglecting the DC and autocorrelation terms) [22, 44],

I(z) = Γ(z) ⊗ Σ_i sqrt(R_r R_si) δ(z ± 2 l_i)   (2.5)

where Γ(z) is the FT of the spectrum of the broadband light source, l_i represents the path length difference between the reference arm and the depth location of the reflection within the sample of interest, and ⊗ denotes convolution.

Figure 2.8: SDOCT signal reconstruction via FT. (a) The interference fringe is diffracted by the grating and then delivered to the 1-D CCD camera, where the A-line signal is recorded. (b) The A-line signal detected by the CCD camera in the λ domain. (c) The A-line signal is rescaled to the k domain according to k = 2π/λ. (d) The A-line signal is Fourier transformed from the k domain to the z domain. The result forms one A-line in the final OCT image.

In SDOCT, the reference mirror is immobilized. A FT of the spectral measurement produces an A-line profile of the sample in the depth direction. By scanning the beam in the X direction, a 2-D cross-sectional image can be obtained. By further scanning the beam in the Y direction, a stack of cross-sectional images can be generated, which can be reconstructed into a 3-D image of the sample. The X and Y scanning are realized with two galvanometer mirrors driven by the computer (Figure 2.6).

2.2.3 OCT resolution

The lateral and axial resolutions of SDOCT are decoupled from one another. The former is a function of the optics, similar to the lateral resolution of MPM. The axial resolution of SDOCT can be derived from Equation 2.5. Since convolution with a delta function simply shifts the origin of a function, Equation 2.5 can be rewritten as follows,

I(z) = Σ_i sqrt(R_r R_si) Γ(z ± 2 l_i)   (2.6)

Thus, a peak in the z domain can be expressed as the sum of a set of Γ(z) shifted by 2 l_i. Therefore, the narrowest peak detected in the z domain is
Γ(z) itself, which is the FT of the spectrum of the light source as mentioned above. For simplicity, we shall assume the spectrum of the light source has a Gaussian shape (as shown in Figure 2.9 (a)). Since the Gaussian is a self-reciprocal function, the FT of a Gaussian function (Figure 2.9 (b)) has the same form as the original function. The full width at half maximum (FWHM) of Γ(z) is given as follows [45],

Δz = l_c = (2 ln2 / π) (λ0^2 / Δλ)   (2.7)

Here λ0 is the center wavelength and Δλ is the bandwidth of the light source. l_c is the coherence length of the source, which is determined by the center wavelength and bandwidth of the source. The OCT axial resolution is determined by the coherence length of the source.

Figure 2.9: The source spectrum and its Fourier transform. (a) The source spectrum with a Gaussian shape in the k domain. (b) The Fourier-transformed source spectrum in the z domain also has a Gaussian shape.

2.2.4 Image depth

Notice that the period of the spectral oscillation in the k domain of the measured signal is proportional to the path length difference Δl. A reflecting interface with a larger Δl value (deeper depth) will produce a higher-frequency sinusoidal spectral oscillation than an interface with a smaller Δl value (shallower depth). However, there is a limit on the detectable path length difference due to the Nyquist criterion [46] that the sampling rate must be larger than twice the maximum frequency of oscillation. Thus, the maximum path length difference lmax, also called the image depth of SDOCT, is determined by the sampling rate, which in turn is determined by the spectrometer. The image depth of SDOCT can be expressed as a function of the pixel spacing of the CCD camera in the z domain,

l = N δz / 2   (2.8)

where l is the image depth, δz is the pixel spacing of the CCD camera in the z domain, and N is the pixel number of the CCD camera.
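Plugging this system's source parameters (800 nm center wavelength, 120 nm bandwidth) and the 1024-pixel line camera into Equation 2.7 and the depth relation above gives ballpark numbers; a sketch, taking δz at half the axial resolution as the source-limited case (the measured values are characterized in Chapter 5):

```python
import math

# Sketch: numeric check of Eq. 2.7 and of l = N*dz/2 with dz = axial_res/2.
lam0, dlam, N = 800e-9, 120e-9, 1024

# Coherence length / source-limited axial resolution, Eq. 2.7.
dz_axial = (2 * math.log(2) / math.pi) * lam0**2 / dlam

# Image depth at the source-limited resolution: l = N * (dz_axial/2) / 2.
l_max = N * dz_axial / 4
print(f"axial resolution ~ {dz_axial*1e6:.2f} um, image depth ~ {l_max*1e3:.2f} mm")
```

The broad 120 nm bandwidth buys an axial resolution of a few micrometers, at the cost of a maximum image depth well under a millimeter, exactly the trade-off derived next.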
To meet the Nyquist criterion, δz must be equal to or less than half of the axial resolution, that is,

δz ≤ Δz / 2   (2.9)

From Equations 2.8 and 2.9, we get,

l ≤ N Δz / 4   (2.10)

Equation 2.10 indicates a trade-off between the image depth and the axial resolution: a larger image depth comes at the cost of a lower axial resolution. Therefore, to reach the source-limited axial resolution, the maximum image depth we can get is,

lmax = N Δz / 4   (2.11)

2.2.5 Dispersion effect

Dispersion is the phenomenon in which the phase velocity of a wave depends on its frequency as it propagates through a dispersive medium. For OCT, dispersion mismatch between the reference and sample arms causes different frequencies to propagate at different velocities, degrading the resolution of the images [45, 47]. Consequently, any sharp features of the sample will be blurred in the images and the image quality will be reduced. Therefore, balancing the dispersion between the two arms is critical to achieving high image quality. Typically, there are two approaches to compensating the dispersion: one is a software method such as numerical dispersion compensation; the other is a hardware technique such as adding dispersive materials in the optical path to match the dispersion between the two arms. The software compensation method is done during image processing, which increases the total data processing time and reduces the imaging speed of OCT. Both methods will be tested in our work.

Chapter 3 Hardware

Component selection is critical in the development of a high-quality and reliable MPM/OCT system. This chapter gives a detailed description of the layout of the integrated MPM/OCT system and the general hardware components, including the light source, the sample and reference arms, the photon detector, the spectrometer and the data acquisition unit. Since the MPM part was previously built by Dr.
Shuo Tang, only the reference arm and the spectrometer of OCT part need to be designed to complete the integrated system.  The schematic of the MPM/OCT system is shown in Figure 3.1. A femtosecond Ti:sapphire laser is used as the light source for both the MPM and OCT imaging. The emitted laser light passes through a dispersion pre-compensation unit which consists of two prisms. The laser beam is then split by a 50/50 ratio beam splitter into two arms, the OCT reference arm and the shared sample arm for both OCT and MPM. In the reference arm, the laser light is reflected by a reference mirror. A neutral density filter (NDF) is used to adjust the reflected power. In the sample arm, the laser beam is raster scanned by two galvanometer mirrors in an en-face mode. Afterwards, the scanned laser beam is expanded by two lenses and then focused by an objective lens onto the sample.  The focused laser beam in the sample will generate three types of signals simultaneously, TPEF and SHG for MPM imaging and the backscattered light for OCT imaging. TPEF and SHG are separated from the excitation source by a dichroic mirror and then selected by different bandpass filters to the TPEF detector and the SHG detector, respectively. The backscattering signal for OCT is recombined with the beam from the reference arm and 25  then delivered to a custom-built spectrometer which consists of a transmission grating, a pair of focusing lenses and a linescan CCD camera. The en-face mode scanning by the XY galvo mirrors is shared by both MPM and OCT. The depth scan of MPM is achieved by a piezo objective scanner driven by a signal from the computer. For SDOCT, no depth scanning is required.  Figure 3.1: Schematics of combined MPM/OCT system. F1, F2: filters, used to select the TPEF and SHG signals. NDF: neutral density filter, used to adjust the power of the beam from the reference arm.  
3.1 Light source

As mentioned in Chapter 2, the probability of TPEF and SHG is quite low, and a high photon flux density is required to generate sufficient TPEF and SHG signals. Although neodymium-doped yttrium aluminium garnet (Nd:YAG) lasers are possible for MPM, a pulsed titanium-sapphire (Ti:sapphire) laser is preferred due to its short pulse duration. The Ti:sapphire laser (Fusion Pro 400, Femtolasers) used in our MPM/OCT system has a center wavelength of 800 nm, a spectral bandwidth of 120 nm, and a pulse width of ~10 fs.

3.2 Pre-compensation unit

The schematic of the pre-compensation unit is shown in Figure 3.2. It basically consists of two Brewster prisms. This unit is used to pre-compensate the dispersion later accumulated from the objective lens and other optics in the beam delivery path.

Figure 3.2: Illustration of the pre-compensation unit. P1 and P2 are two fused silica Brewster prisms.

3.3 Sample arm and reference arm

The sample arm (as shown in Figure 3.3) consists of two galvanometer mirrors (XY) (Cambridge Technology), a beam expander and an upright commercial microscope (BX51W1, Olympus). The en-face scanning of the laser beam is achieved by the two galvanometer mirrors. The beam expander, consisting of two lenses, is used to couple the light into the microscope so that it fills the back aperture of the objective. The objective lens is mounted on a piezo scanner (MIPOS 500, Piezosystem Jena). In this system, the sample stage is fixed to reduce sample motion artifacts, while the objective lens moves up and down when driven by the piezo scanner. Two objective lenses can be selected. For MPM imaging, a water-immersion 40× objective lens (LUMPlanFL N, Olympus) with an NA of 0.8 is used to generate high-resolution images, whereas a low-NA 4× (Plan N, Olympus, NA 0.1) or 10× (MPlanFL N, Olympus, NA 0.3) objective is preferred for OCT imaging to acquire images with a large field-of-view.

Figure 3.3: Schematics of the sample arm.
L1 and L2: the lenses used for the beam expander. The raster-scanned beam is expanded by a beam expander consisting of two lenses with different focal lengths. The expanded beam is then delivered to an upright commercial microscope. A piezo scanner is mounted at the bottom of the microscope to control the depth scanning of MPM. While acquiring OCT images, the piezo scanner is normally turned off.

The reference arm consists of a fixed reference mirror and an NDF, which is used to adjust the reflected power. In general, the reflected beam from the reference arm has a much higher power than the backscattered beam from the sample arm. The strong intensity from the reference arm will usually saturate the highly sensitive CCD detector. Thus, the tunable NDF in the beam path serves to attenuate the reference power.

The sample and reference arms contain different optics, which gives rise to dispersion mismatch between the two arms. Two possible means can be used to compensate the dispersion mismatch: software and hardware dispersion compensation. For the former, no extra hardware is required since dispersion compensation is done through data post-processing. For the latter, a dispersion balancing unit (shown in Figure 3.4) consisting of two triangular prisms is added to the optical path of the reference arm to eliminate the dispersion mismatch. The path length of light passing through the two prisms can be adjusted until the dispersion of the reference arm matches that of the sample arm.

Figure 3.4: Schematics of the reference arm. A dispersion compensation unit consisting of two triangular prisms is added to the optical path. Each prism is movable so that the dispersion of the reference arm can be manually adjusted to match the sample arm.

3.4 MPM photon detection

The backward signals from the sample include TPEF, SHG and residual laser light.
Before reaching the detectors, a dichroic mirror (FF670-SDi01, Semrock; dichroic A in Figure 3.1) is used to separate the two signals from the excitation source. Afterwards, the SHG signal centered at 400 nm is reflected by a secondary dichroic mirror (450DCXRU, Chroma; dichroic B in Figure 3.1) and further filtered by a bandpass filter (Semrock), while the TPEF signal at longer wavelengths (~500-600 nm) propagates through the dichroic mirror and is then selected by a bandpass filter (Semrock) before reaching the detector. Two photomultiplier tubes (PMT) (Hamamatsu) are used as the photon detectors. Both the TPEF and SHG signals are amplified by the PMTs and then transformed into intensity information in the final image.

3.5 OCT spectrometer

The spectrometer (as shown in Figure 3.5) of an SDOCT system typically consists of a transmission grating (Wasatch Photonics, 1200 lines/mm), a focusing lens and a CCD camera (AViiVA SM2 CL, E2V, 1024 pixels, 14×14 μm2 pixel size). The spectrometer is a key unit in SDOCT since it affects the axial resolution as well as the image depth. To design a high-quality spectrometer, all three components should be carefully selected. In our system, the transmission grating has a groove density of 1200 lines/mm while the CCD camera has 1024 pixels with a pixel size of 14 µm by 14 µm. Given these two components, a focusing lens with a suitable focal length needs to be determined to achieve the source-limited axial resolution of SDOCT.

To determine the focal length f of the focusing lens, we start from the grating equation, which is given as [48],

sin θi + sin θd = g λ   (3.1)

Here θi is the incident angle, θd is the diffraction angle at wavelength λ, and g is the groove density of the grating (as shown in Figure 3.5). Taking the derivative of Equation 3.1 with respect to λ, we get,

dθd / dλ = g / cos θd   (3.2)

As shown in Figure 3.5, the CCD is placed along the x axis at a distance f from the focusing lens.
For a small change in the diffraction angle, the corresponding change in the coordinate of the focused spot is,

dx = f dθd   (3.3)

Based on Equations 3.2 and 3.3, we have,

dx = f g dλ / cos θd   (3.4)

Since all the wavelengths are linearly dispersed and distributed linearly along the CCD screen, we have,

L / Δλ = f g / cos θd   (3.5)

Here L is the length of the linear CCD detector array and Δλ is the spectral range coverage of the spectrometer. Given the pixel number N and pixel spacing δx of the CCD, Δλ can be expressed as,

Δλ = N δλ = N δx cos θd / (f g)   (3.6)

δλ can be written as a function of the pixel spacing of the CCD camera in the z domain [45],

δλ = λ0^2 / (2 N δz)   (3.7)

Here δz is the pixel spacing in the z domain as mentioned in Chapter 2. Thus, using Δz/2 to approximate δz in Equation 3.7 and substituting the resulting δλ into Equation 3.4, the focal length f can be estimated, which is around 39 mm. Since it is hard to find a lens with a focal length exactly equal to 39 mm, we use a pair of lenses with an effective focal length of 37.5 mm to approach the ideal focal length.

Figure 3.5: Schematics of the spectrometer and the first-order diffraction. i: incident angle. d: diffraction angle of the first order. f: focal length of the focusing lens. L: length of the CCD detector array. λmin: the shortest wavelength detected by the CCD camera. λmax: the longest wavelength detected by the CCD camera. λ0: the center wavelength of the source (800 nm). A pair of air-spaced lenses with an effective focal length f is used to focus the diffracted beam onto the 1-D CCD screen.

3.6 Scanning control and data acquisition

The data acquisition and scanning control are achieved with a data acquisition board (PCIe-6363, National Instruments) and a frame grabber (PCI-1426, National Instruments). As shown in Figure 3.6, both the DAQ board and the frame grabber are integrated in the computer. For MPM imaging, the DAQ board generates three types of analog outputs. Outputs X and Y are used to drive the XY galvo mirrors for en-face scanning, whereas output Z is used to drive the piezo scanner for depth scanning.
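The ~39 mm focal-length estimate from Section 3.5 can be checked with a rough calculation. The diffraction geometry below is an assumption (near-Littrow incidence at 800 nm); only the grating density, camera geometry and source values come from this chapter:

```python
import math

# Sketch: focal-length estimate for the spectrometer focusing lens.
g = 1200e3            # grating groove density (lines/m), from this chapter
N, dx = 1024, 14e-6   # camera pixel count and pixel pitch (m), from this chapter
lam0, dlam_src = 800e-9, 120e-9   # source centre wavelength and bandwidth

# Source-limited z-domain pixel spacing: half the ~2.35 um axial resolution.
dz_axial = (2 * math.log(2) / math.pi) * lam0**2 / dlam_src
dlam_pix = lam0**2 / (2 * N * (dz_axial / 2))   # required wavelength per pixel

# Angular dispersion from the grating equation: d(theta_d)/d(lambda) = g/cos(theta_d).
theta_d = math.asin(g * lam0 / 2)   # ASSUMED near-Littrow diffraction angle
f = dx * math.cos(theta_d) / (g * dlam_pix)
print(f"f ~ {f*1e3:.1f} mm")
```

Under these assumptions the estimate lands near the 39 mm quoted in the text, which the 37.5 mm effective-focal-length lens pair approximates.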
The analog signals are delivered to the scanners via two external interfaces (E1 and E2, National Instruments). The detected signals, including TPEF and SHG, are also delivered through the external interfaces to the DAQ board. For OCT imaging, no depth scanning is needed. The shared XY scanners are used to steer the beam across the sample to acquire multiple A-lines for OCT image reconstruction. The interference fringes are recorded by the CCD camera which is connected to the frame grabber within the computer.  Figure 3.6: Schematics of the data acquisition and scanning control units. DAQ: the data acquisition board. FB: the frame grabber board. PS: the piezo scanner. E1 and E2: the external interfaces. X, Y, Z: waveform output interface. C1 and C2: two input channels for TPEF signal and SHG signal respectively. 35  To generate high quality 3-D MPM/OCT images, the precise control of all components is of great importance. All the scanners and the data acquisition components need to be carefully synchronized to reduce the artifact in the generated images. Chapter 4 will give a detailed description of components synchronization.  36  Chapter 4 software design The user interface acts as the controller of the integrated MPM/OCT system, which controls and synchronizes all the components of the configuration. The purpose of user interface design is to make the user’s interaction as simple and efficient as possible, in terms of accomplishing user goals, including waveform generation, components synchronization, data processing and images displaying. A good design of user interface of the MPM/OCT system, not only provides basic functions to the users, but should also follow the six principles of user-centered design [49-50].  (1)  The structure principle  The user interface should be organized purposefully in a meaningful and useful way, with a clear and consistent architecture that are apparent and recognizable to users. 
Related things, such as the function buttons, should be put together, while unrelated things should be separated. This principle is of great concern throughout the process of user interface design.

(2) The simplicity principle

The design should be simple and user-friendly, communicating information clearly and simply in the user's own language, so that the user can easily learn how to use it.

(3) The visibility principle

While providing all needed options and materials to the user for a given task, a well-designed user interface should also keep extraneous or redundant information invisible to the users. For example, in our MPM/OCT system, all the algorithms for data processing are kept invisible to the users in order not to overwhelm or confuse them. The only things they can see are inputs and outputs, such as the imaging range and the image display.

(4) The feedback principle

All the information, including the results of actions or interpretations, changes of state or condition, and errors or exceptions, should be fed back to the users in a timely manner through clear and unambiguous language.

(5) The tolerance principle

The design should be robust and error-tolerant, reducing the cost of mistakes and misuse by providing warnings to the users or resetting the system to a correct state when errors occur.

(6) The expandability principle

The design should be flexible and expandable so that it can easily accommodate additions, such as new functions and new inputs and outputs, to its capabilities.

Post-processing is another important part of the software of the MPM/OCT system. The goal of data post-processing is to improve the image quality or make full use of the acquired data by mining extra information from the acquisition. The post-processing techniques used in this system include 3-D reconstruction, 2-D sectioning, pseudo-coloring, image merging and movie displaying.
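As an illustration of the 2-D sectioning step listed above (the thesis implements post-processing in MATLAB; this is a Python/NumPy sketch with placeholder data and frame sizes):

```python
import numpy as np

# Sketch: given a stack of en-face (XY) frames, a cross-sectional (XZ)
# image is a fixed-Y slice through the reconstructed volume.
n_z, n_y, n_x = 50, 512, 512
stack = np.random.rand(n_z, n_y, n_x)   # stand-in for acquired TPEF frames

volume = stack                           # 3-D reconstruction: frames stacked along Z
xz_section = volume[:, n_y // 2, :]      # cross-section at the middle Y line
print(xz_section.shape)                  # (50, 512)
```

The same indexing pattern gives YZ sections or arbitrary re-slices, which is how cross-sectional MPM views are produced from the en-face stacks in Chapter 4.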
All the image processing tasks are done after the data acquisition through MATLAB without affecting the acquisition speed.  38  4.1 MPM 4.1.1 User interface 4.1.1.1 Framework of the MPM user interface  The framework of the MPM user interface is shown in Figure 4.1. The MPM user interface is basically composed of five sections:  (1)  The mode section  The MPM user interface has two kinds of working modes, the NORMAL mode and the CONTINUE mode. The NORMAL mode is used for 3-D data acquisition, in which all the en-face images acquired are saved in the memory of the computer. To work in this mode, the user needs to determine the number of en-face images in the 3-D stack. The CONTINUE mode is used for system alignment and optimization. When working in this mode, the en-face scanning will be repeated until the user stops the program. The data of en-face image will be deleted after it is displayed on the computer screen.  (2)  The control buttons section  The INITIALIZATION button is used to initialize the input parameters for the user interface. The START/CONTINUE button is used to start or continue image acquiring. The STOP button is used to stop the acquisition. The SAVE button is used for data saving.  (3)  The parameters section  Parameter X (in voltage) determines the scanning range of the X scanner. The Y scanning range is set the same as that of the X scanner by the user interface. Parameter Z determines number of en-face images to be acquired, the scanning range of piezo scanner 39  (in voltage) and the spacing between adjacent en-face images (in m). Parameter sampling time (SPT) is the integration time (in s) for each pixel of the en-face image. Variables AVE1, AVE2, MAX1, MAX2, MIN1 and MIN2 are used to display the average, maximum and minimum intensity of TPEF and SHG images in real time. These six parameters can help the user to adjust the power illuminating the sample and the gain of the PMTs.  
(4) The display setting section

The parameters MIN and MAX are used for image enhancement. The first set of radio buttons determines the color scale of the TPEF/SHG images displayed in the right half of the user interface, whereas the second set of radio buttons determines the size of the displayed images.

(5) The display section

The TPEF and SHG images are displayed in real time in the two square windows.

Figure 4.1: Diagram of the MPM user interface. The left half is the control area, including the imaging mode setting, the control buttons, the input parameters and the image display setting. The right half is the display area, including TPEF and SHG. The two images are displayed simultaneously and in real time in the grey squares.

4.1.1.2 Flow chart

Figure 4.2: Flow chart of the MPM user interface operation. The rectangles in brown are controlled by the user while the ones in blue are controlled by the user interface. Input: all the parameters entered by the user to control the MPM system. Initial: initialize the parameters. Start: start image acquisition. Repeat: a loop function to control the MPM image acquisition; in each loop only one en-face image is acquired. Condition: a condition used to determine whether to continue or stop the acquisition. The XY scanning and the data acquisition of TPEF/SHG are executed simultaneously.

The flow chart of the MPM user interface operation is shown in Figure 4.2. The rectangles in brown denote the steps performed by the user while the ones in blue represent the internal steps accomplished by the user interface. In MPM imaging, the XY scanning and data acquisition are executed simultaneously. One MPM loop typically includes XY scanning, data acquisition, data processing, image display and Z scanning. In each loop, only one en-face TPEF/SHG image is acquired, processed and displayed. The Z scanner is then moved by a step along the Z axis before the next MPM loop.
The condition (Figure 4.2) is used to determine the number of en-face images to be acquired. Figure 4.3 illustrates the procedure of MPM image acquisition and processing. An en-face image (Figure 4.3(a)) is acquired in one MPM loop. As the loop repeats, a stack of XY images (Figure 4.3(b)) is generated and then reconstructed into a 3-D image (Figure 4.3(c)). A cross-sectional image (Figure 4.3(d)) is selected and sliced from the 3-D image in post-processing.

Figure 4.3: Illustration of the MPM image acquisition procedure and data processing. (a) The XY en-face image acquired in one MPM loop. (b) The acquisition is repeated multiple times to get a stack of XY images. The total number of XY images is a parameter entered into the user interface by the user. (c) A 3-D image is reconstructed from the stack of XY images. (d) A cross-sectional image (denoted by the red square) of MPM is sliced from the 3-D image.

4.1.1.3 Scanner waveforms and synchronization

The generation of the scanner waveforms and the synchronization of the components are the most important parts of the user interface design. For MPM imaging, three scanners, the X scanner, the Y scanner and the Z scanner, need to be driven and synchronized. The XY scanners control the raster scanning while the Z scanner controls the depth scanning. In addition, the three scanners must be timed with the data acquisition unit to prevent image aliasing caused by a lack of synchronization between the scanners and the DAQ components.

Figure 4.4: Scanner controlling waveforms and associated trigger signals for MPM. (a) Waveform controlling the X scanner. P stands for period. The enlarged X waveform is shown in (e) and (g). The X waveform is a triangle wave, including an upwards part and a downwards part. Both parts contain 512 steps with a step size of dx, as shown in (g). (b) Waveform controlling the Y scanner.
The Y waveform is also a triangle wave, with 512 steps upwards and 100 steps downwards. The enlarged Y waveform is shown in (f). (c) Waveform controlling the Z scanner. The total number of steps in the Z waveform equals the number of en-face images in one stack. This value is an input parameter determined by the user. (d) The time diagram for data acquisition and data processing (DP). The DAQ is done while the Y waveform moves upwards (from t1 to t2) and the DP while the Y waveform moves downwards (from t2 to t3). The time from t1 to t3 denotes one loop of MPM data acquisition, including DAQ and DP. (h) The trigger signal generated by the DAQ board to trigger and synchronize the X and Y waveforms.

The design is shown in Figure 4.4. The X waveform is a periodic triangular wave with a period of P (as shown in Figures 4.4(a) and (e)). One period of the X waveform is composed of an upwards part and a downwards part. Both parts contain 512 steps with the same step size of dx (Figure 4.4(e)), determined by the user. The time period from t1 to t3 (Figure 4.4(d)) represents one MPM loop, in which a set of X waveforms containing 256 periods is generated.

The Y waveform is shown in Figures 4.4(b) and (f). Similar to the X waveform, the Y waveform is a periodic sawtooth wave with a period equivalent to one MPM loop. It has 512 steps upwards and 100 steps downwards.

The synchronization of the X and Y waveforms is achieved with a hardware trigger (Figure 4.4(h)) generated by the DAQ board integrated in the computer. This trigger is also used as a timing function to control the integration time of each pixel of the en-face image. The XY scanner synchronization is illustrated in more detail in Figure 4.5, where each small square represents a pixel in the final en-face image of 512 by 512 pixels. The red lines with arrows trace the track of the focal spot.
The odd rows correspond to the upwards parts of the X waveform, whereas the even rows correspond to the downwards parts. Each red down arrow represents one step of the Y waveform. In each MPM loop, the focal spot moves from the starting pixel S to the ending pixel E. Afterwards, the Y scanner is driven by the downwards wave back to its original location.

Figure 4.5: Illustration of the movement of the focal spot in one loop of MPM data acquisition. Each small square denotes a pixel in the final en-face image with a size of 512 pixels by 512 pixels. The red lines with arrows trace the track of the focal spot. The odd rows correspond to the upwards parts of the X waveform, whereas the even rows correspond to the downwards parts. Each red down arrow represents one step of the Y waveform. S: the starting pixel of one en-face image. E: the ending pixel of one en-face image.

Figure 4.4(c) shows the Z waveform, which has a step length of one MPM loop and a step size of dz determined by the user. The generation of the Z waveform is timed with the XY waveforms by a software trigger generated at the end of each MPM loop.

Figure 4.4(d) shows the time used for DAQ and data processing (DP) in one MPM loop. The synchronization of the DAQ unit and the scanners is also achieved with the hardware trigger mentioned above.

4.1.1.4 The delay shift compensation

Although the scanners and the DAQ unit are perfectly synchronized during imaging, a lack of synchronization still occurs at the beginning of each en-face image acquisition. While the DAQ unit starts working as soon as the START button is clicked, the XY scanners wait for around 150 µs before starting to rotate. This delay is due to the intrinsic attributes of the galvo mirror.

Figure 4.6 illustrates the delay shift of the image caused by this asynchronization. A set of TPEF images of fluorescent dye solution is acquired with different integration times for each pixel.
Figure 4.6(a) is the delay-shift compensated image acquired with an integration time of 50 µs. Figure 4.6(b) is the same data without compensation. Comparing the enlarged views of the two (Figures 4.6(d) and (e)), we can see that the edge of the sample in Figure 4.6(d) is much smoother than that in Figure 4.6(e). Figure 4.6(c) is the image acquired with an integration time of 3 µs. From the enlarged view (Figure 4.6(f)), we can see that the odd rows and the even rows of the sample are separated from each other, resulting in two symmetric copies of the sample.

During MPM imaging, the data of an en-face image with a size of 512 by 512 pixels are acquired and saved in a 1-D vector of 262144 elements. The vector is then segmented into 512 rows of equal size. These rows are then stacked vertically to form the final en-face image. In the ideal case, each element in the 1-D vector matches the corresponding pixel in the final image. However, the delay of the XY scanners at the beginning of the imaging shifts the elements along the track of the focal spot shown in Figure 4.5. This delay shift gives rise to the aliasing phenomenon in the reconstructed en-face image shown in Figure 4.6.

Figure 4.6: Illustration of the delay shift of MPM. (a) The en-face TPEF image of fluorescent dye solution with shift compensation. The integration time for each pixel is 50 µs. (b) The en-face TPEF image of fluorescent dye solution without shift compensation. The integration time for each pixel is 50 µs. The estimated shift is 3 pixels. (c) The en-face TPEF image of fluorescent dye solution without shift compensation. The integration time for each pixel is 3 µs. The estimated shift is 50 pixels. (d), (e) and (f) are enlarged views of (a), (b) and (c), respectively.

To further verify the cause of the image aliasing, a simulation is conducted with MATLAB on an artificially synthesized image (Figure 4.7).
Firstly, a 1-D vector of 262144 elements is created to represent the raw data of MPM. Then all the elements of the vector are shifted towards the end by several pixels. The blank elements at the beginning of the vector are set to zero. Afterwards, the shifted vector is processed with the same reconstruction method used in MPM to form the final image. Figure 4.7(a) is the image reconstructed without shift while Figure 4.7(b) is reconstructed with a certain amount of shift. Comparing the enlarged views of Figures 4.7(a) and (b) (shown in Figures 4.7(c) and (d)), we see a similar phenomenon in the synthesized object (the blue square in each image).

Figure 4.7: Simulation of the delay shift with MATLAB. (a) Image without shift. (b) Image with shift. (c) Enlarged view of (a). (d) Enlarged view of (b).

The delay between the scanners and the DAQ unit can be measured with an oscilloscope and is found to be around 150 µs. Although this delay cannot be eliminated in hardware, it can be corrected in software. One solution is to shift the vector back by a certain number of pixels, according to the integration time, before reconstruction. Assuming the integration time is t, the number of pixels to compensate can be expressed as

N = 150 µs / t.    (4.1)

The delay shift compensation is done automatically by the user interface. The blank elements at the end of the 1-D vector are set to zero.

4.1.1.5 Error tolerance and protective mechanisms

The term “error” here mainly refers to misuse of the user interface by the users. Such misuse includes entering wrong parameters into the user interface, changing the imaging mode or other settings during imaging, and closing the user interface abruptly before clicking the STOP button. These three kinds of misuse give rise to errors in the waveform output, which in turn might damage the fragile galvo mirrors.
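A parameter range check of this kind might look like the following sketch. The limit and default values below are hypothetical illustrations, not the actual safe ranges of the galvo and piezo hardware:

```python
# Hypothetical limits and defaults; the real safe ranges depend on the
# scanner hardware and are not taken from the thesis.
SCANNER_LIMITS = {"x_volts": (0.0, 5.0), "y_volts": (0.0, 5.0), "z_volts": (0.0, 10.0)}
DEFAULTS = {"x_volts": 1.0, "y_volts": 1.0, "z_volts": 2.0}

def validate_parameters(params):
    """Reset any out-of-range scanner voltage to its default value,
    collecting one warning message per rejected value."""
    checked = dict(params)
    warnings = []
    for name, (lo, hi) in SCANNER_LIMITS.items():
        value = checked.get(name, DEFAULTS[name])
        if not lo <= value <= hi:
            warnings.append(f"{name}={value} outside [{lo}, {hi}]; reset to default")
            checked[name] = DEFAULTS[name]
        else:
            checked[name] = value
    return checked, warnings

checked, warnings = validate_parameters({"x_volts": 9.0, "y_volts": 1.5})
```

In the real interface, each warning would be shown to the user as a pop-up dialog before any waveform is sent to the scanners.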
Several protective mechanisms are employed to improve the error tolerance of the user interface. Firstly, a restricted range is set for the input parameters, especially the voltage ranges of the three scanners. When the user enters a value exceeding the range, a dialog box pops up to warn the user, and the offending parameter is reset to its default value. Secondly, any change to the imaging conditions, such as the display setting, during image acquisition triggers a warning. Last but not least, when the user interface is closed abruptly, three segments of waveforms are executed automatically to drive the three scanners gradually back to their original locations.

4.1.2 MPM image post-processing

(1) Pseudo-coloring

Figure 4.8: Illustration of image improvement by pseudo-coloring. (a) SHG image of mouse tendon in grey scale. (b) The tendon image after pseudo-coloring.

Pseudo-coloring is an image processing technique used to transform a grayscale image into a color image for improved visualization. By mapping each intensity value of the original image to a color according to a table or function, pseudo-coloring can help reveal textures and qualities that may not be apparent in the original image. A good mapping function increases the distance in color space between successive grayscale levels and makes more details readily visible. For MPM, a specially designed mapping function is developed to transform the original image (grayscale, 256 levels) into a pseudo-color one to improve the image visibility. Figure 4.8 compares the SHG images of mouse tendon before and after pseudo-coloring. As one can see, the collagen fibrils in Figure 4.8(b) are more apparent than in the grayscale image (Figure 4.8(a)).

(2) Cross-section and 3-D reconstruction

Figure 4.9: The techniques of cross-section and 3-D reconstruction used in MPM image processing. (a) SHG en-face image of mouse tendon.
(b) The cross-sectional image sliced from a reconstructed 3-D image. The blue dotted line denotes the depth where the en-face image is acquired. (c) The 3-D reconstruction from a stack of en-face images. The image is rotated by an arbitrary angle. The scale bar is 20 µm.

For MPM image processing, the techniques of 3-D reconstruction and cross-sectioning are also employed to provide the user with information that may not be visible in the en-face images. As illustrated in Figure 4.9, a stack of en-face images is acquired and then reconstructed into a 3-D image (Figure 4.9(c)). Afterwards, a cross-sectional image (Figure 4.9(b)) is selected and sliced from the 3-D image. The 3-D image provides the user with the overall structure of the mouse tendon, whereas the sliced cross-sectional image gives the user information about the cross-sectional tissue structure in the depth direction.

(3) Image fusion

Figure 4.10: Illustration of the advantages of image merging. (a) TPEF cross-sectional image of fish cornea. (b) SHG cross-sectional image of fish cornea at the same location. (c) The merged cornea image. The multi-layer structure can be clearly resolved in the merged image.

Image fusion is another important technique in medical image processing. By combining two or more images of different modalities into a single image, a merged image with more information than any of the input images is generated. For MPM, image fusion is achieved by merging the TPEF and SHG en-face or cross-sectional images after pseudo-coloring (typically the TPEF is colored red while the SHG is colored green). As shown in Figure 4.10, the TPEF image (Figure 4.10(a)) displays the fluorescence signal from the cells of the epithelium and endothelium of the cornea, whereas the SHG image (Figure 4.10(b)) shows the strong signal from the collagen fibrils of the stroma.
From the merged image (Figure 4.10(c)), we can clearly resolve the multilayer structure of the fish cornea.

4.2 OCT user interface
4.2.1 Framework of the OCT user interface

The framework of the OCT user interface is shown in Figure 4.11. The OCT user interface is composed of six sections:

(1) The mode section

The NORMAL mode of OCT is used for 3-D data acquisition, in which all the acquired cross-sectional images are saved in the memory of the computer. To work in this mode, the user needs to specify the number of cross-sectional images in the 3-D stack. The CONTINUE mode is for system alignment and optimization. In this mode, the cross-sectional scanning is repeated until the user stops the program. The data of each cross-sectional image are deleted after the image is displayed on the computer screen.

(2) The control buttons section

The SPECTRUM button displays the A-line interference fringes in the λ domain and the z domain. The IMAGING button displays the cross-sectional images.

(3) The background subtraction section

When selected, the background signal, which is the DC term in Equation 2.3, is removed from the interference signal by subtraction.

(4) The parameters section

Parameter Slice defines the number of cross-sectional images in one stack. Stepsize (in millivolts) determines the spacing between adjacent cross-sectional images. Inte Time is the integration time of each A-line. The minimum value of this parameter is 18 µs, as limited by the camera speed.

(5) The display setting section

The parameters Contrast and Bright are used for image enhancement. The two sets of radio buttons are the same as the ones in the MPM user interface.

(6) The display section

The spectrum of the A-line in the λ domain and the z domain and the cross-sectional image are displayed in real time in the two square windows. An enlarged view of the spectrum of a mirror surface is shown in Figure 4.12.
The blue curve represents the interference fringes of the mirror surface in the λ domain, whereas the black curve denotes the spectrum of the mirror surface in the z domain.

Figure 4.11: Diagram of the OCT user interface. The left half is the control area, including the imaging mode setting, the control buttons, the background setting, the input parameters and the image display setting. The right half is the display area, including spectrum and imaging. The spectrum display is used for system alignment and optimization.

Figure 4.12: Enlarged view of the spectrum of a mirror. The blue curve: interference fringes of the mirror in the λ domain. The black curve: spectrum of the mirror in the z domain.

4.2.2 Flow chart

Figure 4.13: Flow chart of the OCT user interface operation. Input: all the parameters entered by the user to control the OCT system. Repeat: a loop function to control the OCT image acquisition; in each loop only one cross-sectional image is acquired. Condition: a condition used to determine whether to continue or stop the acquisition. The Y scanning and data acquisition (by the frame grabber) are executed simultaneously.

The flow chart of the OCT user interface operation is shown in Figure 4.13. In OCT imaging, the Y scanning and the data acquisition by the frame grabber are executed simultaneously. One OCT loop typically includes Y scanning, data acquisition, data processing, image display and X scanning. In each loop, only one cross-sectional image is acquired, processed and displayed. The X scanner is then moved by one step along the X axis before the next OCT loop. The condition (Figure 4.13) is used to determine the number of cross-sectional images to be acquired. Similarly, the OCT cross-sectional image can also be acquired in the XZ plane and then translated in the Y direction. Figure 4.14 illustrates the procedure of OCT image acquisition and processing.
A cross-sectional image composed of 512 A-lines (Figure 4.14(a)) is acquired in one OCT loop. As the loop repeats, a stack of YZ images (Figure 4.14(b)) is generated and then reconstructed into a 3-D image (Figure 4.14(c)). An en-face image (Figure 4.14(d)) is selected and sliced from the 3-D image in post-processing.

Figure 4.14: Illustration of the OCT image acquisition procedure and data processing. (a) The YZ cross-sectional image acquired in one OCT loop. (b) The acquisition is repeated multiple times to get a stack of YZ images. The total number of YZ images is a parameter entered into the user interface by the user. (c) A 3-D image is reconstructed from the stack of YZ images. (d) An en-face image (denoted by the red square) of OCT is sliced from the 3-D image.

4.2.3 Scanner waveforms and synchronization

Figure 4.15: Scanner controlling waveforms and associated trigger signals for OCT. (a) Waveform controlling the Y scanner. The enlarged Y waveform is shown in (e). The Y waveform is a triangle wave, including an upwards part and a downwards part. The upwards part contains 512 steps while the downwards part only has 100 steps. The period P (from t1 to t4) of the Y waveform is equivalent to one OCT loop. (b) Waveform controlling the X scanner. (c) Time for data acquisition. The data acquisition is ceaseless from t1 onwards. (d) Time for data processing, including rescaling and FT.

The design of the OCT waveforms is shown in Figure 4.15. The Y waveform is a periodic sawtooth wave with a period of P (from t1 to t4). One period of the Y waveform is composed of an upwards part (512 steps) and a downwards part (100 steps). The upwards part raster-scans the Y scanner to acquire 512 A-lines for one cross-sectional image, whereas the downwards part drives the Y scanner gradually back to its original position. Therefore, the 100 A-lines of data acquired in the downwards part are discarded.
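The Y drive just described can be sketched in a few lines of numpy. The step counts (512 up, 100 down) are from the text; the 2 V amplitude and the linear ramp shapes are illustrative assumptions:

```python
import numpy as np

def oct_y_waveform(amplitude_v, n_up=512, n_down=100):
    """One period of the OCT Y drive: 512 steps up for acquisition,
    100 steps down to fly the scanner back to its start position."""
    up = np.linspace(0.0, amplitude_v, n_up, endpoint=False)
    down = np.linspace(amplitude_v, 0.0, n_down, endpoint=False)
    return np.concatenate([up, down])

period = oct_y_waveform(2.0)                    # 2 V amplitude is illustrative
valid_a_lines = np.arange(period.size) < 512    # flyback A-lines are discarded
```

One period of this waveform corresponds to one OCT loop; the mask marks the 512 A-lines kept for the cross-sectional image, with the 100 flyback A-lines thrown away as described above.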
In OCT waveform generation, one period of the Y waveform is equivalent to one OCT loop. The X waveform is shown in Figure 4.15(b). The step length of the X waveform is equal to one OCT loop. Unlike MPM, OCT does not require depth scanning; therefore no Z waveform is used in OCT.

Figures 4.15(c) and (d) show the time used for data acquisition and processing during one OCT loop. The data acquisition in OCT is ceaseless, meaning there is no pause between the acquisition of two adjacent cross-sectional images. Unlike MPM, the data acquisition and data processing are executed in parallel: while the DAQ unit is acquiring a cross-sectional image, the data processing unit is still working on the previous cross-sectional image. This parallel technique greatly improves the imaging speed of the OCT system [22].

The synchronization of the Y scanner and the DAQ unit is achieved with a hardware trigger (Figure 4.15(f)) generated by the frame grabber board integrated in the computer. As shown in Figures 4.15(e) and (f), each step of the Y waveform represents one A-line of the cross-sectional image. The Y scanner is triggered to move by one step when the acquisition of one A-line is finished. Therefore, the trigger signal also determines the integration time of each A-line.

The synchronization of the X and Y scanners is accomplished in software. A software trigger generated at the end of the acquisition of each cross-sectional image drives the X scanner to the next location.

4.2.4 The delay shift in SDOCT

Similar to MPM, the delay shift in OCT imaging also results in image aliasing. Figure 4.16 illustrates the effect of the delay shift on image quality. Figure 4.16(a) is a cross-sectional view of fish cornea. From the enlarged view (Figure 4.16(b)), we can see that several columns on the right edge of the cornea have been distorted.
Since the scanner waits ~150 µs before starting to rotate, the first few A-lines of data are acquired from the same location, resulting in identical intensity values for these A-lines. However, because the A-lines in an OCT cross-sectional image are independent of each other (they do not interweave with each other), the aliasing in OCT imaging is not serious and can be removed by discarding the first few A-lines.

Figure 4.16: Illustration of the delay shift in OCT imaging. (a) Cross-sectional image of fish cornea. (b) Enlarged view of the right edge of the fish cornea.

4.2.5 OCT real-time data processing

The OCT real-time data processing includes background subtraction, numerical dispersion compensation, and non-uniform fast Fourier transform (NUFFT) [22, 51]. A multi-core processing technique is applied to increase the data processing speed. Moreover, the data acquisition by the frame grabber and the data processing are designed to be executed in parallel so that the frame rate of the system can be further improved. All these techniques and algorithms were developed by a former Master's student.

Chapter 5 System characterization

The system specifications, including resolution, imaging depth, horizontal FOV, signal-to-noise ratio (SNR), sensitivity fall-off, and frame rate, are important factors determining the overall performance of the system. This chapter presents a summary of all the characterizations of this integrated system.

5.1 MPM
5.1.1 Lateral resolution

Figure 5.1: Schematic of the phantom for MPM resolution measurement. The fluorescent beads are distributed evenly in the agarose gel after the solution is shaken by a sonic machine. Beads of 0.1 µm are used for the MPM lateral resolution measurement while 0.5 µm beads are used for the axial resolution measurement.

The lateral resolution of MPM is measured with fluorescent beads with a diameter of 0.1 µm.
The selection of the bead diameter is based on the consideration that the size of the sample must be much smaller than the theoretical resolution of MPM (~0.6 µm for the 40× objective lens with NA of 0.8) so that the measured bead size reflects the point spread function of the optics. A phantom as shown in Figure 5.1 is prepared for the experiment. Firstly, a low-concentration bead solution is prepared and then processed by a sonic machine to separate the bead aggregates. Afterwards, a certain amount of agarose powder is added to the solution. The solution is then heated gently until all the powder completely dissolves. After the solution cools down and solidifies, the beads are fixed in the agarose gel, distributed evenly.

Figure 5.2: Illustration of the MPM lateral resolution measurement with 0.1 µm beads. (a) TPEF en-face image of 0.1 µm beads with a scale of 24 µm by 24 µm. A bead (shown in the red square) is selected for the resolution measurement. (b) Enlarged view of the selected bead. The center of the bead is selected manually. (c) The intensity profile of the selected bead along the X axis. The lateral resolution along the X axis is 0.586 µm based on the FWHM of the profile peak. (d) The intensity profile of the selected bead along the Y axis. The lateral resolution along the Y axis is 0.577 µm based on the FWHM of the profile peak. The lateral resolution of MPM is calculated as the average of the two FWHMs.

The MPM en-face images of the beads are shown in Figure 5.2. Figure 5.2(a) is the TPEF image of the beads. The size of the image is set to 24 µm by 24 µm so that more than 10 pixels can be observed across the peak of the intensity profile of a bead. A bead is randomly chosen for the measurement and its enlarged view is shown in Figure 5.2(b). The center pixel of the bead is selected manually and the corresponding intensity profiles along the X and Y axes are plotted in Figures 5.2(c) and (d), respectively.
The FWHMs of the two intensity peaks of the bead along the two axes are then measured and averaged to get one measurement of the experimental lateral resolution. This measurement is repeated several times with beads in different en-face images and different phantoms to get a more accurate estimate of the experimental lateral resolution.

5.1.2 Axial resolution

The measurement of the axial resolution of MPM is similar to that of the lateral resolution except for the beads used in the phantom. For the 40× objective lens, the theoretical axial resolution is ~1.5 µm. The signal of beads with a diameter of 0.1 µm is much weaker than that of beads with a diameter of 0.5 µm, which would reduce the SNR of the image and might also affect the accuracy of the measurement. For this reason, beads with a diameter of 0.5 µm are used for the axial resolution calibration.

Figure 5.3: Illustration of the MPM axial resolution measurement with 0.5 µm beads. (a) TPEF cross-sectional image of 0.5 µm beads with a scale of 40 µm (Z) by 64 µm (Y). A bead (shown in the red square) is selected for the resolution measurement. (b) Enlarged view of the selected bead. The center of the bead is selected manually. (c) The intensity profile of the selected bead along the Z axis. The axial resolution is estimated to be around 1.6 µm based on the FWHM of the profile peak.

Figure 5.3 shows the reconstructed cross-sectional TPEF image of the phantom. The intensity profile of a selected bead along the Z axis is plotted in Figure 5.3(c). The experimental axial resolution of MPM for the 40× objective lens is obtained as the FWHM of the intensity profile.

5.1.3 Frame rate

The frame rate of MPM can be measured by using an oscilloscope or by adding a timing function to the user interface to record the time taken for the acquisition of each image. The results show that the MPM can generate images at 0.4 frames per second.
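The FWHM readout used in both resolution measurements can be sketched as a simple half-maximum crossing with linear interpolation between samples (a sketch only; the actual analysis in the thesis may differ in detail):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked intensity profile,
    interpolating linearly between samples at the half-maximum level."""
    half = y.min() + (y.max() - y.min()) / 2.0
    above = np.nonzero(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right half-maximum crossings
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

# Synthetic bead profile: a Gaussian whose true FWHM is 0.6 µm
x_um = np.linspace(-2.0, 2.0, 401)
sigma = 0.6 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
profile = np.exp(-x_um**2 / (2.0 * sigma**2))
measured = fwhm(x_um, profile)   # close to 0.6 µm
```

Applied to the X and Y profiles of a bead and averaged, this gives one lateral resolution estimate; applied to a Z profile, it gives the axial estimate.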
5.2 OCT
5.2.1 Axial resolution

To measure the axial resolution of the SDOCT system, a mirror is used as the sample and placed at the focus of the probing beam. The mirror can be regarded as a sample with a single surface. The beam reflected from the mirror surface produces a delta function after FT, which is then convolved with the system response function to generate a narrow peak in the z domain. Therefore, the axial resolution of SDOCT can be measured as the 3 dB width of the corresponding peak in the z domain. The axial resolution at different depths can also be measured by translating the reference mirror along the optical axis. Figure 5.4 shows the result of the axial resolution calibration at different depths. The SNR, the 3 dB width of the peaks, and the sensitivity fall-off of the peaks with respect to the peak closest to the DC term are recorded and summarized in Table 5.1. The sensitivity fall-off is defined as the SNR reduction when the sample is located deeper in depth.

As shown in Table 5.1, the average axial resolution is 3.7 µm (in air), which is larger than the theoretical axial resolution of 2.35 µm (in air). The degradation of the axial resolution can be caused by several factors, such as the shape of the spectrum of the source, the alignment of the spectrometer and the dispersion compensation.

Figure 5.4: Measurement of axial resolution, imaging depth, SNR and sensitivity fall-off. The different peaks are measured with a mirror surface sample translated to different depths.

Table 5.1: The measurement of SNR, sensitivity fall-off and axial resolution of OCT

Peak   PL (pixel)   SNR (dB)   SF (dB)   AR (µm)
1      42           50.81      0.00      3.57
2      81           50.18      0.63      3.67
3      116          50.51      0.30      3.22
4      155          50.02      0.79      3.34
5      208          48.68      2.13      3.46
6      269          46.88      3.93      3.93
7      307          45.40      5.41      4.05
8      358          43.70      7.11      4.05
9      419          41.27      9.54      4.05

SF: sensitivity fall-off. AR: axial resolution.
PL: location of the peak in the OCT cross-sectional image, in pixels.

5.2.2 Imaging depth

The imaging depth of SDOCT can be calculated from the reference mirror locations RL (in µm) and the corresponding peak locations PL (in pixels) in Figure 5.4,

D = (N/2) × s,    (5.1)

where N is the number of pixels of the CCD camera (so that N/2 is the number of pixels along the z axis after FT) and s is the spacing between two adjacent pixels along the z axis in the SDOCT cross-sectional image. The spacing s is given by

s = (RL_i − RL_j) / (PL_i − PL_j),    (5.2)

where RL_i and RL_j are the reference mirror locations for the ith and jth interference peaks, and PL_i and PL_j are the locations of the ith and jth peaks in the pixel domain. Here RL can be read from the micrometer on which the reference mirror is mounted, while PL can be measured directly from Figure 5.4 as the pixel number of the peaks. Generally, the peak closest to the DC term and the peak closest to the end are used for the calibration to increase the accuracy of the measurement. In this way, the experimental imaging depth is measured to be ~610 µm, which is in good agreement with the theoretical imaging depth of 602 µm.

5.2.3 SNR and sensitivity fall-off

SNR and sensitivity fall-off [22] are two important factors characterizing the performance of a SDOCT system. As shown in Table 5.1, the peak closest to the DC term has the maximum SNR, which is 50.81 dB. Based on Table 5.1, the overall sensitivity fall-off of the OCT system can be calculated as the difference between the SNR of the first peak and that of the last peak, which is 9.54 dB.

5.2.4 Lateral resolution

The lateral resolution is measured by edge detection with a piece of silicon wafer [52]. As shown in Figure 5.5, a piece of silicon wafer with a sharp edge is selected as the sample and placed at the focus of the probing beam. A cross-sectional image is generated (Figure 5.5(b)) and the intensity profile of the wafer (Figure 5.6) is plotted.
The width of the wafer edge in the image is then measured as the lateral distance between P2 and P3, whose intensities are 90% and 10% of the maximum intensity (P1 minus P4), respectively. The experimental lateral resolution for the 4× objective lens, which is taken as the width of the wafer edge, is 5.4 µm. Similar measurements are done with the 40× and 10× objective lenses, and the results are 0.69 µm and 2.2 µm, respectively.

Figure 5.5: Illustration of the OCT lateral resolution measurement with a wafer. (a) A piece of silicon wafer with a sharp edge is used for the OCT lateral resolution measurement. The OCT data are collected across the sharp edge. (b) The cross-sectional OCT image of the wafer. The objective lens is 4×.

Figure 5.6: Illustration of the OCT lateral resolution measurement with the wafer using the 4× objective lens. (a) The cross-sectional OCT image of the wafer. The size is 600 µm (in Z) by 160 µm (in Y). The red dotted line is selected manually to plot the intensity profile of the wafer. (b) Intensity profile of the wafer. P1: the point of maximum intensity. P2: the point where the intensity has fallen by 10%. P3: the point where the intensity has fallen by 90%. P4: the point of minimum intensity. The lateral resolution of OCT is calculated as the lateral distance between P2 and P3.

5.2.5 Frame rate

The measurement of the frame rate of OCT is similar to that of MPM. Due to the absence of depth scanning, OCT can image much faster than MPM. The frame rate is measured to be 78 frames per second with software dispersion compensation and 83 frames per second with hardware dispersion compensation.

5.3 Field of view of MPM/OCT

The lateral FOV of the MPM/OCT system depends on the objective lens used and the scanning range of the galvo scanners. The lateral FOV can be measured with a resolution target with periodic line pairs (200 lines/mm).
Two OCT cross-sectional images of the target taken with different driving voltages are shown in Figure 5.7. Figure 5.7(a) is taken with a driving voltage of 2 V using the 40× objective lens, and Figure 5.7(b) with a driving voltage of 4 V using the same lens. The lateral FOV for each driving voltage can be calculated by counting the number of lines in the image. For the 40× objective lens, the measurements at different driving voltages are listed in Table 5.2. The ratio FOV/input voltage is then calibrated as the slope of a linear fit to these measurements (Figure 5.8), which is about 80 µm/V for the 40× objective lens. Similar measurements are done with the 4× and 10× objective lenses, and the results are 800 µm/V and 320 µm/V respectively. MPM has the same FOV as OCT when sharing the same objective lens and scanning voltage.

Figure 5.7: Illustration of the lateral field of view measurement for the 40× objective lens. This measurement is done with the OCT system. (a) OCT cross-sectional image of a resolution target (200 lines/mm). The input voltage for the Y scanner is 2 V. (b) A larger view of the target with an input voltage of 4 V.

Table 5.2: FOV vs input voltage (40× objective lens)

  Input voltage (V)   0.5    1     1.5    2      3      4
  FOV (µm)            40     80    120    160    240    315

Figure 5.8: Linear fitting of the measurements. The X axis is the input voltage for the scanner and the Y axis is the FOV.

In the axial direction, the scanning range of MPM is limited by the piezo scanner, which is 400 µm at maximum. In practice, however, the imaging depth is limited by the transparency of tissues.
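The slope calibration illustrated in Figure 5.8 amounts to a least-squares line through the Table 5.2 data. A minimal sketch, where a zero-intercept fit is an assumption made for illustration (the thesis does not state the fitting model):

```python
# Least-squares calibration of FOV vs scanner drive voltage,
# using the measurements from Table 5.2 (40x objective lens).

volts = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
fov_um = [40, 80, 120, 160, 240, 315]

# Slope of a zero-intercept least-squares line: sum(x*y) / sum(x*x).
slope = sum(v * f for v, f in zip(volts, fov_um)) / sum(v * v for v in volts)
print(round(slope, 1))  # prints 79.4, i.e. ~80 µm/V as quoted in the text
```

The same fit applied to the 4× and 10× data would yield the 800 µm/V and 320 µm/V factors quoted above.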
5.4 Summary of the overall performance of the MPM/OCT system

Table 5.3: Summary of the MPM/OCT system performance

                         MPM                     OCT
  Lateral resolution     0.6 µm (40×)            0.69 µm (40×), 2.2 µm (10×), 5.4 µm (4×)
  Axial resolution       1.6 µm (40×)            3.7 µm (in air)
  Field of view          400 µm (40×, lateral)   4 mm (4×, lateral)
                         400 µm (axial)          610 µm (axial)
  Frame rate             0.4 frames/s            78 frames/s (software), 83 frames/s (hardware)
  SNR                    -----                   50.81 dB
  Sensitivity fall-off   -----                   9.54 dB (for 600 µm)

Chapter 6 System Capability and Image Demonstration

The capability of the integrated MPM/OCT system in scalable multimodality imaging will be demonstrated in this chapter. The OCT and MPM images of tissues at the same location are acquired and then registered. By matching the OCT cross-sectional image with the reconstructed MPM cross-sectional image, the multilayer structure of biological tissues can be easily resolved. Moreover, the system can be applied to study cell activities in collagen gel. Last but not least, the OCT part of the integrated system can be easily modified to acquire scalable optical coherence microscopy (OCM) images with objective lenses of different NAs.

6.1 Scalable multimodality imaging

With the 4× objective lens, the OCT is capable of fast cross-sectional imaging over a millimeter FOV and 600 µm in depth. With the 40× water immersion objective lens, the MPM is able to acquire high-resolution en-face images with sub-micron resolution. OCT and TPEF/SHG images of a sample at the same location can be acquired and registered. Using the image processing techniques introduced in Chapter 4, a cross-sectional image of MPM can be sliced from a reconstructed 3-D image and then matched with the OCT cross-sectional image. The en-face images of TPEF and SHG are also pseudo colored and merged. The two cross-sectional images along with the merged en-face MPM images can clearly display the multilayer structures of biological tissues.

Figure 6.1: MPM/OCT images of onion.
The OCT image is acquired with the 4× objective lens, whereas the MPM images are generated with the 40× objective lens. (a) OCT cross-sectional image of onion. The red square denotes the area corresponding to the MPM cross-sectional image. (b) The reconstructed TPEF image of onion. No SHG signal is observed for onion. From (c) to (g): En-face TPEF images of onion at depths of 37 µm, 60 µm, 85 µm, 124 µm and 171 µm. The scale bar is 50 µm.

Figure 6.1 shows the MPM/OCT images of the onion epidermal cells. The OCT image is acquired with the 4× objective lens while the MPM images are generated with the 40× water immersion objective lens. The OCT cross-sectional image (Figure 6.1(a)) displays a large view of the onion epidermis with a size of 600 µm (depth) by 1600 µm (lateral). Based on the high-penetration OCT image, more than four layers of onion cells can be resolved. The reconstructed TPEF cross-sectional image corresponding to the red square in Figure 6.1(a) is shown in Figure 6.1(b). No SHG signal of the onion epidermis is observed. The corresponding TPEF en-face images of the onion cells at different depths are shown in Figures 6.1(c) to (g). Therefore, the OCT and MPM cross-sectional images can display the multilayer structure and cell distribution of the onion epidermis, while the high-resolution MPM en-face images are able to show the clear structure of individual cells at different locations within the epidermis.

Figure 6.2: MPM/OCT images of mouse ear skin. The OCT image is acquired with the 10× objective lens, whereas the MPM images are generated with the 40× objective lens. (a) OCT cross-sectional image of mouse ear skin. ED: epidermis. D: dermis. CT: cartilage. The red square denotes the area corresponding to the MPM cross-sectional image. (b) The merged TPEF/SHG cross-sectional image of mouse ear skin. From (c) to (f): Merged en-face images of mouse skin at depths of 12 µm, 30 µm, 55 µm, and 68 µm. TPEF is in red while SHG is in green. The scale bar is 50 µm.
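The merged MPM images above are produced by pseudo-coloring the two channels, with TPEF mapped to red and SHG to green. A minimal sketch of this merging, where the 2×2 8-bit inputs and the helper name are illustrative and not the thesis software:

```python
# Sketch of TPEF/SHG pseudo-coloring: TPEF -> red channel, SHG -> green
# channel, blue left at zero, as in the merged MPM figures.

def merge_tpef_shg(tpef, shg):
    """Merge two equally sized 2-D 8-bit intensity images into an
    RGB image represented as nested lists of (R, G, B) tuples."""
    rgb = []
    for row_t, row_s in zip(tpef, shg):
        rgb.append([(t, s, 0) for t, s in zip(row_t, row_s)])
    return rgb

tpef = [[0, 128], [255, 0]]   # illustrative TPEF intensities
shg = [[64, 0], [0, 255]]     # illustrative SHG intensities
print(merge_tpef_shg(tpef, shg)[0][0])  # → (0, 64, 0): pure SHG shows green
```

Pixels with both contrasts present appear yellow, which is how overlapping TPEF and SHG regions show up in the merged figures.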
Figure 6.2 shows the MPM/OCT images of mouse ear skin. The OCT image shows the cross section of the mouse ear skin for up to a few hundred microns. Multiple layers can be observed in the OCT image, including the epidermis, dermis and cartilage layers. The MPM images have higher resolution but a lower penetration of about ~70 µm. The epidermis is composed of cells while the dermis is composed of collagen fibers.

6.2 Cells and collagen interaction

Cell biology research with microfluidic chips is an emerging research area that studies cell structure, physiological properties, life cycles and interactions with the environment. The microfluidic chip is a powerful tool that can provide a highly controlled microenvironment to cells. The following experiments on microfluidic chips demonstrate the great potential of our imaging system in this area.

A microenvironment is created by injecting breast cancer cells and collagen gel into a specially designed microfluidic chip to study the interaction between the cells and the collagen gel. The interaction is imaged by the combined MPM/OCT system on day zero, day one and day four after the cells are embedded in the chip. The results are shown in Figure 6.3.

The OCT cross-sectional image of the day-zero chip (Figure 6.3(a)) shows that the channel (the layer under the membrane layer) of the chip is filled with collagen gel, which is in good agreement with the MPM cross-sectional image, where the channel is filled with collagen (green) with cells (red) scattered near the top of the channel. The corresponding MPM en-face image is shown in Figure 6.3(c), in which the cells are sparse and separated from each other. The result of the day-one chip is shown in Figures 6.3(d) to (f). From Figures 6.3(d) and (e), we can see the collagen gel has shrunk under the influence of the cells, resulting in a gap between the collagen and the membrane. With cell proliferation, the cells start to form cell clusters (Figure 6.3(f)).
On day four, both the gap and the cell clusters have grown larger. By recording the size of the gap (thickness) on consecutive days, the interaction between the cells and the collagen gel can possibly be quantified.

Figure 6.3: The interaction between breast cancer cells and collagen on microfluidic chips. (a) OCT cross-sectional image of the chip acquired on the day when the cells and collagen gel were injected into the chip. L1: the layer of membrane. L2: the layer of channel. (b) Merged MPM cross-sectional image of the zero-day chip. TPEF is in red and SHG is in green. (c) Merged MPM en-face image of the zero-day chip. (d) to (f) OCT cross-sectional image, merged MPM cross-sectional image and merged MPM en-face image of the one-day chip. (g) to (i) OCT cross-sectional image, merged MPM cross-sectional image and merged MPM en-face image of the four-day chip. The scale bar is 50 µm.

To further verify that the shrinkage of the collagen gel is due to the influence of the cells, a control experiment is conducted on a chip filled with collagen gel but without cells. The chip is kept in the same environment as the chips with cells. The imaging results are shown in Figure 6.4. Comparing the MPM/OCT cross-sectional images of the chip on day zero and day four, no gap is detected between the collagen gel and the membrane. Therefore, the shrinkage of the collagen gel is highly likely due to the effect of the cells.

Figure 6.4: Microfluidic chips with collagen gel and no cells. (a) OCT cross-sectional image of the zero-day chip. L1: the layer of membrane. L2: the layer of channel. (b) Merged MPM cross-sectional image of the zero-day chip. TPEF is in red and SHG is in green. (c) Merged MPM en-face image of the zero-day chip. (d) to (f) OCT cross-sectional image, merged MPM cross-sectional image and merged MPM en-face image of the four-day chip. The scale bar is 50 µm.
6.3 OCM

OCM is a variation of OCT in which the images are displayed in en-face mode with high resolution. For a SDOCT system, OCM images can be reconstructed from a stack of OCT cross-sectional images. As shown in Figure 4.14(d), the OCM en-face image can be sliced from the 3-D image reconstructed from a stack of cross-sectional images. Compared to the MPM system, OCM can provide images with deeper penetration. Furthermore, since there is no depth scanning in the OCM system, it can acquire images at a much faster speed than the MPM system, which makes in-vivo imaging possible.

Experiments on onion skin and fish cornea are used to show the capability of the OCM system. Figure 6.5 shows the results of the onion experiment. Figure 6.5(a) is the OCT cross-sectional image of the onion acquired with the 4× objective lens. The OCM image (XY direction) acquired with the 10× objective lens, corresponding to the red dotted line in Figure 6.5(a), is shown in Figure 6.5(b), where we can clearly resolve the cells within the epidermis of the onion. A higher-resolution OCM view of the cells, acquired with the 40× objective lens and corresponding to the blue dotted line in Figure 6.5(a), is shown in Figure 6.5(c). With the high-NA objective, the structure of the epidermal cells of the onion can be readily resolved.

Figure 6.5: OCT/OCM images of an onion. (a) OCT cross-sectional image of the onion acquired with the 4× objective lens. (b) Reconstructed OCM image of the onion acquired with the 10× objective lens. (c) Reconstructed OCM image of the onion acquired with the 40× objective lens. The red dotted line in (a) denotes the location where (b) is acquired, whereas the blue dotted line denotes the location where (c) is acquired.

The results of the fish cornea are shown in Figure 6.6. Figure 6.6(a) is the OCT cross-sectional image of the cornea acquired with the 4× objective lens.
The corresponding OCM images (XY direction) at different depths are shown in Figures 6.6(b) to (g). These OCM images provide higher resolution and more details of the sample that are not apparent in the OCT cross-sectional image.

Overall, the results on biological samples show that the multimodal system is capable of providing multiple contrasts, including TPEF, SHG and scattering from MPM and OCT. It also allows a large FOV with OCT and high-resolution zoom-in imaging with MPM. The multimodal imaging system allows acquisition of more information from samples than a single-modality imaging system.

Figure 6.6: OCT/OCM images of fish cornea acquired with the 4× objective lens. (a) OCT cross-sectional image of the cornea. (b) to (g) Reconstructed OCM images of the cornea at depths of 54 µm, 169 µm, 319 µm, 375 µm, 413 µm, and 420 µm. The scale bar is 100 µm.

Chapter 7 Application in Refractive Index and Thickness Measurement

This chapter presents an application of MPM/OCT for measuring the refractive index (RI) and thickness of biological tissues. The RI and thickness are determined by analyzing the co-registered OCT and MPM images. The precision of this method is evaluated with three standard samples: water, air and cover glass. An accuracy within 0.5% of literature values is obtained. The capability of the method to measure the RI and thickness of multilayered tissues is demonstrated on fish corneas.

7.1 Introduction

The RI of biological tissue is a key parameter for characterizing light-tissue interactions [23-24]. Knowledge of the tissue RI plays an important role in many biomedical applications. For example, in optical diagnostics, malignant tissues can be distinguished from normal tissues by measuring and comparing their RI [25-26]. In laser-eye treatments, the RI and thickness of the cornea are important indicators of corneal states, such as corneal hydration and intraocular pressure, before and after laser surgery [27-30].
One method of determining the RI of biological tissues is focus-tracking [23, 53-56]. This method employs low-coherence interferometry, such as optical coherence tomography (OCT), combined with translation stages. The RI is obtained by taking the ratio between the optical pathlength measured by OCT and the focus shift resulting from translating the focus of an objective lens inside the tissue [23]. However, this method can only measure the RI along one axial line in the tissue at a time, which makes it difficult to determine the distribution of RI within tissues. Moreover, this method requires manual operation to focus the laser beam onto the front and rear surfaces of a sample, which reduces the accuracy of the result and makes the measurement of multi-layered samples complicated. Another method is based on total internal reflection [57-59]. This method uses a semicylindrical lens to determine the critical angle and therefore the RI of biological tissues. However, it only works for single-layered homogeneous samples and requires complicated sample preparation, which makes it non-ideal for biological tissues. Dirckx et al. devised a method based on a confocal microscope, simultaneously measuring the optical thickness, which is related to the RI and the physical thickness, of a single-layered homogeneous sample [60]. The limitation of this method is also the complicated sample preparation. Kim et al. reported simultaneous measurement of RI and thickness by combining low-coherence interferometry and confocal optics [61]. This method only uses the confocal optics to determine the focus shift within a sample, which is similar to the focus-tracking method. Moreover, confocal optics has some limitations compared to MPM, such as shallow penetration depth, the need for confocal apertures, and the lack of biochemically specific contrast. Fluorescence confocal microscopy relies on exogenous staining.
For unstained tissues, only reflectance confocal microscopy, which detects the reflected light, can be used; the reflectance confocal contrast is therefore nonspecific. In contrast, TPEF can be detected from intrinsic sources such as nicotinamide adenine dinucleotide phosphate (NADPH) in cells, while SHG can be detected from non-centrosymmetric molecules such as collagen. The combination of these two biochemically specific contrasts in MPM can provide more details of tissues and allows for better distinguishing of the multi-layer structure of tissues.

For cornea RI measurement, Kim et al. [62] and Lin et al. [63] measured the overall RI of the cornea by focus tracking. Meek et al. measured the RI change in the stroma when the cornea swells [64] and Patel et al. measured the RI change in the stroma before and after laser eye surgery [28] with an Abbe refractometer. The Abbe refractometer is based on the measurement of the refraction angle and requires direct contact of the cornea with two prisms. In all the above references, only the average RI of the whole cornea or of a single layer such as the stroma has been reported. To the best of our knowledge, there has been no report on the characterization of RI variations among the layers of a multi-layered tissue structure such as the cornea.

This chapter presents a new method which can simultaneously and noninvasively measure the RI and thickness of multiple tissue layers in thick biological tissues, based on the combined MPM/OCT system. In our experiment, OCT cross-sectional images and MPM 3-D images of tissues at the same sample location are taken successively. The optical pathlength and physical thickness can be derived from those images, respectively. The RI is obtained by taking the ratio between the optical pathlength and the physical length. This method requires no sample preparation and is non-contact. It can measure the thickness and RI distribution of multiple tissue layers through a single measurement.
Furthermore, the technique also provides visualization of cellular and extra-cellular matrix structures for characterizing multiple layers of biological tissues.

7.2 Principles

Based on OCT imaging principles, the optical pathlength (OPL) is the product of the geometric length of the path the light travels through and the refractive index of the medium. Therefore, the thickness of a sample (as shown in Figure 7.1) obtained from an OCT cross-sectional image is the optical pathlength, i.e., the product of the physical thickness of the sample and its RI. Assuming the sample has a thickness t and a refractive index n, the corresponding optical pathlength Lp is:

Lp = n t    (7.1)

Figure 7.1: Illustration of the relationship between the optical pathlength and geometric thickness of a sample in OCT imaging. (a) A homogeneous sample with a geometric thickness of t and RI of n. (b) OCT cross-sectional image of the sample. The thickness of the sample obtained from the image is Lp.

In MPM imaging, the depth scanning is achieved with a piezo scanner. However, the thickness of the sample obtained from the reconstructed MPM cross-sectional image is not the physical thickness either, due to the multi-layer refractions in the sample arm. Figure 7.2 shows how refraction happens in a multi-layered sample preparation. The sample (e.g. fluorescent dye solution) is held between two glass plates and the objective is immersed in water. Refraction occurs at the interfaces between the immersion water, the top glass plate and the sample.

Figure 7.2: Illustration of the multi-layer refractions in the sample arm. The objective lens is immersed in a medium above the top glass plate; t and n are the physical thickness and RI of the sample, and n_o is the RI of the immersion medium.

Figure 7.3 shows the ray optics diagram for the multi-layered refraction case. The beam is first focused onto the top surface of the sample (point e).
Then the objective is moved downward by a distance Lo so that the beam is focused onto the bottom surface of the sample (point g). Here Lo is referred to as the optical thickness of the sample, which is related to the RI. From Snell's law, we have:

n_o sin θ1 = n_1 sin θ2 = n sin θ3 = NA    (7.2)

Here n_o, n_1 and n are the RIs of the immersion medium (water), the glass plate and the sample, respectively. NA is the numerical aperture of the objective lens, θ1 is the incident angle at the boundary between the immersion medium and the glass plate, θ2 is the refraction angle at that boundary, and θ3 is the refraction angle at the boundary between the glass plate and the sample. By analyzing the triangle relationships in Figure 7.3, we can derive:

tan θ1 / tan θ2 = (ac/cf) / (ac/ch) = ch / cf    (7.3)

tan θ1 / tan θ2 = (bc/cd) / (bc/ce) = ce / cd    (7.4)

We can combine Equation 7.3 and Equation 7.4 into:

tan θ1 / tan θ2 = (ch − ce) / (cf − cd) = eh / df    (7.5)

Similarly, we have:

tan θ2 / tan θ3 = eg / eh    (7.6)

Multiplying Equation 7.5 by Equation 7.6:

tan θ1 / tan θ3 = eg / df = t / Lo    (7.7)

Notice that:

tan θ1 = sin θ1 / cos θ1 = sin θ1 / sqrt(1 − sin² θ1)    (7.8)

and

tan θ3 = sin θ3 / cos θ3 = sin θ3 / sqrt(1 − sin² θ3)    (7.9)

By combining Equations 7.7 to 7.9 with Equation 7.2, we can get the relationship among n, t and Lo:

t = Lo sqrt(n² − NA²) / sqrt(n_o² − NA²)    (7.10)

Based on the two relationships (Equation 7.1 and Equation 7.10), the RI and the physical thickness of the sample can be calculated:

n = sqrt( ( NA² + sqrt( NA⁴ + 4 (n_o² − NA²) Lp² / Lo² ) ) / 2 )    (7.11)

t = sqrt(2) Lp / sqrt( NA² + sqrt( NA⁴ + 4 (n_o² − NA²) Lp² / Lo² ) )    (7.12)

In our experiment, a water immersion 40× objective (NA = 0.8, n_o = 1.33) is used for the MPM imaging. The optical thickness Lo can be measured from the MPM cross-sectional image and the optical pathlength Lp can be measured from the OCT image. From Lp and Lo, the thickness t and refractive index n can be determined.

Figure 7.3.
The multi-layer refraction of the laser beam in the sample arm. t and n are the physical thickness and RI of the sample, n_o is the RI of the immersion medium (water), n_1 is the RI of the glass plate, θ1 is the incident angle, and θ2 and θ3 are the refraction angles. Lo is the distance the objective lens is moved during imaging.

7.3 Experiment and results

In the experiment, firstly, a cross-sectional OCT image as in Figure 4.14(a) is obtained. Based on the OCT cross-sectional image and the imaging depth calibrated in Chapter 5, Lp can be measured. Secondly, a stack of several hundred MPM en-face images is acquired and then reconstructed into a 3-D matrix. Afterwards, the MPM cross-sectional image is sliced from the 3-D volume as shown in Figure 4.3(d). The dimension of the MPM image in the Z direction is determined by the number of frames in the image stack and the step size between the frames, which is controlled by the piezo objective scanner. From the MPM cross-sectional image, Lo is derived. Finally, the RI and thickness are calculated based on Equation 7.11 and Equation 7.12. The OCT and MPM cross-sectional images are co-registered and represent slightly different information, namely the optical pathlength and the optical thickness of the tissue respectively [32].

7.3.1 System validation

Three standard samples, water, air and cover glass (Fisher, 12-542C), are used to evaluate the performance of this technique. The measurement of the RI of water is given as an example. A sandwiched phantom, as shown in Figure 7.2, is prepared for the experiment. Fluorescent dye solution of very low concentration is used instead of pure water to generate TPEF contrast. The thickness of the solution layer is designed to be around 250 µm, thick enough to limit the measurement error but thin enough to stay within the imaging depths of OCT (600 µm) and MPM (400 µm). Firstly, an OCT cross-sectional image of the phantom is obtained with the 10× objective.
Then the objective is switched to the 40× objective without touching the phantom. Next, a stack of 400 MPM images is acquired at 1 µm step size. Based on this data volume, a 3-D MPM image is reconstructed and a MPM cross-sectional image corresponding to the OCT image is generated.

Figure 7.4: MPM/OCT images of the phantom. (a) Cross-sectional view of OCT. (b) The intensity profile along the central line of the OCT cross-sectional image. (c) Reconstructed cross-sectional view of MPM. (d) The intensity profile along the central line of the MPM cross-sectional image. Scale bars are 50 µm.

Figure 7.4(a) shows the OCT image of the phantom. The two bright lines denote the top and bottom boundaries of the solution layer. The intensity profile of the OCT image along the centre line is plotted in Figure 7.4(b). The optical pathlength of this layer is acquired directly by measuring the distance between the two peaks in Figure 7.4(b). Since the fluorescent dye solution generates no SHG signal, only the TPEF image is obtained, shown in Figure 7.4(c). The bright area denotes the solution layer while the dark areas show the top and bottom glass plates. Similarly, the central intensity profile of the TPEF image is plotted in Figure 7.4(d). The optical thickness of the solution layer in MPM is then acquired as the full width at half maximum (FWHM) of the peak in Figure 7.4(d). With these two parameters, the physical thickness and RI of the solution can be obtained according to Equation 7.11 and Equation 7.12. Assuming the low-concentration fluorescent dye causes little change to the optical refraction of water, the RI of water can be determined from this experiment.

RI measurements for air and cover glass are also performed. For the air measurement, we designed a phantom similar to the one shown in Figure 7.2 but replaced the dye solution with air.
Since the cover glass can generate a weak TPEF signal, this contrast is used to detect the top and bottom surfaces of the air gap. For the glass measurement, the TPEF signal from the glass itself is used as the contrast. All the experiments are conducted at room temperature and are repeated three times.

Table 7.1: The measured RI of standard samples

  Sample        Meas          Ref           Err
  Water         1.336±0.003   1.332 [65]    0.3%
  Air           1.005±0.003   1.0003 [65]   0.5%
  Cover glass   1.576±0.002   1.52-1.62     -

Meas: measured value (average ± standard deviation); Ref: reference value; Err: error with respect to the reference value.

The experimental results, the reference values from the literature and the errors with respect to the reference values are summarized in Table 7.1. The measured results are presented as average value plus standard deviation. For water, our measured RI is 1.336 ± 0.003. The small standard deviation indicates that the measurement is highly repeatable. Compared to the reference value of 1.332 (wavelength 632.8 nm, temperature 20 °C, atmospheric pressure) [65], the error is only 0.3%. For air, the measured RI is 1.005 ± 0.003, with an error of 0.5% with respect to the reference value of 1.0003 (wavelength 800 nm, temperature 15 °C, pressure 101.325 kPa) [65]. For cover glass, our result is 1.576 ± 0.002. Cover glasses from different manufacturers have slightly different RIs, which typically fall within the range of 1.52~1.62; our measurement fits within this typical range. These results show that the technique is highly accurate and repeatable. Thus the measurement of the standard samples demonstrates the efficacy of this technique.

7.3.2 Measurement of biological tissues

The measurement of biological tissues is demonstrated on corneas. The cornea is the transparent outer layer of the eye, which transmits and focuses light into the eye.
Along with the sclera, it works as a protective film against dirt, germs and other particles that can harm the eye. Moreover, the cornea plays an important part in the eye's vision by refracting the light entering the eye. This powerful refracting surface contributes two-thirds of the eye's optical power.

The cornea is a fairly complex structure with five layers (as shown in Figure 7.5): the corneal epithelium, Bowman's membrane, the stroma, Descemet's membrane and the corneal endothelium. Among the five, the epithelium, the stroma and the endothelium are the three main layers of the cornea. The epithelium is the outermost layer of the cornea and is only 5-6 cell layers thick. The stroma is the thickest layer and mainly consists of collagen fibrils that run parallel to each other. The endothelium is only a single layer of cells.

The cornea is vulnerable to many kinds of diseases, such as infections, degenerations and disorders. Several biomedical imaging techniques have already been employed in cornea imaging. Corneal imaging that clearly resolves these structures will be of great help in corneal disease diagnosis and surgical treatment.

Figure 7.5: Illustration of the structure of the human eye and cornea. (a) Anatomy of the human eye [66]. (b) Histological structure of the cornea [67], including the corneal epithelium, Bowman's membrane, the stroma, Descemet's membrane and the corneal endothelium.

The RI and thickness of fish cornea are measured with the multimodal MPM/OCT system. The black tilapia fish used in our experiment are obtained from a local vendor, and the fish eyes are resected and imaged under the microscope. The OCT cross-sectional image of the fish cornea is acquired and shown in Figure 7.6(a). After that, a stack of en-face MPM images is acquired, and representative images at different depths are shown in Figure 7.6(c)-(g). The reconstructed MPM cross-sectional view of the cornea is shown in Figure 7.6(b).
In Figure 7.6, the MPM images are color coded with TPEF in red and SHG in green.

Figure 7.6: MPM/OCT images of fish cornea. (a) Cross-sectional view of OCT. (b) Reconstructed cross-sectional view of MPM. (c)-(g) MPM en-face views at depths of 80, 200, 316, 370 and 387 µm, respectively. TPEF is in red and SHG in green. Scale bars are 50 µm.

The co-registered MPM and OCT cross-sectional images clearly display the multi-layer structure of the fish cornea. From Figures 7.6(a) and (b), three layers can be distinguished in the fish cornea, denoted as L1, L2 and L3. The fine structures of each layer are provided by the high-resolution en-face MPM images. In Figure 7.6(c), the clearly visible cellular structures shown by the TPEF contrast indicate that the top layer is the epithelium. Figures 7.6(d)-(f) show mainly SHG signals from collagen fibers with different orientations and densities, which is an indication of the stroma layer. In Figures 7.6(d) and (e) the collagen fibers are organized in a dense mesh structure, while the SHG intensity decreases significantly in Figure 7.6(f). Figure 7.6(g) shows the end of the cornea, where weak autofluorescence is detected, indicating the endothelium layer. From the analysis of the en-face MPM images, we can identify that L1 is the epithelium, L2 is the first layer of stroma and L3 is the second layer of stroma (possibly including the endothelium layer). The cells of the endothelium might have degenerated during the imaging process; therefore, we did not see any endothelial cells in the MPM en-face images. However, we still observed a thin layer in the OCT image (the white curve at the bottom of the cornea in Figure 7.7(a)) and in the MPM cross-sectional image (the red curve at the bottom of the cornea in Figure 7.7(b)) which might be the endothelium.
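The inversion of Equations 7.11 and 7.12 from the two measured lengths can be written as a short routine. A minimal sketch, where the function name and the 250 µm water-layer example are illustrative; NA = 0.8 and n_o = 1.33 are the values of the 40× water-immersion objective given in Section 7.2:

```python
import math

def ri_and_thickness(Lp, Lo, NA=0.8, n_o=1.33):
    """Recover refractive index n and physical thickness t from the OCT
    optical pathlength Lp and the MPM optical thickness Lo."""
    inner = math.sqrt(NA**4 + 4.0 * (n_o**2 - NA**2) * (Lp / Lo)**2)
    n = math.sqrt((NA**2 + inner) / 2.0)                # Equation 7.11
    t = math.sqrt(2.0) * Lp / math.sqrt(NA**2 + inner)  # Equation 7.12
    return n, t

# Round-trip check against the forward model (Equations 7.1 and 7.10),
# using a hypothetical 250 µm layer of water (n = 1.336):
NA, n_o = 0.8, 1.33
n_true, t_true = 1.336, 250.0
Lp = n_true * t_true                                                     # Eq. 7.1
Lo = t_true * math.sqrt(n_o**2 - NA**2) / math.sqrt(n_true**2 - NA**2)   # Eq. 7.10 inverted
n_est, t_est = ri_and_thickness(Lp, Lo)
print(round(n_est, 3), round(t_est, 1))  # → 1.336 250.0
```

Applied pointwise along the transverse direction of the co-registered cross-sectional images, the same computation yields the per-layer RI and thickness values reported in Table 7.2.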
Once the MPM and OCT images of the fish cornea with clear structures are acquired, the thickness and RI can be calculated. This method can not only measure the overall RI of the whole cornea, but also the RI of each layer of a multi-layered sample such as the cornea. Moreover, from the two co-registered cross-sectional images, the thickness variation and RI distribution along the transverse direction over a few hundred micrometers can also be obtained.

Two sets of data acquired from different specimens are shown in Table 7.2. For each layer of the cornea, the RI and thickness at five different locations are calculated and averaged. The overall RI and thickness of the whole cornea are also computed. For specimen one, the refractive indexes for the epithelium (L1), stroma one (L2) and stroma two (L3) are 1.448 ± 0.015, 1.345 ± 0.002 and 1.436 ± 0.009 respectively. For specimen two, the refractive indexes for the respective layers are 1.446 ± 0.011, 1.372 ± 0.005 and 1.392 ± 0.012. Although there are some variations between the two specimens, they share two common features. First, the epithelium layer (L1) has the highest RI among the three layers. From the high-resolution MPM images, the epithelium is observed to be composed of multiple layers of cells; the membrane structures of cells and the cell nuclei have high RI because of their lipid and DNA compositions. Second, the posterior layer of the stroma (L3) has a higher RI than the anterior layer of the stroma (L2). From the SHG en-face images, we found that L2 (Figure 7.6(d)-(e)) is composed of large collagen fibers while L3 (Figure 7.6(f)) is made of small collagen fibers, and the L2 layer also has higher SHG signal intensity than the L3 layer. The difference between the RIs of L2 and L3 seems to correlate with the collagen fiber composition and the SHG signal intensity. The variations between the two specimens are possibly due to individual differences and the hydration levels of the corneas [30, 64].
The thickness of each layer of the cornea is also listed in Table 7.2.

Table 7.2: RI and thickness of fish cornea

                         Specimen one                     Specimen two
  Layer                  Refractive     Average           Refractive     Average
                         index          thickness [µm]    index          thickness [µm]
  Epithelium (L1)        1.448±0.015    43.8              1.446±0.011    33.5
  Stroma (L2)            1.345±0.002    252.9             1.372±0.005    219.0
  Stroma (L3)            1.436±0.009    41.2              1.392±0.012    39.9
  Overall                1.370±0.004    337.9             1.386±0.005    292.4

In the literature, the average RI of human cornea is found to be ~1.37-1.39 [63, 68]. The overall RI values for the two specimens are 1.370 ± 0.004 and 1.386 ± 0.005, respectively, which are close to the reported numbers for human cornea. Furthermore, to the best of our knowledge, this is the first time that the RI of individual layers inside a multi-layered cornea structure is reported. Our technique can identify multiple layers in tissues and measure the thickness and RI of individual layers simultaneously.

Figure 7.7: Endothelium of the fish cornea. (a) OCT cross-sectional image of the fish cornea. (b) Reconstructed MPM cross-sectional image of the fish cornea. The scale bar is 100 µm.

7.4 Discussion

The axial resolutions of OCT and MPM as well as the physical thickness of a sample are important factors in determining the measurement accuracy of this method. To estimate the error, we can rewrite Equation 7.11 in the form of:

n = f(L_p, L_o) = \sqrt{ \dfrac{ NA^2 + \sqrt{ NA^4 + 4\left( n_o^2 - NA^2 \right) L_p^2 / L_o^2 } }{ 2 } }   (7.13)

According to error propagation theory [69], the relative error of n is:

\dfrac{\Delta n}{n} = \dfrac{\Delta f}{f} = \dfrac{\partial \ln f}{\partial L_p} \Delta L_p + \dfrac{\partial \ln f}{\partial L_o} \Delta L_o   (7.14)

Here ΔL_p and ΔL_o are the measurement errors of L_p and L_o, respectively. Thus, we get:

\dfrac{\Delta n}{n} = \dfrac{\Delta L_p}{2 L_p} + \dfrac{\Delta L_o}{2 L_o}   (7.15)

Based on Equation 7.15, there are two ways to reduce the relative error. One is to increase the optical pathlength L_p and optical thickness L_o. Since L_p and L_o are proportional to the physical thickness t, increasing t increases L_p and L_o.
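Equations 7.13 and 7.15 can be checked numerically. The sketch below uses illustrative input values; `n0` stands for the index n_o appearing in Equation 7.13, and in the small-NA limit the formula reduces to n = √(n_o·L_p/L_o), consistent with the factor-of-two relative error in Equation 7.15:

```python
import math

def refractive_index(Lp, Lo, NA, n0=1.0):
    """Equation 7.13: RI from the OCT optical pathlength Lp, the MPM
    optical thickness Lo, the objective NA, and the index n0."""
    inner = math.sqrt(NA**4 + 4 * (n0**2 - NA**2) * (Lp / Lo) ** 2)
    return math.sqrt((NA**2 + inner) / 2)

def relative_error(Lp, Lo, dLp, dLo):
    """Equation 7.15: relative error of n from the measurement errors
    dLp (set by OCT axial resolution) and dLo (set by MPM axial resolution)."""
    return dLp / (2 * Lp) + dLo / (2 * Lo)

# Illustrative values: Lp/Lo = 2 gives n near sqrt(2) for small NA.
n = refractive_index(Lp=500.0, Lo=250.0, NA=0.05)
err = relative_error(Lp=500.0, Lo=250.0, dLp=3.7, dLo=5.0)
```

Increasing Lp and Lo (i.e., a thicker sample) shrinks both terms of `relative_error`, which is the first error-reduction route described above.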
In our experiments, the phantoms are specially designed so that they make full use of the imaging depth and minimize the relative error. In tissue measurement, the sample thickness is limited by the nature of the tissues. The other way is to reduce the measurement errors ΔLp and ΔLo, which are determined by the OCT and MPM axial resolutions. In Figure 7.6, the optical pathlength Lp and the optical thickness Lo are measured from the intensity profiles of the images. Since the detection of the layer boundaries directly depends on the system's resolving ability, a higher axial resolution will provide better measurement accuracy. In our system, the OCT axial resolution (3.7 µm in air) can potentially be improved to reach the source-limited resolution (2.35 µm in air). To further improve the OCT resolution, a source with a shorter center wavelength and/or broader bandwidth would be needed. For MPM, a higher axial resolution can potentially be achieved by using objectives with higher NA.

The experiment on fish corneas demonstrates the advantages of the multimodal MPM/OCT system. Based on the tissue structures determined by the OCT and MPM cross-sectional images, the RI and tissue thickness can be obtained. Furthermore, the high-resolution en-face TPEF and SHG images show the composition of each tissue layer, which can provide information on why the RI varies among the different layers. This method works well for multi-layered inhomogeneous samples. With a single measurement, the distribution of the RI and thickness within a sample can be obtained.

Chapter 8 Conclusions and Future Work

8.1 Conclusions

We have developed a scalable multimodality imaging system by combining MPM and SDOCT into a single platform. The OCT spectrometer is designed to approach the source-limited resolution. Two specially designed user interfaces are employed to control and synchronize the MPM and OCT data acquisition, respectively.
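As a rough cross-check of the source-limited figure quoted in Section 7.4, the standard Gaussian-spectrum expression for OCT axial resolution, Δz = (2 ln 2/π)·λ₀²/Δλ, can be evaluated. The source parameters below (λ₀ = 800 nm, Δλ = 120 nm) are assumed for illustration only, not taken from the thesis, and happen to reproduce a value close to 2.35 µm:

```python
import math

def oct_axial_resolution_nm(center_wavelength_nm, bandwidth_nm):
    """Source-limited OCT axial resolution in air for a Gaussian
    spectrum: dz = (2*ln2/pi) * lambda0^2 / delta_lambda."""
    return (2 * math.log(2) / math.pi) * center_wavelength_nm**2 / bandwidth_nm

# Assumed broadband near-infrared source parameters (illustrative).
dz_um = oct_axial_resolution_nm(800.0, 120.0) / 1000.0
```

The inverse dependence on Δλ is why a broader bandwidth (or shorter center wavelength) improves the OCT axial resolution.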
By combining the TPEF signal from intrinsic sources, such as proteins, and the SHG signal from non-centrosymmetric molecules, such as collagen, MPM is capable of imaging cells and extracellular matrix structures with high resolution. Based on back-scattered light, OCT can provide cross-sectional anatomical imaging of layered tissue structures over a few millimeters in depth and width at high speed. By combining the complementary advantages of MPM and OCT, this integrated system is capable of performing large field-of-view OCT and high-resolution MPM imaging on the same sample. The performance calibration results and the high-quality images shown in Chapters 6 and 7 demonstrate the capability of this system in biomedical imaging.

The application of MPM/OCT is demonstrated by simultaneously measuring the RI and thickness of biological tissues. The RI and thickness of the sample can be obtained from the co-registered OCT and MPM cross-sectional images. For standard samples, high-precision measurements (less than 0.5% error) have been achieved. The method also demonstrates its capability in determining the RI of multi-layered inhomogeneous tissues. From the co-registered images, the structure of tissues is clearly resolved, and the two parameters (n, t) at different locations can be calculated from a single measurement.

Our multimodal MPM/OCT system can provide structural and functional information about tissue morphology and bio-chemical composition. Furthermore, it can also provide quantitative information about tissue layer thickness and RI. Therefore, the multimodal MPM/OCT system can potentially have important applications in cancer detection and laser surgery.

8.2 Future work

The demonstration on fish cornea shows that our system has the potential for noninvasive in vivo measurement of RI and thickness. However, two problems must be solved to allow further development of this integrated system for in vivo measurement.
The first problem is the short penetration depth of the system, which is limited by the imaging depth of MPM. For highly scattering tissues, such as skin, only one layer can be detected in full width. To reduce scattering within tissues and increase the imaging depth, a source with a longer wavelength is preferred. The other problem is imaging speed. Although the OCT frame rate can go up to 80 frames/s, the overall imaging speed is limited by MPM, whose frame rate is only 0.4 frames/s. For in vivo imaging, high speed is necessary to reduce motion artifacts that might blur the images. High speed can possibly be achieved by using a multifocal MPM system or high-speed scanners.

Moreover, this thesis only provides a limited discussion and demonstration of the OCM modality, which can be further developed and expanded. Since OCM has no requirement for depth scanning, it can generate 3-D images at a much faster speed than MPM does. With high-NA objective lenses, OCM is capable of acquiring high-resolution en-face images at high speed. Therefore, by combining the high-resolution OCM and MPM modalities, the scattering, TPEF and SHG signals can be acquired simultaneously, whereas the OCT modality can be used for the acquisition of large field-of-view cross-sectional images.

Bibliography

[1] K. Doi, "Diagnostic imaging over the last 50 years: research and development in medical imaging science and technology", Physics in Medicine and Biology, 51, R5-R27 (2006).
[2] D. Simon, S. Lavallée, "Medical Imaging and Registration in Computer Assisted Surgery", Clinical Orthopaedics and Related Research, 354, pp. 17-27 (1998).
[3] W.R. Hedrick, D.L. Hykes, D.E. Starchman, Ultrasound Physics and Instrumentation, 4th ed., St. Louis: Mosby (2005).
[4] E.M. Haacke, R.F. Brown, M. Thompson, R. Venkatesan, Magnetic Resonance Imaging: Physical Principles and Sequence Design, 1st ed., New York: J. Wiley & Sons (1999).
[5] G.T.
Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd ed., Springer (2009).
[6] J.B. Pawley and B.R. Masters, "Handbook of Biological Confocal Microscopy", Journal of Biomedical Optics, 13, pp. 029902-029903 (2008).
[7] A.M. Zysk, F.T. Nguyen, A.L. Oldenburg, D.L. Marks, S.A. Boppart, "Optical coherence tomography: a review of clinical development from bench to bedside", Journal of Biomedical Optics, 12, pp. 051403 (2007).
[8] M. Göppert-Mayer, "Über Elementarakte mit zwei Quantensprüngen" [On elementary acts with two quantum jumps], Annalen der Physik, 401, pp. 273-294 (1931).
[9] W. Kaiser and C.G. Garrett, "Two-Photon Excitation in CaF2:Eu2+", Physical Review Letters, 7, pp. 229-231 (1961).
[10] W. Denk and K. Svoboda, "Photon Upmanship: Why Multiphoton Imaging Is More than a Gimmick", Neuron, 18, pp. 351-357 (1997).
[11] F. Helmchen and W. Denk, "Deep tissue two-photon microscopy", Nature Methods, 2, pp. 932-940 (2005).
[12] J.C. Jung and M.J. Schnitzer, "Multiphoton endoscopy", Optics Letters, 28, pp. 902-904 (2003).
[13] P.A. Franken, A.E. Hill, C.W. Peters, and G. Weinreich, "Generation of Optical Harmonics", Physical Review Letters, 7, pp. 118-119 (1961).
[14] R. Hellwarth and P. Christensen, "Nonlinear Optical Microscope Using Second Harmonic Generation", Applied Optics, 14, pp. 247-248 (1974).
[15] I. Freund, M. Deutsch, and A. Sprecher, "Connective tissue polarity. Optical second-harmonic microscopy, crossed-beam summation, and small-angle scattering in rat-tail tendon", Biophysical Journal, 50, pp. 693-712 (1986).
[16] N. Prent, R. Cisek, C. Greenhalgh, A. Major, B. Stewart, and V. Barzda, "Real-time studies of muscle cell contractions with second harmonic generation microscopy", Conference on Lasers and Electro-Optics (2009).
[17] D. Huang, E.A. Swanson, C.P. Lin, J.S. Schuman, W.G. Stinson, W. Chang, M.R. Hee, T. Flotte, K. Gregory, C.A. Puliafito, and J.G. Fujimoto, "Optical coherence tomography", Science, 254, pp. 1178-1181 (1991).
[18] A.F. Fercher, C.K. Hitzenberger, W.
Drexler, G. Kamp, H. Sattmann, "In vivo optical coherence tomography", American Journal of Ophthalmology, 116, pp. 113-114 (1993).
[19] E.A. Swanson, J.A. Izatt, M.R. Hee, D. Huang, C.P. Lin, J.S. Schuman, C.A. Puliafito, and J.G. Fujimoto, "In vivo retinal imaging by optical coherence tomography", Optics Letters, 18, pp. 1864-1866 (1993).
[20] W. Drexler, J.G. Fujimoto, Optical Coherence Tomography, 1st ed., Springer (2008).
[21] Z. Hu and A.M. Rollins, "Fourier domain optical coherence tomography with a linear-in-wavenumber spectrometer", Optics Letters, 32, pp. 3525-3527 (2007).
[22] K.K.H. Chan, "Spectral Domain Optical Coherence Tomography System Design: sensitivity fall-off and processing speed enhancement", Master's Thesis, University of British Columbia (2010).
[23] G.J. Tearney, M.E. Brezinski, J.F. Southern, B.E. Bouma, M.R. Hee, and J.G. Fujimoto, "Determination of the refractive index of highly scattering human tissue by optical coherence tomography", Optics Letters, 20, pp. 2258-2260 (1995).
[24] B.C. Wilson, "Modeling and measurement of light propagation in tissue for diagnostic and therapeutic applications", Laser Systems for Photobiology and Photomedicine, 252, pp. 13-27 (1991).
[25] M. Motamedi, S. Rastegar, G. LeCarpentier, and A.J. Welch, "Light and temperature distribution in laser irradiated tissue: the influence of anisotropic scattering and refractive index", Applied Optics, 28, pp. 2230-2237 (1989).
[26] S.R. Arridge and J.C. Hebden, "Optical imaging in medicine: II. Modelling and reconstruction", Physics in Medicine and Biology, 42, pp. 841-853 (1997).
[27] E. Borasio, J. Stevens, and G.T. Smith, "Estimation of true corneal power after keratorefractive surgery in eyes requiring cataract surgery: BESSt formula", Journal of Cataract & Refractive Surgery, 32, pp. 2004-2014 (2006).
[28] S. Patel, J.L. Alio, and A.
Artola, "Changes in the refractive index of the human corneal stroma during laser in situ keratomileusis", Journal of Cataract & Refractive Surgery, 34, pp. 1077-1082 (2008).
[29] C.P. Lohmann and J.L. Guell, "Regression after LASIK for the treatment of myopia: the role of the corneal epithelium", Seminars in Ophthalmology, 13, pp. 79-82 (1998).
[30] S. Patel, J.L. Alio, and J.J. Perez-Santonja, "Refractive index change in bovine and human corneal stroma before and after LASIK: a study of untreated and re-treated corneas implicating stroma hydration", Investigative Ophthalmology and Visual Science, 45, pp. 3523-3530 (2004).
[31] W.R. Zipfel, R.M. Williams, and W.W. Webb, "Nonlinear magic: multiphoton microscopy in the biosciences", Nature Biotechnology, 21, pp. 1369-1377 (2003).
[32] S. Tang, Y.F. Zhou, K.K.H. Chan, and T. Lai, "Multiscale multimodal imaging with multiphoton microscopy and optical coherence tomography", Optics Letters, 36, pp. 4800-4802 (2011).
[33] S. Tang, T.B. Krasieva, Z. Chen, B.J. Tromberg, "Combined multiphoton microscopy and optical coherence tomography using a 12-fs broadband source", Journal of Biomedical Optics, 11, pp. 020502 (2006).
[34] A. Esposito, F. Federici, C. Usai, F. Cannone, G. Chirico, M. Collini, A. Diaspro, "Notes on theory and experimental conditions behind two-photon excitation microscopy", Microscopy Research and Technique, 63, pp. 12-17 (2004).
[35] S.J. Mulligan and B.A. MacVicar, "Two-Photon Fluorescence Microscopy: Basic Principles, Advantages and Risks", Modern Research and Educational Topics in Microscopy, 1, pp. 881-889 (2007).
[36] A. Jabłoński, "Efficiency of Anti-Stokes Fluorescence in Dyes", Nature, 131, pp. 839-840 (1933).
[37] R.M. Williams, D.W. Piston, W.W. Webb, "Two-photon molecular excitation provides intrinsic 3-dimensional resolution for laser-based microscopy and microphotochemistry", The FASEB Journal, 8, pp. 804-813 (1994).
[38] P.J. Campagnola, M. Wei, A. Lewis, and L.M.
Loew, "High-Resolution Nonlinear Optical Imaging of Live Cells by Second Harmonic Generation", Biophysical Journal, 77, pp. 3341-3349 (1999).
[39] Z.E. Sikorski, Chemical and Functional Properties of Food Proteins, 1st ed., CRC Press (2001).
[40] P. Fratzl, Collagen: Structure and Mechanics, 1st ed., Springer (2008).
[41] G. Said, M. Guilbert, E. Millerot-Serrurot, L.V. Gulick, C. Terryn, R. Garnotel, P. Jeannesson, "Impact of carbamylation and glycation of collagen type I on migration of HT1080 human fibrosarcoma cells", International Journal of Oncology, 40, pp. 1797-1804 (2012).
[42] M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed., Cambridge University Press (1999).
[43] J.M. Schmitt, "Optical Coherence Tomography (OCT): A Review", IEEE Journal of Selected Topics in Quantum Electronics, 5, pp. 1205-1215 (1999).
[44] N. Nassif, B. Cense, B. Park, M. Pierce, S. Yun, B. Bouma, G. Tearney, T. Chen, and J. de Boer, "In vivo high-resolution video-rate spectral-domain optical coherence tomography of the human retina and optic nerve", Optics Express, 12, pp. 367-376 (2004).
[45] M. Wojtkowski, V. Srinivasan, T. Ko, J. Fujimoto, A. Kowalczyk, and J. Duker, "Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation", Optics Express, 12, pp. 2404-2422 (2004).
[46] C.E. Shannon, "Communication in the presence of noise", Proceedings of the Institute of Radio Engineers, 37, pp. 10-21 (1949).
[47] D.L. Marks, A.L. Oldenburg, J.J. Reynolds, and S.A. Boppart, "Autofocus Algorithm for Dispersion Correction in Optical Coherence Tomography", Applied Optics, 42, pp. 3038-3046 (2003).
[48] C. Palmer, Diffraction Grating Handbook, 5th ed., Richardson Grating Laboratory (2002).
[49] J. Spolsky, User Interface Design for Programmers, 1st ed., Apress (2001).
[50] S.
Ambler, "User Interface Design Tips, Techniques, and Principles", [online] available: http://www.ambysoft.com/essays/userInterfaceDesign.html.
[51] K. Wang, Z. Ding, T. Wu, C. Wang, J. Meng, M. Chen, and L. Xu, "Development of a non-uniform discrete Fourier transform based high speed spectral domain optical coherence tomography system", Optics Express, 17, pp. 12121-12131 (2009).
[52] A. Dubois, L. Vabre, A.C. Boccara, and E. Beaurepaire, "High-Resolution Full-Field Optical Coherence Tomography with a Linnik Microscope", Applied Optics, 41, pp. 805-812 (2002).
[53] M. Haruna, M. Ohmi, T. Mitsuyama, H. Tajiri, H. Maruyama, and M. Hashimoto, "Simultaneous measurement of the phase and group indices and the thickness of transparent plates by low-coherence interferometry", Optics Letters, 23, pp. 966-968 (1998).
[54] A. Knuttel and M. Boehlau-Godau, "Spatially confined and temporally resolved refractive index and scattering evaluation in human skin performed with optical coherence tomography", Journal of Biomedical Optics, 5, pp. 83-92 (2000).
[55] H. Maruyama, S. Inoue, T. Mitsuyama, M. Ohmi, and M. Haruna, "Low-coherence interferometer system for the simultaneous measurement of refractive index and thickness", Applied Optics, 41, pp. 1315-1322 (2002).
[56] X.Y. Wang, C.P. Zhang, L.S. Zhang, L.L. Xue, J.G. Tian, "Simultaneous refractive index and thickness measurements of bio-tissue by optical coherence tomography", Journal of Biomedical Optics, 7, pp. 628-632 (2002).
[57] F.P. Bolin, L.E. Preuss, R.C. Taylor, and R.J. Ference, "Refractive index of some mammalian tissues using a fiber optic cladding method", Applied Optics, 28, pp. 2297-2303 (1989).
[58] J.C. Lai, Z.H. Li, C.Y. Wang, and A. He, "Experimental measurement of the refractive index of biological tissues by total internal reflection", Applied Optics, 44, pp. 1845-1849 (2005).
[59] H.F. Ding, J.Q. Lu, K.M. Jacobs, and X.H.
Hu, "Determination of refractive indices of porcine skin tissues and intralipid at eight wavelengths between 325 and 1557 nm", Journal of the Optical Society of America A, 22, pp. 1151-1157 (2005).
[60] J.J. Dirckx, L.C. Kuypers, and W.F. Decraemer, "Refractive index of tissue measured with confocal microscopy", Journal of Biomedical Optics, 10, pp. 044014 (2005).
[61] S. Kim, J. Na, M.J. Kim, and B.H. Lee, "Simultaneous measurement of refractive index and thickness by combining low coherence interferometry and confocal optics", Optics Express, 16, pp. 5516-5526 (2008).
[62] Y.L. Kim, J.T. Walsh Jr, T.K. Goldstick, and M.R. Glucksberg, "Variation of corneal refractive index with hydration", Physics in Medicine and Biology, 49, pp. 859-868 (2004).
[63] R.C. Lin, M.A. Shure, A.M. Rollins, J.A. Izatt, D. Huang, "Group index of the human cornea at 1.3-microm wavelength obtained in vitro by optical coherence domain reflectometry", Optics Letters, 29, pp. 83-85 (2004).
[64] K.M. Meek, S. Dennis, and S. Khan, "Changes in the refractive index of the stroma and its extrafibrillar matrix when the cornea swells", Biophysical Journal, 85, pp. 2205-2212 (2003).
[65] D.R. Lide, Handbook of Chemistry and Physics, 93rd ed., CRC Press (2012).
[66] Anatomy of the eye [online], Available: http://www.fightforsight.org.uk/anatomy-of-the-eye.
[67] Histological structure of the eye [online], Available: http://simple-med.blogspot.ca/2008/07/histology-of-cornea.html.
[68] S. Patel, "Refractive index of the mammalian cornea and its influence during pachometry", Ophthalmic and Physiological Optics, 7, pp. 503-506 (1987).
[69] S.L. Meyer, Data Analysis for Scientists and Engineers, 1st ed., Peer Management Consultants (1992).
