UBC Theses and Dissertations

Spectral domain optical coherence tomography system design : sensitivity fall-off and processing speed… Chan, Kenny K. H. 2010

Spectral Domain Optical Coherence Tomography System Design: Sensitivity Fall-off and Processing Speed Enhancement

by

Kenny K.H. Chan
B.A.Sc., The University of Toronto, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Biomedical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (VANCOUVER)

August 2010

© Kenny K.H. Chan, 2010

Abstract

Spectral domain optical coherence tomography (SD-OCT) is an imaging modality that provides cross-sectional images with micrometer resolution. One major drawback of SD-OCT, however, is the depth-dependent sensitivity fall-off, by which image quality rapidly degrades in regions corresponding to deeper locations in the sample. This disadvantage is due to the finite spectral resolution of the hardware as well as to the software reconstruction method used.

SD-OCT employs a broadband light source for illumination and a spectrometer for signal detection. The spectrometer uses a diffraction grating to separate the spectral components by wavelength (λ), which are then detected by a CCD array. The sensitivity fall-off depends on the size of the spot focused on the CCD relative to the pixel size of the CCD array. This hardware contribution to the fall-off can be minimized by careful design of the spectrometer. The software reconstruction is based mainly on the discrete Fourier transform (DFT) of the measured spectral data, which can be performed quickly using the widely accepted fast Fourier transform (FFT) algorithm, provided that the input is sampled uniformly in the wavenumber (k) domain. Because of the inverse relationship between k and λ, the data must be resampled to achieve uniform spacing in k. The accuracy of the resampling method is important for the reconstruction, since the performance of the interpolation algorithm tends to degrade as the signal approaches the Nyquist sampling rate.
This also causes a sensitivity fall-off for signals originating at greater depths, which correspond to higher modulation fringe frequencies in the k domain.

The goal of this thesis is to outline the development of a real-time SD-OCT imaging system that can deliver high quality images. The aim is to solve two major problems of current state-of-the-art SD-OCT systems, namely the depth-dependent sensitivity fall-off and the image reconstruction time limitation. An SD-OCT system is demonstrated using a new reconstruction approach based on the non-uniform fast Fourier transform (NUFFT). Using parallel computing techniques, our system can produce high quality images at over 100 frames per second with less than 12.5 dB sensitivity fall-off over the full imaging range of 1.7 mm.

Table of contents

Abstract
Table of contents
List of tables
List of figures
Acknowledgements
Chapter 1 Introduction and background
1.1 Brief OCT history
1.2 Overview of OCT operation
1.2.1 Time domain optical coherence tomography
1.2.2 Frequency domain optical coherence tomography
1.3 Problem statement and motivation
1.4 Outline of project and collaboration
1.5 Organization of the thesis
Chapter 2 Principles of optical coherence tomography
2.1 Michelson interferometer
2.2 Spectral domain OCT with a low-coherence light source
2.3 Imaging range
2.4 Sensitivity fall-off
2.5 Dispersion effect
Chapter 3 System design part 1: interferometer, optics and control
3.1 Light source
3.2 Interferometer
3.3 Sample arm
3.4 Reference arm
3.5 Data acquisition and control
3.5.1 Camera
3.5.2 Frame grabber
3.5.3 Galvanometer control using analog waveform (data acquisition board)
3.5.4 Summary of control flow and trigger
Chapter 4 System design part 2: spectrometer design
4.1 Configuration and setup
4.2 Theory of sensitivity fall-off
4.2.2 Fall-off due to spot size (Gaussian function)
4.3 Simulation of interference fringe generation and sensitivity fall-off modelling
4.4 Detector
4.5 Grating
4.6.1 Selection of collimation optics
4.6.2 Aberration correction on focusing optics and spot size minimization
4.6.2.1 Seidel aberration coefficient
4.6.2.2 Field curvature
4.8 Quantitative verification of simulation
4.9 Alignment of CCD camera
4.10 Final design
Chapter 5 System design part 3: data processing
5.1 SD-OCT data processing
5.2 Conversion from wavelength to wavenumber
5.2.1 Spectrometer calibration
5.2.2 Linear interpolation
5.2.3 Cubic spline interpolation
5.2.4 Non-uniform discrete Fourier transform (NDFT)
5.2.5 Non-uniform fast Fourier transform (NUFFT)
5.3 Sensitivity fall-off with different reconstruction methods
5.4 Numerical dispersion compensation
5.6 Complex full range OCT
Chapter 6 System characterization and image demonstration
6.1 Sensitivity
6.2 Sensitivity fall-off
6.3 Axial resolution
6.4 Imaging range
6.5 Processing speed
6.6 Overall performance
6.7 Image demonstration
Chapter 7 Ultrasound and optical coherence tomography
7.1 Synchronization
7.2 Alignment
7.3 Co-registered images
Chapter 8 Conclusion
8.1 Significance of work
8.2 Future work and improvements
Bibliography
Appendix A Regarding the use of animal tissues

List of tables

Table 4.1: Beam diameter resulting from the use of different focal length optics. The diffraction grating has an aperture opening of 20.4 mm, so the beam collimated by the 150 mm lens is too large; part of the beam is blocked and the resulting beam diameter equals the grating aperture of 20.4 mm.
Table 4.2: Summary of the total Seidel aberration coefficient at 845 nm. A number closer to zero indicates a smaller aberration for the optical system.
Table 4.3: Sensitivity fall-off at initial and final tilt angles
Table 6.1: Comparison of SD-OCT systems with the 14x14 µm² camera at similar wavelengths.
Spectral resolution is inversely proportional to axial resolution in the images.
Table 6.2: Processing speed of comparable SD-OCT systems using specialized acceleration

List of figures

Figure 1.1: Resolution vs penetration depth of high resolution imaging modalities
Figure 1.2: Left: Michelson interferometer; Right: interference fringes with different coherence lengths
Figure 1.3: Time domain optical coherence tomography
Figure 1.4: Fourier domain optical coherence tomography. The reference mirror is stationary and the light backscattered from the different depths of the sample is collected simultaneously. This results in a much faster acquisition than TD-OCT, but requires a computationally intensive Fourier transform.
Figure 2.1: Typical Michelson interferometer setup
Figure 2.2: Detection method in SD-OCT. Light returning from the reference and sample paths is directed at a diffraction grating. The grating disperses the light into different directions based on wavelength, and the light is ultimately detected by a CCD array. Interference occurs on the CCD pixels and produces a pattern that contains the depth information of the reflector.
Figure 2.3: SD-OCT signal reconstruction via Fourier transform with related axial resolution parameter
Figure 2.4: Left: spectrum of a Ti:Sapphire laser [41]; Right: spectrum of an SLD [42]
Figure 2.5: Depth effect on SD-OCT signals of a single reflector. Higher frequency oscillation in the k domain corresponds to reflections at deeper locations.
Figure 2.6: Relationship between spectral sampling and imaging range. By the Fourier transform theorem, a larger sampled spectral range ∆Λ converts to a smaller bin spacing ∆pz in the z domain. Top: the detected spectral range is wider than the source bandwidth, which results in a shallower imaging depth but higher spectral resolution. Middle: the detected spectral range is similar to the source bandwidth, a balance between imaging range and resolution. Bottom: the detection bandwidth is less than the source spectrum; imaging depth increases at the expense of axial resolution.
Figure 2.7: Axial profile of two closely spaced reflectors. The source coherence function is convolved with the delta functions representing the reflective surfaces. The two surfaces can only be distinguished from each other if their spacing in the z domain is greater than ∆z.
Figure 2.8: Illustration of the effect of depth-dependent sensitivity fall-off. With a mirror acting as the sample, the reflected power is kept constant while the mirror location is varied. Mirror positions representing deeper locations produce smaller amplitudes in the detected reflectivity, even though the reflected powers are the same.
Figure 2.9: Modulation transfer function. Higher spatial frequency in the object space results in decreased intensity contrast in the image space.
Figure 2.10: The effect of different focusing optics on the detected interference modulation. Left: an ideal case where the focused spot is small and contained within a pixel. Right: a large focal spot results in a loss of light and spectral cross-talk between pixels.
Figure 2.11: Dispersion in an SD-OCT system. Top: interference modulation with dispersion; note the uneven periods in the signal. Middle: interference modulation without dispersion. Bottom: the reconstructed axial profiles using the above interference signals; note the broadened width (lowered resolution) of the signal containing dispersion.
Figure 3.1: SD-OCT system setup: SLD, superluminescent diode; 50/50 FC, fused fiber coupler; PC, polarization controller; CL1/2, 15 mm collimation lens; NDF, neutral density filter; FL1/2, 30 mm achromatic focusing lens; CL3, 75 mm achromatic collimation lens; ASL, 4-element 100 mm air-spaced lens; DAQ, data acquisition board.
Figure 3.2: Absorption spectra in the near-infrared wavelength range of typical components of biological samples [47].
Figure 3.3: Left: Superlum SLD-371 spectrum with FWHM bandwidth and central wavelength indicated [42]. Right: SLD spectrum measured with an ANDO AQ6135A optical spectrum analyzer; measured center wavelength = 844.68 nm, measured FWHM bandwidth = 45.5 nm.
Figure 3.4: Sample arm setup
Figure 3.5: Sample arm optics showing Gaussian beam size and lens specifications
Figure 3.6: Gaussian beam shown with its waist and depth of focus.
Figure 3.7: Reference arm optics; the components within the dashed box are mounted on the same micrometer stage to allow simultaneous movement.
Figure 3.8: A-line acquisition and triggering signals. Top: linescan configuration in the frame grabber; Bottom: altered 2D configuration
Figure 3.9: Galvanometer controlling waveform and its associated trigger. Positive voltage denotes an anti-clockwise rotation and negative voltage a clockwise rotation.
Figure 3.10: Control signals of the SD-OCT system. The camera produces a synchronization pulse after each exposure that is redirected to the DAQ board by the frame grabber. The DAQ board uses this triggering signal as an update signal for the galvanometer controlling voltage waveform.
Figure 4.1: Spectrometer layout for the SD-OCT system. There are four important components: the collimation lens, diffraction grating, focusing lens and CCD camera.
Figure 4.2: Effect of pixel width and Gaussian beam width on signal fall-off. The red cosine modulation is Fourier transformed into the red peaks in the z domain; the rect function transforms into a sinc function and the Gaussian transforms into another Gaussian in the z domain. The fall-off effects have been emphasised in this figure.
Figure 4.3: Sensitivity fall-off of the sinc component for a 1024 pixel camera capturing a spectral range of 101.3 nm centered at 845 nm.
.......................53  Figure 4.4:  Sensitivity fall-off due to the Gaussian factor for a range of average spot size using a 14x14µm2 pixel CCD .....................................................54  Figure 4.5:  Graphical interpretation of the PSF. The spectrum is detected by a linear array of finite sized CCD pixels. Each pixel integrates the light within its area. PSF is the point spread function of the beam with wavelength focused at the center of the CCD pixel...................................56  Figure 4.6:  Simulated fringe amplitude with different spot size. Blue represents FWHM spot size of 14µm (equivalent to pixel size) and red represents FWHM spot size of 28 µm (equivalent 2x pixel size) ...............................59  Figure 4.7:  Simulated depth dependent sensitivity fall-off; the legend shows the spotsize to pixel size ratio. As expected the fall-off is worst with large ratio. ...........................................................................................................59  viii  Figure 4.8:  Spectral response of the E2V Aviiva SM2 1014 camera, the 14x14µm2 version was used in the SD-OCT system of this project. ..........................60  Figure 4.9:  FWHM spot size at the CCD plane with a 100mm focusing lens and varied collimation lens. Note that the change in spot size is largely due to the curved focal plane and physical location of the beam. The offcenter wavelength are actually out of the focus on the CCD plane. The grating is positioned at the back focal length of the lens and the camera is positioned at the lens’ front focal distance.............................................64  Figure 4.10: Four lens configurations are considered for the focusing optics...................66 Figure 4.11: Field curvature of lens design; a) Singlet 100mm lens; b) 100mm achromatic doublet lens; c) Rapid Rectilinear lens d) Custom design lens. 
Note the change in axis scale between lens designs; both the rapid rectilinear lens and the 4-lens custom design show a much flatter focal plane.
Figure 4.12: Modulation transfer function: a) singlet 100 mm lens; b) 100 mm achromatic doublet lens; c) rapid rectilinear lens; d) custom design lens
Figure 4.13: x dimension of the spot vs wavelength. The positioning of the lens was optimized in Zemax to give the smallest spot size.
Figure 4.14: y dimension of the spot vs wavelength. The positioning of the lens was optimized in Zemax to give the smallest spot size.
Figure 4.15: Spot profiles over the full wavelength range: a) singlet 100 mm lens; b) 100 mm achromatic doublet lens; c) rapid rectilinear lens; d) custom design lens. The scales of the pictures are not equal; the illustration is meant to show the shape and relative size in the x-y dimensions.
Figure 4.16: CCD camera setup; the red arrow shows the direction of movement used when verifying the focal curvature. Note the curved focal surface and the flat CCD plane.
Figure 4.17: Detected intensities at the lens focus of the three laser diodes at 808, 850 and 903 nm combined into a single plot. Top: intensity detected with the achromatic lens; note the relatively small signal at 808 nm, indicating that it is not focused on the CCD pixel. Bottom: intensity detected with the rapid rectilinear lens; the more evenly distributed intensity indicates all three wavelengths were focused onto the CCD.
Figure 4.18: Contour plots of the detected signals of the three laser diodes.
The y-axis represents the distance of the CCD from the focusing lens and the x-axis is the CCD pixel number. The intensity is presented in false color, with red corresponding to the highest reading and blue the lowest. Top: doublet achromatic lens; bottom: rapid rectilinear lens.
Figure 4.19: Experimental data of sensitivity fall-off. Top: achromatic doublet lens; Bottom: rapid rectilinear lens. Both are measured across the full imaging range of the system.
Figure 4.20: Alignment of the camera and its associated optics
Figure 4.21: Geometry of the spectrometer alignment
Figure 4.22: Schematic of the spectrometer
Figure 5.1: Data processing steps for SD-OCT
Figure 5.2: NUFFT algorithm
Figure 5.3: Resampling into equally spaced bins using the Gaussian interpolation kernel. The blue circles are the original unevenly sampled data. A Gaussian function is convolved with each original data point, spreading its power over a few adjacent bins; each bin accumulates the power from nearby points via addition. The evenly spaced bins can then be Fourier transformed with the FFT.
Figure 5.4: Sensitivity fall-off using different reconstruction methods (with the rapid rectilinear lens)
Figure 5.5: (a) Typical point spread function with a single partial reflector: linear interpolation, cubic spline interpolation, NDFT, and NUFFT are shown in blue, red, black, and green respectively.
Figure 5.6: Ex-vivo OCT image of the eye of a squid processed using a) linear interpolation + FFT, b) cubic spline interpolation + FFT, c) NDFT, d) NUFFT; scale bars are 0.5 mm.
Figure 5.7: Analysis of corneal images, highlighting the difference at the anterior surface. a) Linear interpolation with FFT, d) NUFFT. Shown on the left is a representative A-line (number 241) of the zoomed-in images. NUFFT produced an image with higher intensity. The red arrow indicates the location of the artifact due to the shoulder effect in reconstructions with linear and cubic interpolation.
Figure 5.8: Analysis of corneal images showing the difference at the posterior surface. a) Linear interpolation with FFT, d) NUFFT. Shown on the left is a representative A-line (number 175) of the zoomed-in images. NUFFT produced an image with higher intensity. The red arrow indicates the location of the artifact due to the shoulder effect in reconstructions with linear and cubic interpolation.
Figure 5.9: 512 A-line frame processing time with numerical dispersion compensation. Platform: Intel Core 2 Duo E4500 at 2.2 GHz. Frame rate in frames per second is denoted in brackets.
Figure 5.10: Sequence of control in the SD-OCT system. a) Single-threaded control, where the system performs only one task at a time; b) multi-threaded control, where the system makes use of idle time that is otherwise wasted.
Figure 5.11: Acquisition and processing sequence
Figure 5.12: 512 A-line frame processing time with an Intel Core 2 Quad Q9400 at 2.66 GHz. Frame rate in frames per second is denoted in brackets.
Figure 5.13: Illustration of the offset (s) needed for complex full range OCT. f denotes the focal length of the lens.
Figure 5.14: Reconstructed axial profile using complex SD-OCT showing conjugate mirror suppression of 7 dB
Figure 6.1: Axial resolution with different processing methods
Figure 6.2: In-vivo OCT image of the human distal phalanx at the palmar surface (finger tip)
Figure 6.3: In-vivo OCT image of the human distal phalanx at the dorsal surface
Figure 6.4: In-vivo OCT image of the human finger nail bed, showing the transition from nail to skin
Figure 6.5: Ex-vivo image of bovine omasum
Figure 6.6: Ex-vivo image of chicken skin
Figure 6.7: OCT image of an onion; some cellular structure can be observed
Figure 6.8: OCT image of a lettuce leaf
Figure 6.9: Ex-vivo lateral scan image of a tiger shrimp across the 2nd and 3rd abdominal segments (tergum)
Figure 6.10: Ex-vivo image of a tiger shrimp with shell removed
Figure 7.1: Synchronization scheme in the combined HF-ultrasound SD-OCT system
Figure 7.2: Left: 3D view of the alignment phantom; right: an ultrasound image of the phantom [Courtesy of Narges Afsham]
Figure 7.3: Ex-vivo OCT image of bovine cornea, 48 hours post-mortem, taken at 50 µs exposure time
Figure 7.4: OCT and ultrasound images of an ex-vivo bovine eye; bottom: co-registered result of the two modalities; both axes represent the pixel number [Courtesy of Narges Afsham]

Acknowledgements

I would like to thank my supervisor, Dr. Shuo Tang, for her continual guidance. I am grateful to have been offered the opportunity to work in her lab and to have been given ownership of the project to develop a state-of-the-art SD-OCT imaging system.

I would also like to thank the past and present members of the Biophotonics lab for their expertise, assistance and support. I also want to express my gratitude to Andrew Robinson for volunteering to edit this thesis.

Finally, I would like to thank my family and friends for their unconditional support through the course of my studies.

Kenny Chan
University of British Columbia
April 2010

Chapter 1 Introduction and background

Medical imaging is an indispensable tool used by medical professionals for disease diagnosis, treatment planning, and surgical guidance. Imaging technologies such as ultrasound, magnetic resonance imaging, and computed tomography have allowed the investigation of structures in the human body at the organ level, with resolutions ranging from tens of micrometers to millimeters [1-6]. However, for many diseases, including carcinoma and atherosclerosis, higher resolution is needed to study the sample in-situ at the tissue and cellular levels [7,8]. For histological examination of such potential ailments, excisional biopsies of the tissues are typically performed, followed by staining and observation under a microscope.
For in-situ screening, however, an alternative method of imaging must be developed to match the resolution of the “gold standard” provided by traditional biopsy.

Figure 1.1: Resolution vs penetration depth of high resolution imaging modalities

Imaging technologies are often bounded by the resolution versus penetration depth trade-off. Figure 1.1 provides an overview of the resolution and penetration depth of typical imaging modalities. Clinical ultrasound employs acoustic waves between 3-40 MHz and provides a resolution of 0.1-1 mm [1-2]. The comparatively long wavelengths are not attenuated significantly in biological tissues, thereby offering deep imaging of the body. Clinical and research prototypes of high frequency ultrasound (HFUS), used commonly in intravascular ultrasound (IVUS), typically possess resolutions of 15-20 µm with the use of frequencies of up to 100 MHz [9]. These high frequencies, however, suffer greater attenuation and are limited to a few millimetres of penetration.

On the other hand, microscopy using the confocal technique is a high resolution modality [10]. The resolution generally reaches one micrometer and is restricted only by the diffraction limit of light. However, the penetration depth is severely limited by scattering in biological samples, which reduces contrast as well as the signal-to-noise ratio (SNR). With a useful imaging range of a few hundred micrometers, it is not suitable for in-situ imaging where malignant structures are located deeper in the body.

Optical Coherence Tomography (OCT) is an imaging technique that falls in between ultrasound and confocal microscopy in terms of resolution and penetration depth. It can typically acquire images of structures a few millimeters deep within a sample with a resolution of less than 10µm. This combination makes it a strong candidate for in-vivo and in-situ imaging of epithelial structures, and it could possibly replace excisional biopsy as a non-invasive alternative.
1.1 Brief OCT history

The first OCT images were demonstrated by Huang et al. [11] in 1991. The ex-vivo images were of the human retina and coronary arteries. The images confirmed the ability of OCT to image transparent as well as highly scattering materials. The images were taken at an 830nm center wavelength with a resolution of 15µm. The published results attracted the attention of many researchers and accelerated OCT development. By 1993, the first in-vivo images of the retina were captured independently by Fercher et al. [12] and Swanson et al. [13]. The development and acceptance of OCT in ophthalmology was rapid, and by 1996 the first commercial ophthalmic OCT instrument was introduced by Carl Zeiss Meditec [14].

Imaging in tissues less transparent than the eye became possible after recognizing that a longer wavelength near 1300nm allowed for reduced scattering and improved penetration depth [15]. In the past decade, applications of OCT have expanded into other medical fields such as gastroenterology [16], gynaecology [17], pulmonology [18], urology [19, 20] and cardiology [21]. The most common usage is to screen for early stages of neoplasia in the epithelium, which is a surface lining located within OCT’s imaging range. Flexible probes and endoscopes were the key to the success of in-vivo OCT imaging, allowing access to the various lumens of the body through light transmitted via a single mode optical fiber housed in a protective sheath. At the distal end of the fiber, the light is focused and redirected radially outward by a graded-index lens and a micro-prism. The OCT image is generated by a rotational scan of the light beam, resulting in a cross-sectional representation of the luminal structure. This instrument has been commercialized by LightLabs Imaging and has recently been cleared by the FDA for use in interventional cardiology [22].
1.2 Overview of OCT operation

OCT is a modality that fills the gap between HFUS and confocal microscopy. OCT can provide cross-sectional images with a resolution of several micrometers, which is ~10 times finer than HFUS. Unlike other imaging modalities, the resolution of OCT does not have an inverse relation with penetration depth. Higher resolution requires a broader optical bandwidth, which is typically provided by a femtosecond laser or a superluminescent light emitting diode (SLD). The penetration depth, however, does relate to the central wavelength of the light source, which is typically chosen to be within the tissue imaging optical windows of 800nm or 1310nm [14].

The operation of OCT is very similar to that of ultrasound imaging. Ultrasound transmits acoustic waves into a sample and measures the reflected waves. By recording the delay time and amplitude of reflections, an axial profile at a single transverse location in the sample can be produced. Instead of sound waves, OCT uses light waves. Light, however, travels at a speed much greater than sound. The response time of current photodetectors is much slower than the echo return time; therefore the delay cannot be measured directly by electronic means. The measurement is accomplished using a technique called low-coherence interferometry and is commonly performed using a Michelson interferometer as depicted in figure 1.2. Light from a source is divided and directed towards a reference path and a sample path by a beam splitter. The backscattered light from the sample arm and the reflection from the reference mirror are reflected back along the incident path. The two waves merge once again at the beam splitter and are directed towards a detector. The light waves from the two arms recombine and produce interference fringes on the photodetector. For a monochromatic light source, the interference can be seen over a wide range of pathlength differences ∆z between the two arms.
However, with the use of a low-coherence broadband source, the interference modulation only appears when the pathlength mismatch is within the coherence length.

Figure 1.2: Left - Michelson interferometer; Right - interference fringes with different coherence lengths

1.2.1 Time domain optical coherence tomography

To obtain a cross-sectional view of the sample, the beam in the sample arm must be scanned across the surface of the sample as shown in figure 1.3. This is accomplished by the use of a scanning mirror. At each transverse location, the scanning mirror is held stationary while the reference mirror is translated over a range of ∆z to obtain an axial scan (A-line) [23]. For each reflective surface, a peak is created in the axial profile. The process is repeated across the sample and, by placing the A-lines side by side with their amplitudes representing the strength of reflections, an OCT image is formed as shown in the inset of figure 1.3. Since the axial data is collected by translating the reference mirror with a time-varying location, this method is called time domain optical coherence tomography (TD-OCT).

Figure 1.3: Time domain optical coherence tomography

1.2.2 Frequency domain optical coherence tomography

In recent years, Fourier domain optical coherence tomography (FD-OCT) has experienced a large increase in attention due to its advantages in imaging speed as well as signal-to-noise ratio over TD-OCT [24]. FD-OCT has a stationary reference mirror and measures all the reflected light in the sample simultaneously. Its setup is illustrated in figure 1.4. It calculates the echo delay time by a Fourier transform (FT) of the interference spectrum of the light. There are two established ways of realizing an FD-OCT system. Swept-source OCT (SS-OCT) uses a frequency-tuneable laser and a point detector. The laser is rapidly swept across its frequency range for each sample location.
Its detector records the interference at each wavelength individually. Spectral domain OCT (SD-OCT), on the other hand, employs a broadband light source together with a spectrometer for detection. Both methods result in data sets that represent the intensity distribution as a function of wavelength. These data are then further processed to create depth profiles of tissue reflectivity.

Figure 1.4: Fourier domain optical coherence tomography. The reference mirror is stationary and the light backscattered from the different depths of the sample is collected simultaneously. This results in a much faster acquisition than TD-OCT, but requires a computationally intensive Fourier transform.

1.3 Problem statement and motivation

The focus of this thesis is the development of an SD-OCT system with a potential for future integration and co-registration with other imaging modalities. Different imaging methods use different contrast mechanisms, which would provide complementary information to the user. One possible candidate is multiphoton microscopy, in which the excitation wavelength is similar to that used in OCT. This will allow for simultaneous imaging with both modalities [25]. Other applications also include co-registered imaging of corneas with HFUS and OCT, for which both modalities have been used separately in clinical ophthalmic applications [26]. Enhancement of OCT penetration and axial resolution by ultrasound modulation has also been demonstrated [27, 28]. Therefore the combination of HFUS and OCT can simultaneously increase system performance and provide extra information to the user.

SD-OCT systems commonly suffer from what is known as axial depth dependent sensitivity fall-off [29], which is absent in TD-OCT. Even without any absorption or scattering from a sample, the sensitivity decreases at deeper depths (relative to the reference).
In other words, reflected light waves with identical intensities originating from different depths will result in different detection signal amplitudes. Reflections from deeper surfaces will appear to be weaker, causing the image quality at deeper locations to be degraded. A deep reflective surface will also tend to be blurred by unwanted artifacts. This disadvantage of SD-OCT reduces its useful imaging range and limits its ability to display the morphology of deep internal structures. This fall-off arises from two major factors: the non-ideal optics in the hardware design of the spectrometer and the inaccuracy of the numerical calculations in the software reconstruction method. Part of this thesis will focus on reducing the fall-off in an effort to increase the useful imaging range of SD-OCT.

SD-OCT has also demonstrated speed and SNR advantages over traditional TD-OCT [30]. Without the mechanically scanned reference mirror, SD-OCT can acquire images at over 100x the speed of traditional TD-OCT. The fast acquisition speed reduces motion artifacts caused by the movement of the sample [31]. It also opens up opportunities for 3D imaging [32] as well as Doppler flow measurement [33]. Processing of SD-OCT data to form an image, however, is more complex than in TD-OCT and is typically the limiting factor in SD-OCT display. In real-time SD-OCT systems, a common scheme to reduce processing time is to use a simple but less accurate reconstruction algorithm that worsens the sensitivity fall-off. As such, one can observe a compromise between speed and quality, both of which are important considerations for in-vivo imaging. Higher quality images could be produced in an offline mode using a complex algorithm that has better sensitivity [34], but the lack of real-time display precludes its use for in-vivo diagnostic or surgical guidance, where positioning of a location of interest is needed immediately.
To alleviate the speed problem, some systems use dedicated hardware such as field programmable gate arrays (FPGAs) [35] or digital signal processing (DSP) [36] modules for reconstruction. However, specialized hardware has limitations in compatibility and expandability. In addition, future integration with other systems will be more complicated due to the limited number of input/output ports available for synchronization.

The goal of this thesis is to develop a real-time SD-OCT imaging system that can deliver high quality images without the use of specialized processing hardware. The aim is to solve two major problems of current state-of-the-art SD-OCT systems, namely axial depth dependent sensitivity fall-off and the image reconstruction time limitation. The system will be based on a workstation computer platform, for which future integration with other imaging modalities should be relatively simple due to an extensive array of input/output ports and expansion PCI/PCI-E slots.

1.4 Outline of project and collaboration

The development of the SD-OCT system included the following steps. Some sections were performed in parallel and the list is not strictly chronological. Collaboration and assistance from others are listed in their respective sections.

•  Investigate the applications of OCT along with background information and theory. Coursework and literature review provided the necessary knowledge to define project parameters and specifications. Essential knowledge includes biophotonics, optics, electronics, data processing and programming.

•  Understand the flow chart of OCT data acquisition and processing. A Visual C++ program from previous students was used as a basic reference for the design of the data acquisition software. Knowledge of the specifications of the data acquisition electronics, frame grabber interface and camera was necessary to operate these components programmatically for data capture.
Knowledge of user interface design techniques using Microsoft MFC was also a prerequisite for implementing the program’s front end.

•  Develop a prototype SD-OCT system from individual components to study and gain insight into its real-life operation. Together with another graduate student, Sunny Yuen, a rudimentary SD-OCT system was assembled in August 2008 to acquire A-lines. Hardware setup was completed by Sunny Yuen, while the software for processing and control was implemented by Kenny Chan.

•  Investigate the system performance of the prototype SD-OCT system and perform a comparison with other systems in the literature. The results were analysed and specific criteria for improvement were pinpointed for further analysis. Components were upgraded as much as possible to match the performance of state-of-the-art systems in the literature. Mechanical mounts for the galvanometer and the enclosure of the controller board were constructed by undergraduate research student Arthur Cheung.

•  Optimize the spectrometer to obtain source-limited axial resolution and to reduce sensitivity fall-off. Research into physical and geometric optics was accomplished through a literature study. Knowledge of optical aberrations and non-ideal effects was essential in designing a good spectrometer. Optics used in the spectrometer designs were then simulated using Zemax and Matlab. An optimum design was chosen and implemented on the SD-OCT system.

•  Adjust the alignment of the system to achieve maximum coupling efficiency. The sensitivity of the system is highly dependent on its coupling efficiency, and any misalignment could attenuate the signal of interest. Proper calibration of the spectrometer is also needed to accurately reconstruct the image without artifacts. The calibration procedure required the use of three narrow-linewidth laser diodes, which were selected and coupled into a fiber by undergraduate student Tamer Mohamed.
•  Study the processing methods used to reconstruct SD-OCT images. Several common methods are used to form an image, each with individual advantages and disadvantages. The effect of these methods on the sensitivity fall-off was analyzed and compared. A technique new to the OCT community was implemented and evaluated for its effectiveness in OCT reconstruction.

•  Improve the processing speed limitation of SD-OCT. Determining the bottleneck of the SD-OCT system enables the developer to reduce the processing time via algorithm optimization. Processing can also be accelerated by multiprocessing with a quad-core workstation.

•  Integrate OCT with HFUS through collaboration with Dr. R. Rohling. The alignment of the two systems was adjusted with the aid of a calibration phantom. The resulting hybrid was used to image ex-vivo bovine eyes as a proof-of-concept. The experimental apparatus was developed by Leo Pan and Kenny Chan. The HFUS control software and calibration phantom were made by Leo Pan. The synchronization interface between the two systems was designed by Kenny Chan. The iterative calibration software and image co-registration algorithm were written by Narges Afsham.

1.5 Organization of the thesis

Chapter two: The theory of OCT, starting from the fundamentals of interferometry to SD-OCT imaging, will be examined. System specifications pertaining to design parameters such as axial resolution, imaging depth, sensitivity fall-off and dispersion will be discussed.

Chapter three: General hardware components and the layout of the SD-OCT system will be presented. The reader will discover the selection criteria for the light source, the interferometer configuration, the arrangement of the reference arm, and the design of the sample arm scanning optics.

Chapter four: This chapter will focus on the design of the spectrometer with specific attention to minimizing sensitivity fall-off.
Zemax optical simulation is presented to aid the selection and optimization of the spectrometer optics. The CCD choice is also discussed along with diffraction grating theory.

Chapter five: Processing of OCT data is analyzed in this chapter. Traditional approaches combine interpolation and the fast Fourier transform (FFT) algorithm for signal reconstruction. The sensitivity fall-off resulting from various reconstruction methods is compared and a novel processing algorithm using the non-uniform fast Fourier transform (NUFFT) is presented. Acceleration of processing by multiprocessing is also discussed.

Chapter six: Performance characterization of the developed system is presented along with image demonstrations. Comparison to other OCT systems in the literature is presented with discussion of the limitations of the current system.

Chapter seven: This chapter will demonstrate the combined OCT/HFUS imaging of an ex-vivo bovine eye. The system setup as well as the methods of synchronization are examined. The method of calibration is briefly examined and a co-registered image is presented to the reader.

Chapter eight: The final chapter will conclude with a prospective discussion on future work and directions.

Chapter 2 Principles of optical coherence tomography

A solid background in OCT is necessary for the design and optimization of the system. OCT is based on the theory of interferometry [23], in which patterns of interference due to the superposition of multiple waves are studied. These distinctive patterns allow for the determination of the location where light is reflected back. Using this knowledge, one can construct a depth profile as well as a cross-sectional image of an object of interest. The mechanism by which these profiles and images are captured and reconstructed is called OCT. The physics and theory behind OCT are the main focus of this chapter, and becoming familiar with them is the first step in the development of an OCT system.
Each system can be characterized by a set of typical parameters such as axial resolution, lateral resolution, imaging range and sensitivity fall-off. They are used to gauge system performance and act as a platform for intersystem comparisons. These parameters, along with their dependencies, will be studied in order to gain insight into OCT design. This will allow the developer to evaluate current performance and set a reasonable and reachable target. Possessing sound background knowledge, one can systematically improve these parameters and verify them in experiments.

2.1 Michelson interferometer

A common configuration in interferometry is the Michelson interferometer [37] as shown in figure 2.1. The interferometer consists of a light source, a beam splitter, two mirrors and a detector. Light emitted from the source is divided by the beam splitter between the two arms of the interferometer. Waves reflecting back from the sample and reference arms, of length ls and lr respectively, recombine at the beam splitter and propagate towards the detector. The superimposed waves create an interference pattern on the detector surface, creating the data set that is to be analysed.

Figure 2.1: Typical Michelson interferometer setup

The detected signal can be given by [23]:

I ∝ |Er + Es|²    (2.1)

where Es and Er are the reflected electric fields from the sample and reference arm respectively. For a monochromatic source, equation (2.1) can be rewritten as:

I ∝ |Ar·e^(−j(2k·lr − ωt)) + As·e^(−j(2k·ls − ωt))|²    (2.2)

where k represents the angular wavenumber and ω is the angular frequency of the wave. Expanding the magnitude squared,

I ∝ Ar² + As² + Re{Er·Es*} + Re{Es·Er*} = Ar² + As² + 2·Ar·As·cos(k·Δl)    (2.3)

where Δl = 2(lr − ls) is the round-trip path length difference between the two arms.

The third term in this expression is the cross-correlation term [38,39], and it depends on the path length mismatch between the sample arm and reference arm given by Δl.
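As a quick numerical check of equation 2.3, the sketch below (illustrative only; the wavelength and amplitude values are assumptions, not parameters of this system) sweeps the path length difference and shows the detected intensity oscillating with a period of one wavelength in Δl:

```python
import numpy as np

# Assumed, illustrative values: 830nm source, unequal arm amplitudes
wavelength = 830e-9           # center wavelength (m)
k = 2 * np.pi / wavelength    # angular wavenumber
Ar, As = 1.0, 0.2             # reference and sample field amplitudes

# sweep the path length difference over a few wavelengths
dl = np.linspace(0, 5 * wavelength, 1000)
I = Ar**2 + As**2 + 2 * Ar * As * np.cos(k * dl)   # equation 2.3

# locate interior fringe maxima: they recur once per wavelength in dl
is_peak = np.r_[False, (I[1:-1] > I[:-2]) & (I[1:-1] > I[2:]), False]
peaks = dl[is_peak]
print(np.diff(peaks) / wavelength)  # spacing between maxima ≈ 1 wavelength
```

The fringe period in Δl equals the source wavelength, which is why a larger path length mismatch produces a proportionally higher-frequency modulation when k, rather than Δl, is the variable being scanned.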
The intensity reflected back from a real tissue is usually much smaller than the reflection from the reference arm. Ignoring the very small term As² and subtracting the measurable term Ar² from equation 2.3, only the third term remains, which is the cross-correlation term containing the interference information. This interference has a frequency which is determined by the path length difference Δl. A larger path length difference will produce a higher frequency modulation in the angular wavenumber domain. This allows for the determination of the path length difference, which is essential in locating reflectivity changes in a sample of interest.

2.2 Spectral domain OCT with a low-coherence light source

When a low-coherence source of finite bandwidth is used in conjunction with the Michelson interferometer, the detected signal can be written as a sum of contributions from all the monochromatic waves reflected from the sample [38,39],

I(k) = s(k)·[ Rr + Σi Ri + 2·√Rr·Σi √Ri·cos(k·Δli) + 2·Σi Σj≠i √(Ri·Rj)·cos(k·Δlij) ]    (2.4)

In this expression, s(k) is the spectral intensity distribution of the light source. Rr is the reflectivity of the reference arm mirror. Ri and Rj are the reflectivities of the ith and jth layers of the sample; Δli is the optical path length difference of the ith layer compared to the reference arm, and similarly Δlij is the path length difference between the ith and jth sample layers. The third term in equation 2.4 encapsulates the axial depth information in the sample, which appears as interference of light waves. I(k) in equation 2.4 is the intensity as a function of the angular wavenumber k, which can be measured by separating the different spectral components using a diffraction grating as illustrated in figure 2.2. The diffraction grating redirects light of different wavelengths into different directions, allowing the CCD pixels to detect the intensity value at particular wavelengths.

Figure 2.2: Detection method in SD-OCT.
Light returning from the reference and sample paths is directed at a diffraction grating. The grating disperses the light into different directions based on wavelength, and the light is ultimately detected by a CCD array. Interference occurs on the CCD pixels and produces a pattern that contains the depth information of the reflector.

The depth profile of the sample is retrieved from the detected signal by performing a Fourier transform (FT) from the k to z domain, resulting in the following equation [38,39],

FT⁻¹(k→z)[I(k)] = Γ(z) ⊗ [ Rr·δ(z) + Σi Ri·δ(z) + 2·√Rr·Σi √Ri·δ(z ± Δli) + 2·Σi Σj≠i √(Ri·Rj)·δ(z ± Δlij) ]    (2.5)

Here Γ(z), the FT of the source spectrum, represents the envelope of its coherence function. The variable z = lr − ls represents the path length difference between the reference arm and the depth location of the reflection. The first and second terms in the bracket of equation 2.5 are non-interferometric, and contribute to a DC term at z=0. The
Therefore the envelope determines the full width half max (FWHM) resolution of the OCT system, which is dependent on the center wavelength and spectral bandwidth of the system [40]:  ∆z =  2 ln 2 λ2o π ∆λ  (2.6)  The assumption of a Gaussian shaped spectrum does not always hold true in real OCT systems. Femtosecond sources such as the Titanium:Sapphire laser have spectral shapes close to a Gaussian, but there are other sources such as a superluminescence diodes, that will not have a spectrum resembling a Gaussian as shown in figure 2.4. Nevertheless, equation 2.6 is a useful guidance equation for preliminary design work.  17  Figure 2.3: SD-OCT signal reconstruction via Fourier transform with related axial resolution parameter.  Figure 2.4: Left, spectrum of Ti:Sapphire Laser [41]; Right, spectrum for SLD [42]  18  2.3 Imaging range  SD-OCT images are constructed using multiple axial profiles placed adjacent to each other. Each axial profile will contain the information of reflectivity for each transverse sample location. As seen from the Fourier relationship between equation 2.4 and equation 2.5, the depth location (z) where the reflection originated is deduced from the cosine as a function of angular wavenumber (k). Low frequency oscillation in the signal measured in the k domain represents a reflection from a shallow location. Similarly, a high frequency oscillation corresponds to a deeper location. For real samples, however, reflections could originate from a number of locations and the corresponding signal is the sum of all oscillations as described by equation 2.4. The relation between the signal oscillation and the location is summarized in figure 2.5.  Figure 2.5: Depth effect on SD-OCT signals of a single reflector. Higher frequency oscillation in the k domain corresponds to reflections at deeper locations.  As one can predict, there will be a depth limit to the SD-OCT when the axial profile can no longer be reconstructed. 
This will occur when the spectral sampling rate is less than twice the maximum frequency of oscillation. Much like electrical signals measured in the time domain, the sampling rate must satisfy the Nyquist criterion and any information above the Nyquist frequency is lost. Ideally, increasing the sampling rate is beneficial; however, there is a trade-off in SD-OCT systems. The SD-OCT signal is detected by a spectrometer prior to processing. Spectrometers, as illustrated in figure 2.6, have a limited spectral range ΔΛ due to the finite CCD array size [40]. With a limited number of CCD pixels, increasing the spectral sampling rate will result in a smaller detected spectral bandwidth. If the detected spectral range ΔΛ is too small, the full spectrum of the source is not detected and the axial resolution will be inferior to the theoretical limit. The spectral bandwidth of the system would then be limited by the detection electronics and Δλ in equation 2.6 will be reduced. Since the CCD camera has a finite number of elements (N), the sampling density N/ΔΛ will be lower if ΔΛ is too large [40]. This will result in a decrease in imaging range with no improvement in the axial resolution, as it is now limited by the source. Therefore, if one optimizes the design to achieve a source-limited axial resolution, the imaging range will be governed by the number of sampling points, which is equivalent to the number of pixels on the CCD camera. Thus, excluding the absorption and scattering of the sample, the deepest imaging depth of an SD-OCT system is determined by the spectrometer design.

The spectral range of a spectrometer is related to the pixel spacing, or bins, in the axial spatial domain (z) by [40],

ΔΛ = (1/2)·(λo²/Δpz)    (2.7)

where Δpz is the pixel spacing in the z domain and λo is the center wavelength of the spectrum. This equation shows the inverse relationship between the spectral range and the pixel spacing in the z domain. Choosing a spacing equivalent to half of the theoretical axial resolution Δz will allow reflections separated by Δz to be resolved, as shown in figure 2.7. The coherence function of the source, Γ(z), cannot be distinguished between neighbouring pixels if the pixel spacing is less than Δz/2. Therefore, substituting Δz/2 for the pixel spacing in equation 2.7 maximizes the imaging range while maintaining source-limited resolution,

Figure 2.6: Relationship between spectral sampling and imaging range. Due to the Fourier transform theorem, a larger sampled spectral range ΔΛ will convert to a smaller bin spacing, Δpz, in the z domain. Top: the detected signal range is wider than the source bandwidth, which results in a shallower imaging depth but higher spectral resolution. Middle: the detected signal range is similar to the source bandwidth, a balance between imaging range and resolution. Bottom: the detection bandwidth is less than the source spectrum; imaging depth increases at the expense of axial resolution.

ΔΛ = (1/2)·(λo²/Δpz) = (1/2)·(λo²/(Δz/2)) = (π/(2·ln2))·Δλ    (2.8)

Therefore, if a finite element CCD or photo-diode array is used in the spectrometer with the above spectral range, the axial range of measurement is given by,

lmax = (Δz/2)·(N/2) = (N·ln2/(2π))·(λo²/Δλ)    (2.9)

The division of N by two is due to the conjugate symmetry of the Fourier transform of a real spectrum; only half of the pixels will contain unique information. It can also be seen through the above derivation that there is a trade-off between axial resolution and imaging range if the detector array remains unchanged. This is an important design parameter for the spectrometer detection arm.

Figure 2.7: Axial profile of two closely spaced reflectors. The source coherence function is convolved with the delta functions representing the reflective surfaces. The two surfaces can only be distinguished from each other if the spacing of the pixels in the z domain is greater than Δz/2.
2.4 Sensitivity fall-off

One of the main disadvantages of SD-OCT is the depth dependent sensitivity fall-off [29] that is depicted in figure 2.8. Equal optical power returned by the same reflector positioned at different depths will produce a different signal magnitude after processing. As the reflector is positioned further away from the zero path length difference, its representative signal in the z space is reduced. This phenomenon is named sensitivity fall-off and limits the useful imaging range of SD-OCT systems. The attenuation of the signal is primarily due to the interference fringe washout or “spectral cross talk” at large path length differences, and is dependent on the spectral bandwidth integrated by individual pixels as well as the spectrometer optics [43]. Further attenuation due to the reconstruction method is expected and will be presented in Chapter 5.

To analyse the sensitivity fall-off due to the spectrometer design, let us assume a single reflector in equation (2.4),

I(k) = s(k)·cos(k·Δl)    (2.10)

The light reflected from this surface will interfere with the reference beam, generating an interference pattern of intensity as a function of k. To distinguish the intensity contributions from different wavelengths, they must be physically separated and detected by a photodetector. In order to rapidly produce an A-line, the intensities at the different wavelengths are acquired simultaneously using a linear CCD array as seen in figure 2.2. Depending on the wavelength of the incident light, a diffraction grating will diffract the light into different directions.

The efficiency of the grating also depends on the incident beam diameter. The amount of diffracted energy increases as the number of illuminated grooves increases. Thus an efficient spectrometer setup should have a large incident beam. The beam size remains the same after diffraction and needs to be focused to a CCD pixel for detection.
The focusing is accomplished by the use of an optics system, which transfers the information from the object field to the image field.

Figure 2.8: Illustration of the effect of depth dependent sensitivity fall-off. With a mirror acting as the sample, the reflected power is kept constant while varying the mirror location. Mirror positions representing deeper locations produced smaller amplitudes in the detected reflectivity, even though the reflected powers are the same.

The “object” in the case of an SD-OCT system will be the oscillation in k space, and the imaging plane is the CCD array. The ability of an optical system to transfer a spatial modulation of intensity is called the modulation transfer function. In the case of SD-OCT, the oscillation in k space is distributed into spatial locations by the diffraction grating. The modulation transfer function is defined as [44,45]:

MTF = image modulation / object modulation    (2.11)

Figure 2.9 illustrates the principle of the MTF. When a modulation exists in the object space, it is transferred to the image space by an optical system. Since the optical system is non-ideal, infinitely small points in the object field will be represented by a diffraction limited Gaussian image. When two points in the object are too close together, the resulting Gaussians will blend into each other, rendering them indistinguishable. This occurs when the spatial modulation frequency is too high, causing maxima and minima to be spatially close. The peak to peak amplitude of the oscillation will decrease as the spatial frequency increases, leading to a smaller signal magnitude after a Fourier transform.

For SD-OCT, the oscillation frequency in k space increases as the location of the reflector increases in depth. This corresponds to a higher spatial frequency. Although the amplitude of the oscillation is equal for the same power reflected, the resulting amplitude in the image on the CCD plane is smaller.
Hence, the sensitivity for photons scattered back from deeper within a sample is lowered.

Figure 2.9: Modulation transfer function. Higher spatial frequency in the object space will result in decreased intensity contrast in the image space.

To show the MTF effect on the SD-OCT system, two illustrative cases are presented in figure 2.10. The cosine oscillation due to interference of different wavelengths is focused onto the CCD plane. Depending on the optical system, the resulting focal size of the beam will differ and hence change the MTF. Typically, for a smaller focal beam size, the MTF is greater and the intensity contrast is maintained. The left hand side of figure 2.10 shows a near ideal optical system. The Gaussian beam width is smaller than a detector pixel and its intensity is contained within one pixel. This type of optical system has a high MTF and the intensity modulation contrast is retained. The right hand side shows the same intensity modulation caused by a reflector at the same depth. The optical system in this case, however, has a focused beam size larger than the pixel size. The power of the Gaussian is not fully captured by one pixel. Some intensity is lost in the vertical direction and has spread into neighbouring pixels in the horizontal direction. The detected modulation therefore has a much lower amplitude. Detailed calculations will be derived in chapter 4. However, it can be seen through this qualitative analysis that the design of the focusing optics is critical to minimizing the sensitivity fall-off in SD-OCT systems.

Figure 2.10: The effect of different focusing optics on the detected interference modulation. Left: an ideal case where the focus spot is small and is contained within a pixel. Right: The large focal spot results in a loss of light and spectral cross talk between pixels.
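This qualitative picture can be reproduced with a simple one-dimensional numerical model: a sinusoidal intensity pattern is blurred by a Gaussian focal spot and then integrated over finite pixels. The spot sizes and fringe frequencies below are arbitrary illustrative values, not the actual spectrometer parameters:

```python
import numpy as np

def detected_contrast(fringe_cycles, spot_sigma_px, n_pixels=256, oversample=32):
    """Peak-to-peak contrast of a fringe after Gaussian spot blur and
    integration over finite CCD pixels (a simplified 1-D model)."""
    n = n_pixels * oversample
    x = np.arange(n) / oversample                     # position in pixel units
    fringe = 0.5 * (1 + np.cos(2 * np.pi * fringe_cycles * x / n_pixels))
    # blur by the focused spot (Gaussian point spread function)
    half = int(4 * spot_sigma_px * oversample) + 1
    t = np.arange(-half, half + 1)
    psf = np.exp(-0.5 * (t / (spot_sigma_px * oversample)) ** 2)
    psf /= psf.sum()
    blurred = np.convolve(fringe, psf, mode="same")
    # average the blurred intensity over each pixel, trimming edge artifacts
    per_pixel = blurred.reshape(n_pixels, oversample).mean(axis=1)[8:-8]
    return per_pixel.max() - per_pixel.min()

# more fringe cycles (a deeper reflector) and a larger spot both reduce contrast
for cycles in (16, 48, 96):
    print(cycles,
          round(detected_contrast(cycles, spot_sigma_px=0.2), 3),  # small spot
          round(detected_contrast(cycles, spot_sigma_px=1.0), 3))  # large spot
```

The printed contrast falls with fringe frequency for both spot sizes, but far more steeply when the spot is larger than a pixel, which is exactly the fall-off mechanism sketched in figure 2.10.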
2.5 Dispersion effect

Dispersion within the OCT system will cause different frequencies to propagate with different velocities. This will broaden the interferometric autocorrelation if it is not balanced between the reference and sample arms [40]. Figure 2.11 shows an interference modulation with and without dispersion and its respective axial profile. The signal containing dispersion has oscillatory periods that are not equal, and hence its Fourier transform broadens due to the increased frequency content. Therefore dispersion must be compensated by hardware or software techniques in order to achieve the best resolution. A method of accurately determining the dispersion must be established, as it is an important step for numerical compensation using software. A rapid algorithm for compensation is also needed in designing a real-time SD-OCT system.

Figure 2.11: Dispersion in SD-OCT system. Top: Interference modulation with dispersion. Notice the uneven periods in the signal. Middle: Interference modulation without dispersion. Bottom: The reconstructed axial profiles using the above interference signals. Note the broadened width (lowered resolution) of the signal containing dispersion.

Chapter 3 System design part 1: interferometer, optics and control

In the development of a high quality SD-OCT system, the selection of each component becomes critical. Individual modules are cascaded in a sequential order, each contributing a transfer function that will combine to form the final system response. The final system performance is generally degraded by the components due to non-ideal physical realization, incompatibility, variable spectral response, as well as misalignment. The goal of this part of the project is to select the most suitable components available to realize the best image quality.
As the first of three parts of the system design, this chapter covers the general interferometer design, light source selection, and optical assemblies, as well as the computer control for synchronization of each component. The optics in the system will generally need to accommodate a wide spectral range without attenuating particular wavelengths, which could reduce image quality. The design should also be able to deliver and collect light from the samples over a specific range of lateral scanning. Therefore, careful selection of components is paramount to implementing an SD-OCT system. Control and synchronization must also be well organized for the system to operate reliably. The aim of this chapter is to discuss and provide insight into the choice of each component. Simulation techniques, as well as the tools used for verification, are presented when appropriate. Spectrometer design will be the focus of chapter 4 and processing techniques will be presented in chapter 5.

A schematic of the SD-OCT system is shown in figure 3.1. The source is a broadband superluminescent diode (SLD). A 50/50 coupling ratio Michelson interferometer configuration was used to deliver light to the two arms. A neutral density filter was used in the reference arm to adjust the reflected power. A galvanometer actuated mirror was placed between the collimation and focusing optics for scanning in the sample arm. The detection is accomplished by a custom-built spectrometer, and a computer was used to process the data and control the data acquisition.

Figure 3.1: SD-OCT system setup: SLD - superluminescent diode, 50/50 FC – fused fiber coupler, PC – polarization controller, CL1/2 – 15mm collimation lens, NDF – neutral density filter, FL1/2 – 30mm achromatic focusing lens, CL3 – 75mm achromatic collimation lens, ASL – 4 element 100mm air-spaced lens, DAQ – data acquisition board.
3.1 Light source

An important part of the OCT system is the light source that generates the probing beam for imaging. As mentioned in chapter 2, OCT imaging requires a low-coherence source with a broad bandwidth, and hence a short coherence length, to produce micrometer resolution. Another aspect to be considered is the center wavelength, which governs penetration depth.

In general, the penetration depth of light increases with wavelength. A longer wavelength penetrates a sample deeper than its shorter counterparts. It is, however, also important to consider the absorption spectra of the samples being investigated. Biological samples are the focus of most OCT systems, in which water is a main constituent of the cellular matrix and extracellular fluid. As well, hemoglobin makes up a large part of the blood in the circulatory system that oxygenates many human organs [46]. Therefore it is important to consider these factors when choosing the center wavelength for in-vivo imaging. Shown in figure 3.2 is a plot of the absorption versus wavelength of several common components of human tissue, including de-oxy and oxy hemoglobin, water and lipid. Overall absorption is minimal at around 800-850nm, and this imaging window is one of the commonly chosen ranges for biological imaging [14].

Figure 3.2: Absorption spectrum in the near-infrared wavelength of typical components of biological samples [47].

Another center wavelength common for OCT imaging is 1310nm. Although water absorption is higher in this wavelength range compared to 800nm, the penetration depth is much deeper due to reduced scattering. In addition, a wide range of optical components are readily available due to the development and use of this wavelength range for telecommunication. OCT systems using 800nm are typically used to image the retina, where absorption due to the water component of the vitreous in the eye is dominant.
For other samples, 1310nm is usually preferred because the reduced scattering overcomes the effect of absorption, allowing light to penetrate deeper. The objective of this project, as stated in chapter 1, is to develop an SD-OCT system that could potentially be integrated with multiphoton microscopy, which utilizes wavelengths near 800nm. Therefore, the light source chosen is a broadband source near the 800nm range.

An SLD is one approach to generating broadband, high power light in a single spatial mode. An SLD combines laser-diode-like output power with the broad bandwidth of a light emitting diode (LED) [42]. It consists of a PN junction and an optical waveguide with a very high gain medium [42]. Unlike traditional lasers, SLDs do not have a resonance cavity and ideally have no feedback at the end of the active region. An SLD emits light through amplified spontaneous emission; photons are released at the PN junction and experience gain through the gain medium. Since SLDs have high optical gain, small reflections from the end facet can cause parasitic Fabry-Perot modulation in the optical spectrum or cause damage to the SLD [42]. Typically the output of the SLD is coupled to a fiber by the manufacturer. The position of the fiber is angled to avoid Fresnel reflection at the fiber interface. For optical systems with large optical feedback, the addition of an optical isolator will be needed to avoid SLD damage or operational lifespan reduction [42].

An SLD from Superlum was chosen for its turnkey operation, with minimal required user intervention such as alignment and tuning. It produces an optical output of up to 5mW, with a center wavelength of 845nm and a spectral FWHM bandwidth of 45nm as shown in figure 3.3. Using equation (2.6), the source limited axial resolution is calculated to be ~7µm in air.

Figure 3.3: Left: Superlum SLD-371 spectrum with FWHM bandwidth and central wavelength indicated [42].
Right: SLD spectrum measured with ANDO AQ6135A optical spectrum analyzer, measured center wavelength = 844.68nm, measured FWHM bandwidth = 45.5nm.

3.2 Interferometer

Based on interferometry, SD-OCT reconstructs the depth profile of a sample from interference fringes. The Michelson interferometer can be constructed in free space or with the use of a fiber coupler. The fiber based version has the advantage of being ready to use, whereas extra alignment would be needed for the free space alternative implemented with a beam splitter. Alignment would also be a factor if the interferometer is repositioned or relocated to accommodate integration with other modalities. The fiber based Michelson interferometer offers much more mobility and flexibility than the free space version. Positioning of the interferometer can be altered very easily because light follows the path of the fiber. Since the SLD is already fiber coupled, it is natural to select the fiber version for these advantages.

The coupling configuration was chosen to be 50/50. The light is split evenly between the reference arm and sample arm by a fiber coupler. Reflected light from the two arms recombines in the coupler; 50% is directed to the spectrometer arm, while the other 50% is transmitted back to the SLD, where it is blocked by the optical isolator to prevent damage and feedback to the SLD. The fiber coupler was chosen to have a flat broadband response to reduce any attenuation of wavelengths in the bandwidth of interest. The fiber coupler used in the configuration has a center wavelength of 850nm and an operating bandwidth of 80nm. It uses a single mode fiber and has a mode field diameter of 5.4µm, a cladding diameter of 125µm and a numerical aperture of 0.10-0.14.

Interference can only occur for components of polarization that are parallel. In order to maximize the interference effect in the Michelson interferometer, the polarization states of the reference and sample beams must be matched.
Birefringence in the fiber optics and the sample can change the polarization state of the electric field; therefore fiber polarization controllers were added to both arms to control the polarization. Alternatively, one can employ polarization maintaining fibers, but these types of fibers are not commonly available in the 800nm wavelength range and they cannot accommodate the broad bandwidth required by OCT.

The polarization controller utilizes stress induced birefringence [48]. The controller consists of three independent spools or loops in which the fiber sits. By applying pressure to the fiber, the birefringence properties are altered. By inserting these controllers in the interferometer arms, one can adjust the polarization to ensure a good match between the two fields to create a high quality interference fringe.

3.3 Sample arm

The sample arm contains the transverse scanning mechanism and focusing optics. It is responsible for transmitting and receiving light between the sample and the system. Therefore it is important to choose components that will provide the necessary scanning range, transverse resolution, and scan speed. The current design supports scanning in only one direction (x), which provides data for a cross-sectional image of the sample. With the use of optics symmetrical about both the x and y axes, the system is easily modifiable to two-axis scanning. Figure 3.4 shows the schematic for the sample arm optics. In the sample arm, light emerging from the fiber is collimated prior to scanning. A collimated beam is easier to manipulate, redirect and focus than a diverging beam. A scanning mirror redirects the beam into different spatial locations before it is focused onto the sample by a focusing lens.
A large mirror will act as a heavy load on the galvanometer, increasing settling time and lowering scan speed. The mirror, however, must be large enough to accommodate the collimated beam. A larger collimated beam can be focused to a smaller spot on the sample which translates into a better transverse resolution.  For a fast scanning system with a good transverse resolution, the scanning mirror size and the focal length of the focusing lens should both be minimized. The size of the mirror will need to accommodate the incident beam size, which is determined by the collimated lens. From catalogues of off-the-shelf optics, the shortest focal length available was 15mm for a standard 12.7mm (half inch) diameter lens. Using equations from Gaussian optics [49], the collimated Gaussian beam diameter is determined to be 2.98mm as illustrated in figure 3.5.  35  wo' =  λo 845nm 15mm = 1.4945mm f = woπ 2.7 µ m ⋅ (π )  (3.2)  wo is the Gaussian beam waist (radius) of the beam, λo is the center wavelength and f is the focal length of the lens. The Gaussian beam waist is taken to be the distance from the peak center to where the intensity of the beam has decreased to 1  e2  of its maximum  intensity at the peak. The power contained within the circle of radius w contains 86% of the beam power. The commonly used FWHM width of a beam can be found by converting using the equation  w = 0.84932 ⋅ FWHM  (3.3)  The FWHM width of the collimated beam is therefore 1.759mm.  A galvanometer is used to actuate the mirror for scanning the beam over the sample. The mirror used to deflect the beam should be larger than the beam diameter to avoid clipping and the loss of optical energy. The orientation of the mirror is set at 45° with respect to the incident beam as shown in figure 3.5 with a rotation of ±10° mechanical, corresponding to an angle range of 35° - 55°. The ±10° mechanical angle was chosen as recommended by the manufacturer for a fast scan cycle. 
The beam footprint on the mirror becomes elliptical due to this tilting. Using simple trigonometry and the Gaussian beam size, it is calculated that the elliptical footprint is 4.22mm at 45° and reaches a maximum size of 5.21mm at the two extremes. Since this calculation is based on a Gaussian beam size that only contains 86% of the energy, the mirror is chosen to have a slightly larger standard size of 7mm. This ensures the full beam is contained within the mirror for scanning.

Figure 3.5: Sample arm optics showing Gaussian beam size and lens specification

The next step is to choose an appropriate focusing lens to deliver the beam to the sample. Unlike other imaging modalities, the lateral and axial resolutions of SD-OCT are independent. The axial resolution is a function of the source bandwidth, and the lateral resolution is dependent on the focal length of the lens. However, since SD-OCT obtains the full axial depth structure simultaneously, it is still important to consider the focal range of the Gaussian beam. The schematic of a Gaussian beam and the relationship between the lateral resolution and depth of focus is shown in figure 3.6. It is apparent that a narrower beam waist will also have a decreased focal range. Some researchers have developed post-processing algorithms with deconvolution to reduce this effect [50,51], while others have tried to improve the optical setup with dynamic focusing [52] or special lens designs [53,54]. Most SD-OCT systems in the literature, however, still use simple optics for focusing because the aforementioned methods increase system complexity and in general reduce imaging speed. Although the depth of focus is generally shorter than the full imaging range, SD-OCT using a simple Gaussian probing beam can still provide a reasonable image.

Figure 3.6: Gaussian beam shown with its Gaussian waist and depth of focus.
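The beam-size numbers quoted in this section can be checked with a few lines; the parameter values are those given above (fiber mode-field radius, collimating lens focal length, mirror angles):

```python
import math

lam0 = 845e-9      # center wavelength
w_fiber = 2.7e-6   # fiber mode-field radius (5.4 um mode field diameter / 2)
f_col = 15e-3      # collimating lens focal length

# equation (3.2): collimated beam waist (radius)
w_col = lam0 * f_col / (math.pi * w_fiber)
print(2 * w_col * 1e3)                        # ~2.99 mm beam diameter
# equation (3.3): w = 0.84932 * FWHM
print(w_col / 0.84932 * 1e3)                  # ~1.76 mm FWHM

# elliptical footprint on the scan mirror at 45 deg and at the 55 deg extreme
d = 2 * w_col
print(d / math.cos(math.radians(45)) * 1e3)   # ~4.2 mm
print(d / math.cos(math.radians(55)) * 1e3)   # ~5.21 mm, below the 7 mm mirror
```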
In consideration of the lateral scanning, the lens must have a large enough aperture to capture the beam. It should also be chosen properly to give a reasonable transverse resolution. Taking into account the incident and reflected angles, the ±10° mechanical rotation converts to ±20° optical. The aperture size requirement will depend on the focal length of the lens and the ±20° deflection. The aperture size as a function of focal length is given as:

Aperture diameter = 2·(f·tanθ)    (3.4)

where f is the focal length and θ is the angular offset from the center position of the lens. A longer focal length will result in a wider transverse scanning range and would require a larger lens aperture. Due to the broad bandwidth of the light source, an achromatic lens, which provides chromatic correction, is a good candidate. An achromatic lens with a short focal length and aperture combination that can accommodate the scanning was found using off-the-shelf optics catalogues. A large 25.4mm diameter lens with a focal length of 30mm from Thorlabs was chosen. This configuration gives a transverse scanning range of 21.83mm using equation 3.4. This design also yields a probing beam diameter of ~11µm and a focusing range of ~217µm based on the geometry presented in figure 3.6.

3.4 Reference arm

The reference arm is used to provide a path length reference. All subsequent calculations and image reconstruction processes are based upon this frame of reference. Therefore the ability to fine tune and adjust the reference path length is very important. Light diverging from the fiber must first be collimated into a parallel beam that does not diverge or converge when the propagation distance is changed. Since the diameter of this beam is not critical, the same lenses installed in the sample arm were used to avoid dispersion mismatches.
Therefore, diverging light from the pigtail fiber end of the coupler is collimated by a lens of 15mm focal length, resulting in a beam of 2.9mm diameter.

Figure 3.7: Reference arm optics; the components within the dashed box are mounted on the same micrometer stage to allow for simultaneous movement.

The collimated beam needs to travel some distance before being directed back into the fiber, and this distance should be adjustable to accommodate any changes in the sample arm as well as the sample size. Rudimentary SD-OCT systems would employ a simple flat mirror for this purpose. However, to eliminate the need for numerical compensation of the dispersion imbalance between the two arms, a focusing lens identical to the one used in the sample arm was placed in the path of the reference beam, as shown in figure 3.7.

In order to change the reference path length between samples, the silver mirror and focusing lens (for dispersion balance) are mounted on a Newport linear stage with a Vernier graduation of 1µm. The path length adjustment is done on the collimated section of the beam, which ensures that the focus of the beam does not change location. The reference beam power returned to the spectrometer is typically much larger than that from the sample arm. In most cases, the intensity from the reference arm can saturate the sensitive CCD detector array. Therefore, a continuously variable neutral density filter was added in the beam path for adjustment of the reference power.

3.5 Data acquisition and control

The synchronization and precise control of all components is key to an artifact-free OCT image. In order to construct a two dimensional cross sectional image of the sample, the probing beam must be steered across the sample to acquire multiple A-lines. During the integration period when the spectrometer gathers data for an A-line reconstruction, the scan mirror must remain stationary, allowing for the capture of reflected photons from the sample.
Any movement will affect the number of photons integrated by the CCD, specifically mixing the reflected signal from adjacent positions as well as reducing the number of photons integrated from the intended A-line [31]. Hence, the movement of the scanning mirror and the acquisition of the CCD camera must be coordinated to avoid degradation of the lateral resolution and reduction of the SNR.

There are two hardware modules that the SD-OCT controller must be able to manage: the acquisition and the lateral movement. The first is accomplished by the use of a CCD camera, which must be linked to the computer by an interface. Lateral scanning movement, on the other hand, is accomplished by a galvanometer actuated mirror. It is controlled by a pre-calibrated controller board in a closed-loop fashion. The desired position of the mirror corresponds to an analog voltage provided to the controller board. Thus, aside from the usual input and output devices such as a keyboard, mouse and monitor, the SD-OCT workstation will need the ability to output an analog voltage waveform and receive the acquired data from a CCD camera.

3.5.1 Camera

An important part of a high quality spectrometer is the photodetector. Specifically, the CCD must have a high responsivity in the same spectral region as the light source. As well, the pixel size of the camera must be chosen to match the spectral sampling rate of the spectrometer. These parameters will be discussed in more detail in the next chapter. For the purpose of this chapter, the camera must be able to interface with the computer and have the capability to transfer the data at a fast rate. A 1x1024 pixel linescan CCD camera (SM2 CL1014) from Atmel/E2V was chosen. The camera contains a 12-bit analog-to-digital converter (ADC) that digitizes the analog signal to 4096 levels. The maximum line rate for this camera is 53 kHz, corresponding directly to 53,000 A-lines acquired per second.
This speed is obtained by setting the camera integration time to its minimum of 18µs in free run mode; a longer integration time will result in a longer cycle and a slower line rate.

For the purpose of synchronization, the camera can generate two trigger signals called horizontal synchronization (HSYNC) and vertical synchronization (VSYNC). The HSYNC signal is asserted after every line of an image and the VSYNC is pulled up at the end of a full 2D frame. However, since the camera used in SD-OCT is a linescan camera, the image will only consist of a single line. Therefore the VSYNC signal is undefined for the linescan camera.

3.5.2 Frame grabber

To programmatically control the camera and to save data, the camera must be connected to a computer via a frame grabber. The frame grabber is an expansion card that fits into the computer chassis. Since the workstation only has two PCI slots available, a PCI version of the frame grabber board was chosen at the time of purchase. For ease of synchronization, both the frame grabber board and the analog output board for the galvanometer control were chosen from National Instruments. All National Instruments boards come with a Real Time System Integration (RTSI) port that allows for the communication and synchronization of multiple boards via a ribbon cable. The data flows from the camera to the memory via a chain of components with different transfer protocols. Therefore it is vital to consider each stage to determine the bottleneck that limits the bandwidth.
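A rough bandwidth budget illustrates why this matters. Assuming each 12-bit sample occupies two bytes on the bus (an assumption about the transfer format, not a documented figure), the camera at full line rate generates:

```python
line_rate = 53_000       # A-lines per second, camera maximum
pixels = 1024            # samples per A-line
bytes_per_pixel = 2      # 12-bit ADC samples assumed padded to 16 bits

throughput = line_rate * pixels * bytes_per_pixel
print(throughput / 1e6)  # ~108.5 MB/s
```

This is a large fraction of the ~133 MB/s theoretical burst rate of a 32-bit/33 MHz PCI bus, so per-transfer overhead can easily become the limiting factor.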
In this transfer scheme, it was experimentally determined that the Aline period is limited by the PCI transfer to 120µs per A-line. This is due to the overhead of each individual PCI transfer, which must assert the transfer signal and wait for the shared PCI to free up [55]. The data blocks transferred were also too small to utilize the full potential of the PCI bus. The resulting transfer rate is approximately 8.3kHz and is significantly less than the PCI burst rate limit as well as the camera specification.  To solve this problem, the camera file was altered to “trick” the frame grabber board into recognizing the linescan camera as a traditional 2D camera. The frame grabber setting was tuned to receive a 1024x1024 pixel array of data from a 2D camera. This caused the frame grabber to accumulate two frames (2x512 A-lines), a total of 2MB of data, before transferring over to the system memory. With this configuration, the A-line speed reached its limit of 53 kHz as specified by the camera manufacturer. It is important to  42  note that with this organization, the HSYNC is asserted after every A-line and a Frame Start is asserted after each frame of 512 A-lines. This produces two triggering signals that can be used for synchronization with the analog output module. The resulting transfer scheme is summarized in figure 3.8. The new modified transfers scheme reduces the number of transfer and frees up the PCI bus for use by other peripherals.  Figure 3.8: A-line acquisition and triggering signals. Top: Linescan configuration in the frame grabber, Bottom: Altered 2D configuration  3.5.3 Galvanometer control using analog waveform (data acquisition board)  The control of the galvanometer actuated mirror is accomplished through the use of a voltage waveform input to its controller board. The 677xx single axis control board is a closed-loop control system that uses the angular orientation of the galvanometer as feedback. 
The controller is pre-calibrated from the manufacturer (Cambridge Technologies), each voltage inputs between ±10V is converted to ±10 mechanical degrees of rotation of the galvanometer by a linear relationship. Therefore by exporting a triangular voltage waveform with miniature steps, the galvanometer will scan through its range in a linear manner. The goal of this discussion is to make sure the movement and acquisition is synchronized.  43  Figure 3.9 illustrates a typical waveform controlling the galvanometer. The scanning range is user defined in a graphical interface and is converted to individual voltage steps based on the number of A-lines. Initially, the galvanometer is driven slowly from its origin of a zero degree offset to a negative minimum voltage before an image is taken. This prevents a large abrupt change of position, protecting the galvanometer from being damaged. Two frames are captured during one triangular period: a forward scan starting from the negative to positive angular position and a backward scan that returns the waveform to its minimum negative value.  Figure 3.9: Galvanometer controlling waveform and its associated trigger. Positive voltage denotes an anticlockwise rotation and negative voltage symbolizes a clockwise rotation.  The specification of the analog output board must meet the scanning requirements of the SD-OCT system. It should have the ability to generate a waveform that can scan the beam over a 21.83mm (±10° mechanical) range with incremental movements smaller than the Gaussian beam width. Without the capability to generate this resolution and range, the SD-OCT scan mechanism will be limited.  44  The S series PCI 6115 board was selected from the National Instruments. It has great potential for future expansion with two analog output channels and four high speed independent analog inputs. 
The onboard 12bits digital-to-analog convertor (DAC) converts the output range of ±10V to a resolution of 4.9mV following the relation:  Resolution =  Voltage range 20V = 12 = 0.0049V DAC levels 2  Recall from the previous section that the 1  e2  (3.5)  lateral resolution of the Gaussian beam is  approximately 11µm. Using small angle approximation in the geometry of figure 3.4, the mechanical angle of rotation required to move the beam by 11µm can be calculated by  θ mechanical =  θ optical 2  =  1  11µm  tan −1   = 0.0105° 2  30mm   (3.6)  Since the mechanical rotational position is directly proportional to the voltage input, 0.0105° would correspond to 10.5mV. The result confirms that the DAQ board can steer the beam at increments smaller (0.5x) than the required minimum. However, for some applications and extensions of OCT such as complex full range OCT, it is beneficial to oversample in transverse sample location. In other words, it would be good to obtain a finer voltage resolution that allows the scan increments to be smaller. An external voltage divider was developed by fellow undergraduate student Arthur Cheung to allow for the smaller step size. The divider has a continuous output-input ratio from 0.2 to 0.5 thus allowing for a resolution down to approximately 1mV (0.001° mechanical or 0.002° optical) or a scan increment of 1.047µm.  The synchronization of the voltage output and the camera acquisition is coordinated using the RTSI platform from National Instruments. The RSTI cable allows for direct routing of signals between multiple peripherals without the use of the PCI bus. The frame start trigger from the frame grabber initiates the waveform output. A new discrete voltage step is generated for each HSYNC trigger and remains unchanged during the integration time  45  of one A-line. The waveform has an overall shape of a periodic triangle with fine steps as seen in figure 3.9.  
3.5.4 Summary of control flow and trigger

Installed on the computer are two National Instruments cards, namely the PCI-1426 frame grabber and the PCI-6115 DAQ, illustrated in figure 3.10. The boards are connected through an RTSI ribbon cable for synchronization. Acting as the master, the frame grabber generates two triggering signals for use by the DAQ in producing the galvanometer controlling voltage waveforms. One trigger produced at the start of the frame initiates the analog voltage output, and any subsequent updates to the voltage are activated by a signal generated at the end of each A-line. A frame consists of a variable number of lines, ranging from 1 to 512.

Figure 3.10: Control signals of the SD-OCT system. The camera produces a synchronization pulse after each exposure that is redirected to the DAQ board by the frame grabber. The DAQ board uses this triggering signal as an update signal for the galvanometer controlling voltage waveform.

Chapter 4 System design part 2: spectrometer design

SD-OCT is based on two beam interferometry, where the interference fringes are collected in the spectral domain by the use of a spectrometer. The most common configuration of a spectrometer is to use a dispersive element to separate the wavelength components in a predefined manner in 1D space. The separated components are detected by a photodetector and the spectrum is mapped out with its corresponding intensity. The design of the spectrometer is considered one of the most important objectives in an SD-OCT system. Each of its parameters can have a dramatic effect on overall system performance. The axial resolution, imaging range and sensitivity fall-off are all dependent on the spectrometer's design.

The optics of the spectrometer will determine its spectral sampling rate, which affects the imaging range and axial resolution.
Another design parameter that must be considered is the sensitivity fall-off, which is caused by the inability of a lens system to transfer a modulation in the object space to the image space. This transfer is generally described by the modulation transfer function (MTF) as discussed in chapter two. The MTF can be improved by reducing the optical aberration and the focal spot size of the spectrometer, which is the main design problem discussed and analysed in this chapter.

4.1 Configuration and setup

Most OCT systems in the current literature are implemented using a Czerny-Turner style spectrometer built with refractive optics. At the detection arm, the light must be collimated from a diverging beam emanating from the single mode fiber. To distinguish between the intensities contributed by different wavelengths, they are physically separated by the grating and sensed by a photodetector. In order to rapidly produce an A-line, different intensities are acquired simultaneously using a linear CCD array as seen in figure 4.1. Note that this figure shows the use of the transmission diffraction grating, where the diffracted beams are emitted at the side opposite to the incident beam.

There are four main components in the spectrometer: the collimation lens, diffraction grating, the focusing lens and the CCD array. Each of these components can be designed and tuned to accommodate specific needs in SD-OCT. The goal of this part of the project is to select the most suitable components available to realize the best image quality.

Figure 4.1: Spectrometer layout for the SD-OCT system. There are four important components: collimation lens, diffraction grating, focusing lens and the CCD camera.

4.2 Theory of sensitivity fall-off

From the theory of two beam interferometry, the interference fringe is a function of the wavenumber k.
The measurable oscillation, assuming a single reflector, is given as

I(k) = s(k) · cos(k∆l)    (4.1)

where s(k) is the spectral intensity distribution of the light source and ∆l is the path length difference between the reference mirror and the reflector. The fringe visibility, or the amplitude of the cosine oscillation, directly determines the signal strength in the z domain after a Fourier transform. As defined by equation 4.1, the cosine amplitude should not change as ∆l is varied. However, experimental results show that as the cosine frequency increases (an increase in ∆l and hence an increase in the argument of the cosine), the fringe visibility decreases. This leads to a decrease in sensitivity to waves reflecting from deeper within the sample (greater path length difference). This phenomenon is due to the physical implementation of the spectrometer.

As depicted in figure 4.1, the cosine term is made measurable by physically separating and deflecting different wavelength components in a spatially defined way. The focusing lens focuses the light and allows the CCD to sample the cosine in the k domain with a finite number of pixels. This process can be thought of as passing the light through two systems with different impulse responses. Considering a single wavelength, the focusing lens converts the single point (a delta function) into a Gaussian shape of finite width, which is due to the diffraction limit of the optical system. The CCD pixel then integrates the intensity of light over its receiver area, which can be thought of as imposing a rect function on each Gaussian. The effect on the signal fall-off after the Fourier transform into the z domain is depicted in figure 4.2. Each sample point of the red interference fringe is degraded into a spot with a Gaussian profile before being integrated by the pixel of width ∆x, which is effectively a convolution with a rectangular function Π.
After the Fourier transform, the narrow Gaussian response of the focusing lens transforms into a wide Gaussian spreading over the full imaging range, and the pixel's rect function transforms into a sinc function. Both of these contribute to the sensitivity fall-off in SD-OCT systems, as discussed previously in chapter two.

Figure 4.2: Effect of pixel width and Gaussian beam width on signal fall-off; the red cosine modulation is Fourier transformed into red peaks in the z domain; the rect function transforms into a sinc function and the Gaussian transforms into another Gaussian in the z domain. The fall-off effects have been emphasised in this figure.

Assuming the spectrum is distributed evenly in k space across the CCD array, an analytical expression of the sensitivity fall-off relating pixel size, PSF width and the dispersion of the grating in a spectrometer is summarized in the relation [39, 57],

R_spectrometer(z) = [sin(δx·P·z) / (δx·P·z)] · exp(−a²P²z² / (4 ln 2))    (4.2)

where R is the sensitivity fall-off factor, δx represents the size of the CCD pixel in the dimension of spectral dispersion, P is the reciprocal linear dispersion and a is the FWHM size of the focused beam. The sinc function in equation 4.2 is the Fourier transform of the square pixel shape, and the Gaussian is the result of the shape of the focused spot on the CCD. Since δx, P and a are fixed once the spectrometer is designed, this function is dependent only on the path length difference z for a specific system. Note that the sinc expression is a function of z but the Gaussian is a function of z², implying that the Gaussian function dominates as z increases.

The reciprocal linear dispersion P in equation 4.2 can be derived from the grating equation [49], written here for the 1st order (m=1),

d (sin θ_d + sin θ_i) = λ    (4.3)

where d = 1/g is the spacing between adjacent grooves on the grating, θ_d is the diffracted angle, θ_i is the incident angle and λ is the wavelength.
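Equation 4.2 is easy to evaluate numerically. In the sketch below, a unit convention is assumed: P is expressed in wavenumber per unit distance on the CCD, so that δx·P·z is dimensionless; the sample values in the usage lines are illustrative, not the thesis's design numbers.

```python
import numpy as np

def sensitivity_falloff(z, pixel, P, a):
    """Sensitivity fall-off R(z) of equation 4.2.

    z     : path length difference (m), scalar or array
    pixel : CCD pixel width delta-x in the dispersion direction (m)
    P     : dispersion expressed as wavenumber (rad/m) per metre of CCD,
            so that pixel * P * z is dimensionless
    a     : FWHM of the focused spot on the CCD (m)
    """
    arg = pixel * P * z
    # np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi;
    # this also handles z = 0 without a division warning
    sinc = np.sinc(arg / np.pi)
    gauss = np.exp(-(a * P * z) ** 2 / (4 * np.log(2)))
    return sinc * gauss

# Illustrative numbers: 14 um pixels, 14 um FWHM spot, P ~ 6.3e7 rad/m^2
r = sensitivity_falloff(np.array([0.0, 0.5e-3, 1.0e-3]), 14e-6, 6.3e7, 14e-6)
```

As the text notes, the Gaussian factor carries z², so for this parameter set the exponential term increasingly dominates the decay at larger depths.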
Replacing d with the groove density g and taking the derivative with respect to θ_d:

dλ/dθ_d = cos(θ_d) / g    (4.4)

For a small angle change dθ_d, the change in the x coordinate can be approximated as

dx = f · dθ_d    (4.5)

Substituting dx into equation 4.4, the reciprocal linear dispersion is expressed as:

P = dλ/dx = cos(θ_d) / (f·g)    (4.6)

Here, dλ/dx is the change in wavelength per unit change in the x direction of the CCD plane, and f is the effective focal length of the focusing optics.

4.2.1 Fall-off due to pixel size (sinc function)

Analyzing equation 4.2, the dependence of sensitivity fall-off on each parameter can be extracted. Note that the sensitivity fall-off is defined as the decrease in sensitivity as a function of increased path length difference in the z domain. Considering the sinc function, its argument varies with respect to the pixel width and the reciprocal linear dispersion. As either the pixel width or the reciprocal linear dispersion decreases (an increase in linear dispersion), the sinc function decreases more slowly with respect to z. Consequently it is beneficial to have a small pixel size and a large linear dispersion. However, recall that the linear dispersion is already defined for a spectrometer designed to implement a source limited axial resolution SD-OCT system. The equation for the detectable spectral range from equation 2.8 is written as,

∆Λ = (π / (2 ln 2)) · ∆λ    (4.7)

in which ∆λ is the source FWHM bandwidth. Thus the linear dispersion must be designed to spread the spectral range ∆Λ over the CCD array, whose dimension is directly related to the pixel width. The array size of the CCD is given as (δx·N), where δx is the pixel width and N is the number of pixels in the CCD array. Assuming the same number of pixels, an increase in pixel size will correspond to a need to increase the linear dispersion. Therefore the reciprocal linear dispersion is inversely proportional to the pixel size.
The choice of pixel size therefore will not alter the sinc factor in the sensitivity fall-off if the system is designed to capture the same spectral range.

At the time of purchase in 2008, the most suitable CCD cameras had 1024 pixels of either 10x10µm² or 14x14µm² size. The effect of the sensitivity fall-off is plotted in figure 4.3 by calculating the needed linear dispersion for each pixel size. A source limited axial resolution design using either camera results in the same fall-off.

Figure 4.3: Sensitivity fall-off of the sinc component for a 1024 pixel camera capturing a spectral range of 101.3nm centered at 845nm.

4.2.2 Fall-off due to spot size (Gaussian function)

The second component in the fall-off equation is a Gaussian function that varies with the FWHM diameter of the focused spot as well as the reciprocal linear dispersion. To minimize the effect on fall-off, both of these variables should be reduced. A focusing lens with a shorter focal length will produce a smaller spot size and is beneficial in reducing fall-off. The focal length of the focusing optics is, however, inversely related to the reciprocal linear dispersion as described by equation 4.6. In order to minimize both variables, a grating with a high groove density g should be used. It should also be noted that a in equation 4.2 is the averaged spot size for each wavelength over the CCD array. Aberrations in non-ideal optics can introduce distortion and increase the spot diameter. This will cause the MTF to decrease and hence reduce the fringe visibility as the oscillation frequency increases.

The spot size can also be altered by using a longer focal length collimation lens. This will result in a larger beam, which can be focused down to a smaller spot according to Gaussian optics. It is important, however, to design a beam size that fits within the aperture window of the grating to avoid any loss of light and subsequently a decrease in spectrometer efficiency.
In equation 4.2, it can be seen that the sensitivity fall-off due to the Gaussian factor, unlike the sinc component, is not restricted by another system parameter. Therefore most of the design work in an SD-OCT spectrometer is concentrated on increasing the MTF by reducing the spot size and any associated aberrations. Plotted in figure 4.4 is the sensitivity fall-off due to the Gaussian factor for a range of common spot sizes.

Figure 4.4: Sensitivity fall-off due to the Gaussian factor for a range of average spot sizes using a 14x14µm² pixel CCD.

4.3 Simulation of interference fringe generation and sensitivity fall-off modelling

The expression given in equation 4.2 is an analytical equation derived from experimental data. It is a simplified version with the assumption of evenly sampled k values. However, a grating diffracts light into different directions based on its wavelength. This further affects the sensitivity fall-off in a way that is not addressed by equation 4.2. In order to obtain a more accurate representation of the sensitivity fall-off, the generation of the detected interference fringes is needed. The interference fringes can also be used for the comparison of processing methods, which was not possible with the result of the analytical equation.

Consider the relationship between wavelength and the angle of diffraction in a grating:

d (sin θ_m + sin θ_i) = mλ    (4.8)

where m is an integer representing the order number, d is the spacing between adjacent grooves on the grating, θ_m is the diffracted angle of the mth order, θ_i is the incident angle and λ is the wavelength. This means that the interference fringe will not be linearly distributed in the k domain, as there is an inverse relationship between the two variables:

k = 2π / λ    (4.9)

Therefore the oscillation is not sampled at evenly spaced intervals. In order to see the effect of the non-evenly distributed spectrum on the sensitivity fall-off, another model must be used for evaluation.
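The non-uniform sampling in k is also why SD-OCT reconstruction needs a resampling step before the FFT, as noted in the abstract. A minimal sketch of that step is shown below; the wavelength limits and the use of simple linear interpolation are assumptions for illustration (higher order interpolation is typically preferred near the Nyquist rate).

```python
import numpy as np

# Spectra are detected (approximately) uniformly in wavelength, but the
# FFT requires samples uniform in wavenumber k = 2*pi/lam.
lam = np.linspace(795e-9, 895e-9, 1024)      # wavelength samples (m)
spectrum = np.cos(2 * np.pi / lam * 2e-4)    # fringe for delta-l = 0.2 mm

k = 2 * np.pi / lam                          # non-uniform, descending grid
k_uniform = np.linspace(k.min(), k.max(), k.size)

# np.interp needs ascending abscissae, so reverse the arrays first
resampled = np.interp(k_uniform, k[::-1], spectrum[::-1])
```

After this step the fringe has a single, constant oscillation frequency in the sample index, so `np.fft.rfft(resampled)` yields a sharp depth peak.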
Considering the light integrated by the CCD element x_j of the detector array as illustrated in figure 4.5, this results in the expression [43]:

I(x_j) = ∫₀^∞ ∫₀^A h(x, y, k) · s(k) · cos(k∆l) dA dk    (4.10)

Recall from the previous section that s(k) is the spectral intensity distribution of the light source and ∆l is the path length mismatch between the two arms of the interferometer. Each single wavelength can be thought of as being focused to a 2D point spread function (PSF), h(x,y,k), on the detector element as seen in figure 4.5. The components x and y are the spatial locations on the pixel array. The variable for integration is defined by dA = dx·dy and A is the area of one pixel.

Figure 4.5: Graphical interpretation of the PSF. The spectrum is detected by a linear array of finite sized CCD pixels. Each pixel integrates the light within its area. PSF is the point spread function of the beam with wavelength focused at the center of the CCD pixel.

The diffraction gratings used in spectrometers distribute the spectrum in the x dimension. Assuming the spectrum is aligned to be at the center of the pixel in the y direction, a PSF centered at wavenumber k_i is integrated by pixel i of the array. Readers should note that "spectral crosstalk" occurs because the PSF has finite size. The PSF of a single wavelength may not fit into a single pixel area and its intensity contribution can spread to neighbouring pixels. The relationship between k and x, which is the distribution of the spectrum over the plane of the CCD, can be represented by [43]:

x(k) = f · [ sin⁻¹(2πg/k − πg/k_c) − sin⁻¹(2πg/k_o − πg/k_c) ]    (4.11)

where g is the groove density of the grating, k_o is the first wavenumber detected at the zero coordinate, k_c is the center wavenumber, and f is the effective focal length of the focusing optics.
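Equation 4.11 can be evaluated directly. The grating density and focal length below match components chosen later in this chapter (1200 lines/mm, 100 mm focusing lens, 845 nm center wavelength), while the detected wavelength limits are assumptions for this sketch.

```python
import numpy as np

g = 1200e3              # groove density (lines per metre)
f = 100e-3              # effective focal length of the focusing optics (m)
k_c = 2 * np.pi / 845e-9   # center wavenumber
k_o = 2 * np.pi / 895e-9   # first detected wavenumber (longest wavelength)

def x_of_k(k):
    """Position on the CCD plane for wavenumber k (equation 4.11),
    measured from the pixel where k_o lands."""
    return f * (np.arcsin(2*np.pi*g/k - np.pi*g/k_c)
                - np.arcsin(2*np.pi*g/k_o - np.pi*g/k_c))
```

Evaluating `x_of_k` on a uniform k grid shows the spacing between adjacent wavenumbers is not constant across the CCD, which is exactly the non-uniform sampling that the simulation in the next section accounts for.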
Therefore the contribution of k_i at an arbitrary pixel j can be given, for a Gaussian beam, as a normalized Gaussian distribution by replacing k_i with x_i:

h(x, y, x_i) = (4 ln 2 / (π a²)) · exp[ −(4 ln 2 / a²) · ( (x − x_i)² + y² ) ]    (4.12)

where a is the FWHM diameter of the focused beam PSF. This FWHM diameter is not constant over the full spectral range due to optical aberration of the focusing optics and the wavelength dependent diffraction limit. Hu and Rollins, however, showed that this variation can be numerically represented by a constant average [43]. In the following derivation, the FWHM diameter is assumed to be a constant for simplicity. The integral of h over the area of a single illuminated pixel j can be written as [43],

∫₀^A h(x = x_j, y = y_j, x_i) dA = ∫_{x_j − δx/2}^{x_j + δx/2} ∫_{−δy/2}^{+δy/2} (4 ln 2 / (π a²)) · exp[ −(4 ln 2 / a²) · ( (x − x_i)² + y² ) ] dy dx    (4.13)

with δx and δy being the pixel width and height respectively. Evaluating the integral of the Gaussian, the expression becomes [43],

∫₀^A h(x = x_j, y = y_j, x_i) dA = (1/2) · Erf( δy √(ln 2) / a ) × [ Erf( (δx − 2x_i + 2x_j) √(ln 2) / a ) + Erf( (δx + 2x_i − 2x_j) √(ln 2) / a ) ]    (4.14)

The error function is defined as Erf(x) = (2/√π) ∫₀^x e^{−t²} dt. As the parameter x approaches positive and negative infinity, the error function tends to 1 and −1 respectively. The first error function comes from the integral in the y direction. The second and third error functions come from the x integral and represent the effect of the ith PSF on the jth pixel from the positive and negative x directions. It can also be seen as the convolution of the Gaussian PSF with the pixel represented by a rectangular function. This expression represents the optical resolution of the spectrometer based on the contribution of the finite pixel size as well as the optical PSF.
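The closed form of equation 4.14 can be cross-checked against a brute-force numerical evaluation of equation 4.13. The sketch below does exactly that; the function names and the midpoint-rule integrator are illustrative, not from the source.

```python
import math

def pixel_response(xi, xj, dx, dy, a):
    """Closed-form pixel integral of the Gaussian PSF (equation 4.14)."""
    s = math.sqrt(math.log(2)) / a
    return 0.5 * math.erf(dy * s) * (
        math.erf((dx - 2*xi + 2*xj) * s) +
        math.erf((dx + 2*xi - 2*xj) * s))

def pixel_response_numeric(xi, xj, dx, dy, a, n=400):
    """Midpoint-rule evaluation of equation 4.13 for comparison."""
    c = 4 * math.log(2) / a**2
    norm = c / math.pi          # same as 4 ln 2 / (pi a^2)
    hx, hy = dx / n, dy / n
    total = 0.0
    for p in range(n):
        x = xj - dx/2 + (p + 0.5) * hx
        for q in range(n):
            y = -dy/2 + (q + 0.5) * hy
            total += norm * math.exp(-c * ((x - xi)**2 + y**2)) * hx * hy
    return total
```

For a 14 µm square pixel and a 14 µm FWHM spot centred on the pixel, both evaluations give roughly 0.58, i.e. about 42% of the PSF energy spills into neighbouring pixels, which is the "spectral crosstalk" described above.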
Examining equation 4.14, the effect of the integral of h(x,y,k) in equation 4.10 would be eliminated if the integral evaluates to 1. As the parameters of the error functions tend to infinity, each error function becomes 1 and equation 4.14 becomes unity. This can be achieved by reducing the FWHM size a of the PSF in the denominators.

This model, which includes the effect of non-evenly sampled k values, was simulated in Matlab. Given the pixel size of 14x14µm², the FWHM of the Gaussian beam was set at 14µm and 28µm (twice the size of the pixel) and the simulation was done over 1024 pixels. Figure 4.6 reveals that at the same depth location, apparent from the identical oscillation frequency, the visibility of the interference fringe is smaller when the Gaussian spot size is bigger. The effect on the final Fourier transformed result is plotted in figure 4.7. By varying the focal spot size of the Gaussian with respect to the pixel size, the effect of the spot size can be analyzed. As can be seen in figure 4.7, larger spot sizes cause the sensitivity, as a function of depth, to drop more rapidly than smaller spot sizes.

Aside from the spectral crosstalk described earlier, the spectral dispersion of the grating also affects the sensitivity fall-off. The inverse relationship between the wavenumber and wavelength is given by equation 4.9. Inherent to the inverse relationship, the high frequencies (short wavelengths) are sampled more sparsely than the low frequencies (long wavelengths). This means that the high frequency components could experience aliasing while the lower frequencies still remain under the Nyquist limit. Also note that the spectral bands integrated by the CCD pixels are unequal in bandwidth, due to the inverse relationship between k and λ. This further degrades the oscillation amplitude.

Figure 4.6: Simulated fringe amplitude with different spot sizes.
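The original Matlab simulation is not reproduced in the text. The Python sketch below captures its essence under simplifying assumptions (uniform k sampling, pixel blur approximated by a discrete Gaussian convolution): a fringe at a fixed depth is blurred by spots of different FWHM and the resulting FFT peak heights are compared.

```python
import numpy as np

n_pix, pitch = 1024, 14e-6                  # CCD geometry from the text
k = np.linspace(2*np.pi/895e-9, 2*np.pi/795e-9, n_pix)  # assumed k grid

def fringe_visibility(dl, fwhm):
    """Peak FFT magnitude of a fringe blurred by a Gaussian of given FWHM."""
    fringe = np.cos(k * dl)
    # Gaussian spot expressed in units of the pixel pitch
    half = 8
    x = np.arange(-half, half + 1) * pitch
    kern = np.exp(-4 * np.log(2) * x**2 / fwhm**2)
    kern /= kern.sum()
    blurred = np.convolve(fringe, kern, mode="same")
    return np.abs(np.fft.rfft(blurred)).max()

# Spot equal to the pixel (14 um) versus twice the pixel (28 um)
v_small = fringe_visibility(1.0e-3, 14e-6)
v_large = fringe_visibility(1.0e-3, 28e-6)
```

Consistent with figures 4.6 and 4.7, the larger spot yields a weaker peak at the same depth; sweeping `dl` traces out the fall-off curve for each spot-to-pixel ratio.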
Blue represents a FWHM spot size of 14µm (equivalent to the pixel size) and red represents a FWHM spot size of 28µm (equivalent to 2x the pixel size).

Figure 4.7: Simulated depth dependent sensitivity fall-off; the legend shows the spot size to pixel size ratio. As expected, the fall-off is worst for a large ratio.

4.4 Detector

The detector array was chosen to have a good spectral response corresponding to the SLD spectrum. Fortunately, fast silicon based CCDs are widely available in this wavelength range. The CCD camera selected for use in the spectrometer was a 12bit, 1024 element, 53kHz line rate high speed camera (E2V Aviiva SM2 1024) with 14µm square pixels. This pixel size was chosen over 10µm as it is easier to align and captures more light in the y direction. Figure 4.8 shows the spectral response of the silicon based CCD. It covers a wide range of wavelengths that encompasses the SLD spectrum.

Figure 4.8: Spectral response of the E2V Aviiva SM2 1024 camera; the 14x14µm² version was used in the SD-OCT system of this project.

4.5 Grating

To separate light into its different wavelength components, a one inch diameter transmission grating (Wasatch, USA) with 1200 lines/mm was selected. Since SD-OCT employs a broadband source, a large range of wavelengths will be diffracted by this grating. As such, the performance of the grating over the intended spectral range is also an important parameter. A flat spectral response with a high efficiency is desired. The chosen grating has a relatively flat response across a wide range of wavelengths and has an efficiency of over 70% in both S and P polarizations.

4.6 Optics

A typical spectrometer consists of reflective optics (mirrors) for collimation and focusing. Reflective elements can eliminate the chromatic dispersion otherwise caused by refractive lenses. However, since reflective mirrors require off axis incident or off axis reflected paths, they are very difficult to align.
Also, reflective optics usually introduce astigmatism unless they are compensated with multiple stages [58]. Most of the SD-OCT systems reported in the literature use refractive optics because they are simpler to align and modify.

Lenses, however, suffer from aberrations due to their refractive properties [14, 58]. Off-the-shelf optics typically only compensate for chromatic and spherical aberrations using a flint and a crown glass, which are combined to create an achromatic doublet lens. Other aberrations such as coma, curvature of field and oblique astigmatism must be corrected with a more complex, multi-element design. These other aberrations can increase the focused spot size on the CCD plane, increasing the sensitivity dependence on depth as seen in equation 4.2. Aberrations in general can be corrected by changing the curvature of the lens, by adjusting the index of refraction through the use of different lens materials, and by the use of positive (convex) and negative (concave) elements to balance aberration effects [44, 45, 59, 60]. They can also be reduced by the use of a combination of lenses with designer defined spacing. In this project, the requirement was to use off-the-shelf optics and hence the lens material and curvature are fixed to manufacturing specifications. Therefore, effort was placed into choosing the right focal length and element combination as well as the intra-lens spacing.

4.6.1 Selection of collimation optics

To achieve a source limited axial resolution of ~7µm, the spectrometer must be able to capture a spectral bandwidth of 101.9nm according to equation 2.8,

∆Λ = (π / (2 ln 2)) · (45nm) = 101.9nm    (4.15)

which results in an imaging range of 1.792mm based on equation 2.9,

l_max = (ln 2 / (2π)) · ((845nm)² / 45nm) · 1024 = 1.792mm    (4.16)

The effective focal length needed for the focusing optics is based on the spectral range ∆Λ and can be found using equation 4.6.
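The numbers in equations 4.15 and 4.16 can be verified with a short calculation, using the source parameters given in the text (845 nm center wavelength, 45 nm FWHM bandwidth, 1024 pixels); the variable names are illustrative.

```python
import math

lam_c = 845e-9     # center wavelength (m)
d_lam = 45e-9      # source FWHM bandwidth (m)
n_pix = 1024       # number of CCD pixels

# Detectable spectral range (equation 4.15): ~101.9 nm
spectral_range = math.pi / (2 * math.log(2)) * d_lam

# Imaging range (equation 4.16): ~1.792 mm
imaging_range = (math.log(2) / (2 * math.pi)) * lam_c**2 / d_lam * n_pix
```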
Substituting ∆Λ for dλ and the CCD array size for dx and solving for f,

f = (cos(θ_d) / g) · (dx / dλ) = 101.4mm    (4.17)

The closest standard focal length is 100mm, which is used as a starting point for the lens design. While restricting the focusing lens to a 100mm focal length, the focal length of the collimating lens was varied to achieve a small focal spot size across the full range of CCD pixels. To maintain a high efficiency, the beam size was also designed to be smaller than the transmission grating diameter, thus avoiding the blockage of light. The collimated beam size is found using the following Gaussian beam equation,

w_o' = λ_o · f / (π · w_o)    (4.18)

where w_o and w_o' are the Gaussian beam waists (radii) of the diverging beam and collimated beam respectively, λ_o is the center wavelength and f is the focal length of the collimation lens. The results of the beam diameters using various lenses of standard focal lengths are summarized in table 4.1.

Focal length of collimation lens (mm) | 1/e² beam diameter (mm) | % of optical power through grating | FWHM focused spot size at optical axis (µm)
50 | 9.96 | 99.98% | 6.35
75 | 14.94 | 97.6% | 4.27
100 | 19.924 | 87.7% | 3.17
150 | 29.885 | 60.63% | 3.09

Table 4.1: Beam diameters resulting from the use of different focal length optics. The diffraction grating has an aperture opening of 20.4mm and therefore the beam collimated by the 150mm lens is too large. Part of the beam will be blocked and the resulting beam diameter will be the same as the grating aperture of 20.4mm.

The design was simulated in Zemax to determine spot sizes on the CCD pixels with a focusing lens of 100mm and the result is plotted in figure 4.9. No software optimizations on the location and placement of the optics were performed. A longer focal length collimation lens produces a larger beam diameter, and its diffraction limited spot size after refocusing is smaller for paraxial beams.
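Equation 4.18 reproduces the beam diameters of table 4.1 once a fiber mode-field radius is chosen. That radius is not stated in the text; w_o ≈ 2.7 µm is assumed here because it matches the tabulated values and is typical of 850 nm single mode fiber.

```python
import math

lam_o = 845e-9   # center wavelength (m)
w_o = 2.7e-6     # assumed mode-field radius of the single mode fiber (m)

def beam_diameter(f):
    """1/e^2 diameter (m) of the collimated beam for a collimation lens
    of focal length f (equation 4.18 gives the radius w_o')."""
    return 2 * lam_o * f / (math.pi * w_o)

d50 = beam_diameter(50e-3) * 1e3    # mm, compare with 9.96 in table 4.1
d75 = beam_diameter(75e-3) * 1e3    # mm, compare with 14.94 in table 4.1
```

The linear scaling with f also makes the table's trade-off visible: doubling the collimation focal length doubles the beam diameter, which shrinks the focused spot but eventually overfills the 20.4 mm grating aperture.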
However, with the diffraction grating placed between the collimation and focusing lenses, the beam is diffracted to different angles based on its wavelength. This causes the incident beam at some wavelengths to approach the focusing lens at an off-axis angle. The consequent coma and oblique astigmatism then start to degrade the focal spot size away from the optical axis of the system [59, 60]. Zemax simulation data reveal that although spot sizes are, as expected, smaller with a 150mm collimating lens at the optical axis, the spot sizes away from the axis are actually larger due to optical aberration. This results in an overall larger average size than using the 50mm lens. Thus the 75mm collimation lens was chosen as the trade-off between focal spot size and grating efficiency.

Figure 4.9: FWHM spot size at the CCD plane with a 100mm focusing lens and varied collimation lens. Note that the change in spot size is largely due to the curved focal plane and the physical location of the beam. The off-center wavelengths are actually out of focus on the CCD plane. The grating is positioned at the back focal length of the lens and the camera is positioned at the lens' front focal distance.

4.6.2 Aberration correction on focusing optics and spot size minimization

The emphasis in this section is on reducing the average spot size by minimizing the effect of aberration due to the focusing optics. Chromatic aberration arises from the inability of an optical system to focus polychromatic rays to the same location. This is due to the variation in the index of refraction with respect to wavelength. Monochromatic aberration, on the other hand, is defined as the deviation of the performance of an optical system from its paraxial optics, where the incident angles of rays are small. Snell's law governs the refraction of light through the interface of two media and is often simplified using a small angle approximation [45, 59, 60].
However, for rays far away from the optical axis and with a large incident angle, this assumption no longer holds and the typical lens equations fail to predict the behaviour of non-paraxial rays [45, 59, 60]. The focal surface for large aperture systems is usually spherical, which makes alignment to a flat CCD imaging plane difficult. Comatic and astigmatic aberration also create non-symmetrical spot profiles which are frequently elliptically shaped.

The focusing lens of the spectrometer is designed to give an effective focal length of 100mm to balance between imaging range and axial resolution. Lens design is often an iterative process; the development starts with a simple case and progresses to more complex arrangements. Optimization of material, surface curvature and spacing is automatically performed using optical simulation software. However, due to the prohibitively high cost of a fully customized optical system, the design of this spectrometer was made using off-the-shelf optics. This restricts some of the controllable variables, such as the material and curvature, to ones that are commercially available. Nonetheless the design of the focusing optics could be accomplished by carefully selecting the premade lenses and by varying the intra-lens spacing.

The most simplified case is the use of a singlet lens with one element, which theoretically produces the most aberration. Chromatic aberration due to the broadband nature of the light can be compensated by using a readily available achromatic doublet lens. The long and short wavelengths in the lens' targeted wavelength range are made to converge at the same location on the optical axis. Further improvements to the focusing system in an attempt to reduce monochromatic aberrations must be achieved through the use of multi-stage focusing. Using Zemax as a simulation and optimization tool, lenses and inter-lens spacing were chosen to minimize focal spot sizes.
Some of the simulated lenses are shown below in figure 4.10. These include a common layout known as the rapid rectilinear lens and a four lens custom design, which are compared to the standard singlet and achromatic doublet lenses.

Figure 4.10: Four lens configurations are considered for the focusing optics. a) Singlet 100mm lens; b) 100mm achromatic doublet lens; c) Rapid Rectilinear lens consisting of two 200mm achromatic lens pairs; d) Custom design lens consisting of a 100mm and a 40mm plano-convex lens, a -25mm plano-concave lens, and a 125mm plano-convex lens.

The rapid rectilinear lens comprises 4 elements, which are positioned to be symmetric about an aperture stop. It is a type of telecentric optical system that is generally used to reduce aberrations caused by off-axis optical rays [14]. Telecentric optical systems are defined as systems in which all of the chief rays (the center ray of a beam) on the image side are perpendicular to a planar image plane and parallel to the local optical axis [59, 60]. The curvature of the focal sphere is flattened by bending the chief ray to be parallel to the optical axis. The design was simulated in Zemax, in which the lens spacing and focal length were optimized. The two 200mm achromatic lens pairs together create an optical system with an effective focal length of 100mm. The 4 lens custom configuration was also implemented with a similar effective focal length. Using lenses of different types and focal lengths, the incident angles of the component beams were decreased. The choice of lens was made from commercially available lens catalogues, and the intra-lens spaces were optimized using Zemax. This process was repeated multiple times until a satisfactory result was obtained.

The performance of the lenses was compared using spot size profiles as well as other common methods such as the MTF, field curvature and aberration coefficients.
The aberration coefficients, however, are only representative of monochromatic aberration. Therefore the best indicators for the spectrometer are the MTF and the spot size of the focused beam at different wavelengths.

4.6.2.1 Seidel aberration coefficient

Listed in table 4.2 are the first three Seidel aberration coefficients [44] that describe the amount of aberration in an optical system. S1, S2 and S3 correspond to spherical, comatic and astigmatic aberration respectively. A number closer to zero indicates that the optical system will exhibit a lower amount of that particular aberration. The singlet lens produces much greater spherical and comatic aberration than the alternatives. It should be noted that these coefficients are for a single wavelength at 845nm and do not translate directly into an improvement in the sensitivity fall-off. They are, however, a great tool for pinpointing the main aberration and also act as the basis for a design comparison.

Lens | SPHA S1 | COMA S2 | ASTI S3
Singlet | 0.006688 | 0.00088 | 0.002183
Doublet achromatic | 0.001461 | -0.000021 | 0.002148
Rapid rectilinear | 0.003358 | 0.000033 | -0.001236
4 Lens Custom Design | 0.001649 | 0.000718 | -0.00066

Table 4.2: Summary of the total Seidel aberration coefficients at 845nm. A number closer to zero indicates a smaller aberration for the optical system.

4.6.2.2 Field curvature

The field curvature is a main concern in camera based spectrometers. The CCD pixels are usually manufactured on a planar surface, which is difficult to align with a curved focal surface. Illustrated in figure 4.11 are the graphical simulation results for field curvature; the horizontal axis is the distance from the ideal focal point of a beam propagating on the optical axis, and the vertical axis represents the distance that a beam deviates from the optical axis. For the orientation of the optics in the simulation software, the sagittal plane (s) is the plane of interest.
As seen in figure 4.11, the doublet lens did not improve the overall curvature relative to the singlet lens, but corrected for chromatic aberration by bundling the focal surfaces of different wavelengths closer together. On the other hand, the custom configuration corrected for the field curvature, but did little to compensate for the chromatic effect. The rapid rectilinear implementation balanced both aspects, reducing the field curvature as well as the distortion from the wide bandwidth.

Figure 4.11: Field curvature of the lens designs; a) Singlet 100mm lens; b) 100mm achromatic doublet lens; c) Rapid rectilinear lens; d) Custom design lens. Note the change in axis scale between lens designs; both the rapid rectilinear lens and the four-lens custom design show a much flatter focal plane.

4.6.2.3 Modulation transfer function

The modulation transfer function (MTF) presented in figure 4.12 indicates the ability of the lens to transfer a modulation in the object space to the image space. The horizontal axis is the spatial frequency and the vertical axis is the modulus of the optical transfer function, which can be interpreted as the ratio of the image-space modulation amplitude to the object-space amplitude. At one pixel per 14µm, the camera should be able to image a line pair (bright and dark lines) in 28µm, which corresponds to a spatial frequency of 35.7lp/mm. The black line in the plot represents the best-case scenario in which the system is diffraction limited. Colored lines correspond to fields of different incident angles, which are determined by the wavelengths of the beam. Superior performance is designated by a higher ratio, depicted as a line closer to the diffraction-limited case. The MTF plot suggests that the rapid rectilinear design as well as the custom configuration are superior in reproducing the modulation in the object space.
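As a quick check of the pixel-pitch arithmetic above, the Nyquist-limited spatial frequency follows directly from the pixel size (a minimal sketch; only the 14µm pitch is taken from the text):

```python
# One line pair (one bright + one dark line) needs two pixels, so the
# Nyquist-limited cycle length is twice the pixel pitch.
pixel_pitch_um = 14.0                    # CCD pixel size quoted above
cycle_um = 2 * pixel_pitch_um            # 28 um per line pair
nyquist_lp_per_mm = 1000.0 / cycle_um    # ~35.7 line pairs per mm
```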
Figure 4.12: Modulation transfer function; a) Singlet 100mm lens; b) 100mm achromatic doublet lens; c) Rapid rectilinear lens; d) Custom design lens

4.6.2.4 Focal spot size

The designs are further compared using the spot size profiles over the full range of imaged wavelengths. The results are presented in figure 4.13 for the x dimension and in figure 4.14 for the y dimension of the camera. The spot profiles increase in size as the wavelength deviates from the center wavelength, which is expected because these beams are further away from the optical axis. In addition to the two main dimensions of the spot, the shape and intensity distribution must also be considered. Frequently, spots will not exhibit a typical Gaussian shape, and the x-y dimensions might not be a good indication of their effect on the neighbouring pixels. Therefore the actual spot profiles are illustrated in figure 4.15, depicting the actual shape and intensity distribution. The spot profiles confirmed the simulation results of the other tests, in which the rapid rectilinear lens performs the best of the four choices.

Figure 4.13: x dimension of spot vs wavelength. The positioning of the lens was optimized in Zemax to give the smallest spot size.

Figure 4.14: y dimension of spot vs wavelength. The positioning of the lens was optimized in Zemax to give the smallest spot size.

Figure 4.15: Spot profile over the full wavelength range. a) Singlet 100mm lens; b) 100mm achromatic doublet lens; c) Rapid rectilinear lens; d) Custom design lens. Note that the scales of the pictures are not equal; the illustration is meant to show the shape and relative size in the x-y dimensions.

4.7 Qualitative verification of simulation

In order to verify some of the predictions of the simulation, the designs of both the achromatic and the rapid rectilinear lenses were tested for their focal plane curvatures and relative spot sizes.
The rapid rectilinear lens was assembled using two 200mm achromatic lenses from Thorlabs. As with the 100mm achromatic lens, all three optical elements were coated with an IR anti-reflective coating. The SLD light source was not used in this part of the experiment because it would be difficult to isolate a single spot size or wavelength for analysis. Therefore three laser diodes of different center wavelengths, namely 808nm, 850nm and 904nm, were acquired for system testing. By coupling only one laser diode into the system at a time, the system response at each particular wavelength could be determined. The CCD camera was mounted on a three-axis micrometer stage that allows for adjustment in the x, y and z dimensions. The camera was positioned at a range of z locations near the theoretical focal length of the lens. Its movement direction is illustrated in figure 4.16. By locating the smallest relative spot size for each diode wavelength, one can determine the focal point as well as the focal plane curvature.

For each laser diode, light was coupled into the system and the spectrometer recording was plotted. The optical powers of the lasers were adjusted to be identical for comparison. Figure 4.17 displays the readings at the z location of the camera where the largest intensity was recorded for the 850nm laser. The single achromatic lens was able to focus the light from the 850nm and 808nm diodes at this z location. However, the 904nm beam was out of focus, as suggested by its low intensity. The rapid rectilinear lens, on the other hand, was able to focus the light of all three diodes to relatively similar z locations.

Figure 4.16: CCD camera setup; the red arrow shows the direction of movement used when verifying the focal curvature. Note the curved focal surface and the flat CCD plane.
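The measurement sketched in figure 4.16 — record a CCD line scan at each camera z position and find where each pixel is brightest — can be mocked up with toy data (a sketch; the parabolic curvature and all numeric values below are illustrative assumptions, not measurements):

```python
import numpy as np

def focal_surface(stack, z_positions):
    """Given intensity profiles recorded at several camera z positions
    (stack[i, :] = CCD line scan at z_positions[i]), return for each pixel
    the z location of maximum intensity, i.e. the measured focal surface."""
    best = np.argmax(stack, axis=0)          # index of brightest slice per pixel
    return np.asarray(z_positions)[best]

# Toy data: a curved focal surface -- edge pixels focus farther from the lens.
z = np.linspace(-1.0, 1.0, 21)               # camera offsets in mm (hypothetical)
pixels = np.arange(-50, 51)                  # pixel index relative to center
true_focus = 0.0004 * pixels**2              # assumed parabolic field curvature (mm)
stack = np.exp(-((z[:, None] - true_focus[None, :]) / 0.2)**2)
surface = focal_surface(stack, z)            # recovers the curved focal plane
```

Plotting `stack` as a false-color image of z versus pixel number reproduces the kind of contour map described for figure 4.18.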
The curvature of the field can be qualitatively compared by creating a contour map from slices of intensity plots similar to figure 4.17. By varying the z location of the camera relative to the lens, a contour map in the x-z plane with color representing the intensity can be constructed, as shown in figure 4.18. The plots are truncated to highlight the locations where the intensity peaks occur. The vertical axis is the z location and the horizontal axis is the x axis or, more specifically, the pixel location. The doublet achromatic lens produces a curved focal plane, which can be represented by the interpolating red dotted line. The rapid rectilinear lens shows a much flatter focal plane and is hence a better match to the planar CCD image plane. This results in a smaller average spot size because fewer wavelengths are out of focus.

Figure 4.17: Detected intensities at the lens focus for the three laser diodes at 808, 850 and 904nm, combined into a single plot. Top – intensity detected with the achromatic lens; note the relatively small signal at 808nm, indicating that it is not focused on the CCD pixels. Bottom – intensity detected with the rapid rectilinear lens; note the more evenly distributed intensity, indicating that all three wavelengths were focused onto the CCD.

Figure 4.18: Contour plots of the detected signals of the three laser diodes. The y-axis represents the distance of the CCD from the focusing lens; the x-axis is the CCD pixel number. The intensity is presented in false color, with red corresponding to the highest reading and blue the lowest. Top – doublet achromatic lens; bottom – rapid rectilinear lens.

4.8 Quantitative verification of simulation

To quantify the improvement in sensitivity fall-off, an experiment was performed to measure the sensitivity reduction as a function of depth location. A mirror was placed at the focus of the sample beam optics.
Adjusting the reference mirror simulates relocating the sample mirror to different depth locations within the imaging range. The sensitivity plots are presented in figure 4.19. It can be seen that the sensitivity fall-off is dramatically reduced by the use of the rapid rectilinear lens, with an improvement of over 50dB detected. However, this improvement might not be solely due to the focusing lens design; misalignment, calibration and other factors could also have affected the results. Alignment and calibration with the curved focal surface of the doublet is more difficult than with the rapid rectilinear lens. Nonetheless, the dramatic improvement is an indication that the lens design does reduce the sensitivity fall-off.

Figure 4.19: Experimental data of sensitivity fall-off: Top – achromatic doublet lens; Bottom – rapid rectilinear lens. Both are measured across the full imaging range of the system.

4.9 Alignment of CCD camera

Sensitivity fall-off is highly dependent on the focal spot size, which in turn is tightly coupled to the alignment of the camera with the optics. A tilt about the y axis, as depicted in figure 4.20, would inevitably put some of the wavelengths out of focus and thereby increase the fall-off. Therefore, a simplified alignment method was developed using a technique similar to the one reported by the researchers at UC Irvine [61]; however, no assumption was made as to the focal length or the path of the central wavelength.

Figure 4.20: Alignment of the camera and its associated optics

Assuming the focusing optics to be ideal, the lens will not change the direction of the beam but only acts to focus it onto the CCD imaging surface. The laser diodes could then be used for alignment between the optics and the CCD camera.
By recording the locations of the focal spots on the CCD at different wavelengths, and combining them with the theoretical knowledge of the diffracted angle of the beam from the grating, one can determine the tilt of the camera with respect to the optical axis. The trigonometric analysis of the geometry is summarized in figure 4.21. The values appearing in green are known, and the spacing between the foci of the three diodes can be deduced from the pixel-to-pixel distance. The diffracted angle of the beam can also be calculated theoretically from the grating equation. The other angles (b, c, d1, d2) need to be calculated before the tilt angle can be estimated. This estimate is more accurate than the one reported at UC Irvine, since it assumes neither the focal length of the lens nor the x translational alignment.

Figure 4.21: Geometry of the spectrometer alignment

Using the sine law with triangles ABD and ADC,

AD/sin(b) = BD/sin(a1)  and  AD/sin(c) = DC/sin(a2)    (4.19)

Notice that the angles a1 and a2 will be smaller than 90 degrees, so the use of the sine law is unambiguous. Equating AD in both equations of 4.19,

BD·sin(b)/sin(a1) = DC·sin(c)/sin(a2)    (4.20)

In the large triangle ABC, the angle b can be expressed in terms of a1, a2 and c. Substituting into equation 4.20 gives

BD·sin(180° − a1 − a2 − c)/sin(a1) = DC·sin(c)/sin(a2)    (4.21)

Expanding the sine term and isolating c,

c = tan⁻¹[ sin(180° − a1 − a2) / ( DC·sin(a1)/(BD·sin(a2)) + cos(180° − a1 − a2) ) ]    (4.22)

After c is found, the remaining interior angles of the triangles follow from:

d2 = 180° − a2 − c    (4.23)

d1 = 180° − d2    (4.24)

b = 180° − a1 − d1    (4.25)

and the lengths of the sides can be found using equation 4.19. Using this information, the dimensions and angles of the shaded blue triangle can be found.
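The sine-law solution above (equations 4.19–4.25) can be checked numerically with a short script (a sketch; the angles and side lengths below are hypothetical test values, not the thesis's measurements):

```python
import math

def solve_alignment_triangle(a1, a2, BD, DC):
    """Solve equations 4.22-4.25: a1, a2 in degrees (the apex angles subtending
    BD and DC), BD and DC in consistent length units. Returns (b, c, d1, d2)."""
    S = 180.0 - a1 - a2
    # Equation 4.22: isolate c after expanding sin(180 - a1 - a2 - c).
    c = math.degrees(math.atan(
        math.sin(math.radians(S)) /
        (DC * math.sin(math.radians(a1)) / (BD * math.sin(math.radians(a2)))
         + math.cos(math.radians(S)))))
    d2 = 180.0 - a2 - c      # eq. 4.23
    d1 = 180.0 - d2          # eq. 4.24
    b = 180.0 - a1 - d1      # eq. 4.25
    return b, c, d1, d2

# Hypothetical check: a triangle constructed with known angles b=90, c=40
# (AD = 1, a1 = 20, a2 = 30) should be reproduced from (a1, a2, BD, DC).
b, c, d1, d2 = solve_alignment_triangle(
    20.0, 30.0,
    BD=math.sin(math.radians(20)),                          # = AD*sin(a1)/sin(b)
    DC=math.sin(math.radians(30)) / math.sin(math.radians(40)))  # = AD*sin(a2)/sin(c)
```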
The diffracted angle a3 then follows from the grating equation:

a3 = 30.464° − sin⁻¹(1200 lines/mm × λd − sin(30.464°))    (4.26)

and the tilt angle can be found using

tilt angle = d2 + a3 − 90°    (4.27)

The tilt of the camera was estimated using the above method, and the first measurement resulted in an angle of 12.13°. Multiple iterations of adjustment by hand were conducted to reduce the angle to 3.4°. Further improvement to the tilt angle was extremely difficult due to the apparatus' sensitivity to movement and the lack of precision when aligning by hand.

Tilt angle    Fall-off at maximum imaging depth
12.13°        22.04 dB
3.408°        20.98 dB

Table 4.3: Sensitivity fall-off at the initial and final tilt angles

4.10 Final design

The design of the spectrometer affects most of the system parameters, so each component must be considered carefully. As described in previous sections, the axial depth-dependent sensitivity fall-off is directly related to the ratio of the detector pixel size to the focal spot size. A larger pixel and a smaller spot size will produce the best fall-off profile. The schematic of the final design is shown in figure 4.22.

Figure 4.22: Schematic of the spectrometer

Chapter 5 System design part 3: data processing

SD-OCT, unlike conventional microscopy, requires several steps of processing before the image can be reconstructed. The data processing stage of SD-OCT is generally the most time-consuming component. In cases where images are displayed immediately after acquisition, processing can become the bottleneck of the system [62]. The processing time is highly dependent on the algorithm used, which also affects the reconstructed image quality. This chapter will investigate several common processing methods to reduce the sensitivity fall-off. It will also introduce a processing technique that is new to the SD-OCT community and that simultaneously improves both speed and image quality.

The second part of the chapter will focus on accelerating the image reconstruction with multiprocessing.
With the advances in processor technology, current workstations are typically equipped with two or more processors, which can be used concurrently to process large amounts of data. The goal is to maximize the utilization of the available resources to achieve real-time SD-OCT imaging without the use of specialized hardware such as digital signal processors (DSPs) [36] or field-programmable gate arrays (FPGAs) [35].

5.1 SD-OCT data processing

Data collected using SD-OCT instruments are intensities, I(λ), as a function of wavelength. This is accomplished by the use of a diffraction grating to distribute wavelength components evenly in space, followed by detection with an array of photodetectors. However, the Fourier conjugate of z, the axial depth of the sample, is the wavenumber k. Thus, a conversion between wavelength and wavenumber is needed before the application of a Fourier transform. Figure 5.1 shows the basic steps in obtaining an axial profile from acquired A-line data.

Figure 5.1: Data processing steps for SD-OCT

However, the non-linear relationship between k and λ precludes the direct use of the FFT algorithm, as it requires its input to be sampled uniformly in its domain. Unless the data is resampled using interpolation, the Fourier transform must be computed via a slow direct matrix multiplication. Traditional approaches combine interpolation and the FFT algorithm for signal reconstruction. The accuracy of this resampling step directly influences the sensitivity fall-off in SD-OCT, which compounds the spectrometer-induced sensitivity fall-off.

A hardware technique has been reported that eliminates the resampling step, resulting in an improvement in both the sensitivity fall-off and the processing speed [29]. A linear-in-wavenumber spectrometer uses an extra custom-made prism to redistribute the light evenly in wavenumber. In this case, the resampling is done in real time by the prism.
Although this technique is promising, the prism must be designed specifically for a given wavelength range. It also requires an additional step of aligning the prism, which significantly increases the complexity of the spectrometer design and setup. This makes it unsuitable for SD-OCT systems with commercially made spectrometers and makes it difficult to upgrade existing systems. Most systems have used software-based resampling techniques due to their simplicity.

Aside from sensitivity fall-off, SD-OCT images also suffer from dispersion effects. Being based on interferometry, the interference of waves at different wavenumbers is used to reconstruct the axial profiles of the sample. If the dispersion is not balanced between the reference and sample arms, waves of different wavenumbers will propagate with different velocities, which broadens the interferometric autocorrelation. This effectively reduces the axial resolution of the OCT system. Without hardware compensation, numerical techniques must be used to compensate for the effect in post-processing.

5.2 Conversion from wavelength to wavenumber

The N points measured by the spectrometer at evenly spaced values of λ are resampled into N points evenly spaced in k. The simplest method is piecewise-constant interpolation, where the new data points are assigned the same value as their closest neighbours. However, it offers only a minimal speed advantage over linear interpolation, which is used in some high-speed SD-OCT systems [62]. The interpolants of linear interpolation are calculated from the two nearest data points using a first-order linear equation. This method is advantageous in settings where speed is important, but post-FFT results show a sensitivity fall-off inferior to that of more accurate methods such as cubic spline interpolation. Cubic spline interpolation, as the name implies, uses a cubic polynomial to interpolate points in the intervals between known data points [63, 64].
This method, although more accurate and with a better sensitivity fall-off than linear interpolation, is more complex and requires a longer processing time. A recent paper by Wang et al. [65] demonstrated that the non-uniform discrete Fourier transform (NDFT) exhibits a better sensitivity fall-off than the FFT combined with cubic spline interpolation. The NDFT, however, requires an even longer processing time due to the direct application of the Fourier transform by matrix multiplication.

The NDFT proves to be one of the more successful algorithms in alleviating the sensitivity fall-off problem [65]. It would, however, be more useful for the clinical applications of OCT if its performance could be extended into the real-time domain. The non-uniform fast Fourier transform (NUFFT) presented in this chapter is a fast algorithm that approximates the NDFT, matching the sensitivity performance of the NDFT with improved speed. The NUFFT has been used in medical image reconstruction in magnetic resonance imaging [66], computed tomography [67] and ultrasound [68]. This is the first reported application of the NUFFT to SD-OCT image reconstruction.

5.2.1 Spectrometer calibration

The wavelength-to-pixel mapping is an important factor that affects the accuracy of the interpolation algorithm. Therefore the spectrometer needs to be calibrated to determine its pixel-to-wavelength relation. This knowledge is required before applying the Fourier transform for A-line or image reconstruction. The results from section 4.9 determine the pixel locations of three wavelengths; to obtain the intermediate locations and spacings, however, an alternative method was used [69].

Considering a single reflector, the interference fringes can be written as

I(k) = s(k)·cos(kz)    (5.1)

It can be seen that the sinusoidal modulation is a real-valued signal based on the phase kz.
For a real-valued signal x(t), the instantaneous or local phase is defined as

φ(t) = arg[x_a(t)]    (5.2)

where arg() is the argument function of a complex number, and x_a(t) is the analytic function of x(t), defined as

x_a(t) = x(t) + i·x̂(t)    (5.3)

where x̂(t) denotes the Hilbert transform of x(t). Therefore, by applying a Hilbert transform to the real-valued signal I(k) and substituting into equation 5.3, the analytic function of the real-valued interference signal can be formed:

I_a(k) = I(k) + i·Î(k)    (5.4)

The phase can then be extracted using equation 5.2, where

φ(k) = kz    (5.5)

The wavenumber k measured by the spectrometer is an array of N points, each detected by a pixel numbered n ∈ [1, N]. Expressing the propagation constant k in terms of wavelength:

k(n)·z = 2π·z/λ(n)    (5.6)

The path length difference z is a parameter that is very difficult to measure properly, because it is determined by both the reference and sample arm lengths. Therefore the calibration is accomplished by placing a weak reflector at two locations z1 and z2. The difference between the two locations z1 and z2 is easily measurable, since the movement occurs only on the micrometer-stage-mounted interferometer arm. With the two measurements, one obtains the phase terms of both interference fringes:

k(n)·z1 = 2π·z1/λ(n)  and  k(n)·z2 = 2π·z2/λ(n)    (5.7)

Taking the difference of the two (since the difference in z is known), unwrapping the phase term and isolating the wavelength λ,

λ(n) = 2π·(z2 − z1) / (k(n)·z2 − k(n)·z1)    (5.8)

The pixel-to-wavelength mapping can therefore be determined and used in the interpolation and resampling step of image reconstruction.

5.2.2 Linear interpolation

Linear interpolation is a simple method of curve fitting using a linear polynomial. For the interval between two known data points (x1, y1) and (x2, y2), an equation of a line is formed.
To solve for the unknown y value at a location x within the interval [x1, x2], the formula is given as

y = y1 + (x − x1)·(y2 − y1)/(x2 − x1)    (5.9)

The interpolated data set can then be Fourier transformed using the FFT algorithm.

5.2.3 Cubic spline interpolation

Cubic spline interpolation is a more accurate way of finding unknown values between two known data points. The term spline means piecewise polynomial and, as the name suggests, the interpolation is done by deriving a cubic polynomial that describes the data range between known points. Unlike linear interpolation, which is based on only two known data points, cubic spline interpolation takes into account the whole set of data. Given a set of coordinates C = [(x_0, y_0), (x_1, y_1), …, (x_n, y_n)], the spline representing each interval i = 0, …, n−1 is given as [70]

S_i(x) = z_{i+1}(x − x_i)³/(3h_i) + z_i(x_{i+1} − x)³/(3h_i) + (y_{i+1}/h_i − z_{i+1}·h_i/3)(x − x_i) + (y_i/h_i − z_i·h_i/3)(x_{i+1} − x)    (5.10)

where h_i = x_{i+1} − x_i and the coefficients z_i are found by solving the system of equations

z_0 = 0
h_{i−1}·z_{i−1} + 2(h_{i−1} + h_i)·z_i + h_i·z_{i+1} = 3[(y_{i+1} − y_i)/h_i − (y_i − y_{i−1})/h_{i−1}],  i = 1, …, n−1
z_n = 0    (5.11)

A property of the cubic spline is that it is continuous up to the second derivative. This means that both the slope and the curvature are smooth across each interval boundary, making this a much more accurate way to accomplish the interpolation. As with linear interpolation, the resulting data set from cubic spline interpolation is Fourier transformed to form an axial profile.

5.2.4 Non-uniform discrete Fourier transform (NDFT)

The non-uniform discrete Fourier transform is a form of the Fourier transform that accepts non-evenly sampled input data. The NDFT applies the Fourier transform directly at the unequally spaced nodes in wavenumber.
The reconstructed axial profile can be given as [65]

a(z_m) = Σ_{i=0}^{N−1} I(k_i)·e^{−j·2π·(k_i − k_0)·m/Δk}    (5.12)

where z_m is the mth pixel of the depth coordinate z, Δk is the spectral range in terms of wavenumber, and k_i is the wavenumber sampled at the ith pixel of the CCD camera. Equation 5.12 can be rewritten in matrix form as

a = D·I    (5.13)

where

a = [a(z_0), a(z_1), …, a(z_{N−1})]ᵀ    (5.14)

I = [I(k_0), I(k_1), …, I(k_{N−1})]ᵀ    (5.15)

and D is the N×N matrix whose rows are geometric progressions, i.e. the Vandermonde matrix [65] with entries

D(m, i) = p_i^{−m},  m, i = 0, 1, …, N−1    (5.16)

where p_i is given by

p_i = exp(j·2π·(k_i − k_0)/Δk),  i = 0, 1, …, N−1    (5.17)

By applying the NDFT directly with a matrix multiplication of complexity O(N²), it was shown that the sensitivity fall-off could be improved by eliminating interpolation errors [65].

5.2.5 Non-uniform fast Fourier transform (NUFFT)

The NUFFT algorithm was presented and analyzed by Dutt and Rokhlin [71]. An accelerated algorithm for approximating the NDFT, the NUFFT plays a role analogous to that of the FFT in computing a discrete Fourier transform (DFT), reducing the O(N²) complexity to O(N log N). There are three types of NUFFT, distinguished by their inputs and outputs. The type I NUFFT transforms data from a non-uniform grid to a uniform grid, the type II NUFFT goes from a uniform to a non-uniform grid, and the type III NUFFT starts on a non-uniform grid and results in another non-uniform grid [72]. Here the focus will be on the type I NUFFT, specifically in transforming data non-uniformly sampled in wavenumber (k) into axial depth information in the uniform z domain.

The NUFFT approximates the NDFT by interpolating an oversampled FFT [73].
The flow of the algorithm is illustrated in figure 5.2. The signal is first upsampled by convolution with an interpolation kernel, followed by the evaluation of a standard FFT. The result of the FFT is then subjected to a deconvolution, producing the approximation. The NUFFT can exchange speed for accuracy through the choice of upsampling rate and interpolation kernel. We have chosen to use the Gaussian gridding method with the interpolation kernel suggested by Greengard and Lee [74], which is based on the work of Dutt and Rokhlin [71].

Figure 5.2: NUFFT algorithm

The following equation defines the DFT which the type I NUFFT approximates:

a(z) = (1/N)·Σ_{j=0}^{N−1} I(k_j)·e^{−i·z·k_j},  z ∈ [0, M]    (5.18)

where I(k_j) is the signal sampled at non-uniform k spacing and N is the number of sample points. The signal can be resampled using the user-defined Gaussian interpolation kernel Gτ(k) [74], illustrated in figure 5.3 and given by

Gτ(k) = e^{−k²/(4τ)}    (5.19)

where

τ = (1/M²)·π·Msp/(R·(R − 0.5))    (5.20)

Figure 5.3: Resampling into equally spaced bins using the Gaussian interpolation kernel. The blue circles are the original unevenly sampled data. A Gaussian function is convolved with each original data point, spreading its power over a few adjacent bins. Each bin accumulates the power from nearby points via addition. The evenly distributed bins can then be Fourier transformed by the FFT.

M is the number of points in the z domain, which is the same as the input length in the SD-OCT application. R is defined as the oversampling ratio Mr/M, where Mr is the length of the intermediate FFT result. Msp sets the length of the Gaussian kernel and its effect on neighbouring points. By changing the values of Msp and R, one can select the desired trade-off between accuracy and speed. A larger Msp or a larger R will increase the accuracy of the NUFFT, but at reduced speed.
Convolving Gτ(k) with I(k) gives the intermediate function Iτ(k), defined as

Iτ(k) = I(k) ⊗ Gτ(k) = ∫_{−∞}^{∞} I(y)·Gτ(k − y)·dy    (5.21)

In order to compute the Fourier transform, only points on an evenly spaced grid are needed. Performing the convolution in discrete form and sampling on the uniform grid,

Iτ(m·Δk/Mr) = Σ_{n=0}^{N−1} I(k_n)·Gτ(m·Δk/Mr − (k_n − k_0)),  m ∈ [0, Mr − 1]    (5.22)

where k_0 is the first wavenumber in the sampled data. The discrete Fourier transform of equation 5.22 can then be computed using a standard FFT algorithm on the oversampled grid with Mr points:

aτ(z_m) ≈ (1/Mr)·Σ_{n=0}^{Mr−1} Iτ(n·Δk/Mr)·e^{−i·z_m·n·Δk/Mr}    (5.23)

Once aτ(z) has been calculated, a(z) can be recovered by a deconvolution with Gτ(k) in k space or, alternatively, by a simple division by the Fourier transform of Gτ(k) in z space. The Fourier transform of Gτ(k) can be expressed as

g(z_m) = √(2τ)·e^{−z_m²·τ}    (5.24)

This results in

a(z_m) = √(π/τ)·e^{z_m²·τ}·aτ(z_m)    (5.25)

The resulting a(z_m) has extra data points appended at the end due to the oversampled grid. These points do not contain information about the original signal and theoretically correspond to locations in z beyond the imaging range of the system. In other words, the spectrometer does not have an adequate spectral sampling rate to produce these points; they are merely a by-product of the oversampling step.

The improvement in reconstruction over traditional methods is due to the use of an oversampled grid and the convolution function. As previously mentioned, the spectrometer does not acquire data evenly sampled in wavenumber. Even after interpolation and resampling, the spectral band integrated by each CCD pixel remains unequal in bandwidth [29]. Therefore part of the spectrum might not be sampled sufficiently to meet the Nyquist criterion, which would cause aliasing effects in the signal.
The oversampled grid avoids this aliasing problem.

Aside from aliasing, signals with frequencies near the Nyquist frequency vary too rapidly for local interpolation methods to perform well. The highest spectral component contains marginally more than two points per period, which can hardly be approximated by linear or cubic interpolation [75]. The convolution with a Gaussian function spreads the data over more Fourier transform bins (up to 6 with Msp=3), allowing for a more accurate calculation of the Fourier transform.

The input and output of the NUFFT are quite similar to those of the FFT: both take a vector of complex numbers in one domain and produce their counterparts in another domain. The only difference is that the input of the NUFFT is not required to be equally spaced. Hence the explicit interpolation step of SD-OCT image reconstruction can be eliminated, as it is absorbed into the NUFFT algorithm. This is an attractive trait of the NUFFT, since the sensitivity fall-off can be improved with only minor changes to the system.

5.3 Sensitivity fall-off with different reconstruction methods

To measure the system sensitivity fall-off using different processing algorithms, 1000 A-lines were acquired from a mirror reflector in the sample arm at 17 positions along the imaging depth. The camera exposure time was 20µs for each A-line. The interference fringes were processed using several common methods: linear interpolation with FFT, cubic spline interpolation with FFT, NDFT and NUFFT (Msp=3, R=2). With this choice of parameters for the NUFFT, one can expect an error of < 10^-3 between the NDFT and NUFFT [17].

The depth-dependent sensitivity fall-offs of the methods are plotted in figure 5.4. It can be seen that at deeper axial depths, the sensitivity fall-off due to the interpolation method is significant.
The NDFT and NUFFT achieve the best fall-off at -12.5dB over the full range, while a typical low fall-off SD-OCT reconstruction using cubic spline interpolation suffers an 18.1dB decrease in sensitivity. The NUFFT therefore improves the sensitivity fall-off by 5.6dB. Regular linear interpolation has a fall-off greater than -21dB, nearly 10dB worse than its NUFFT counterpart. The improvement from using the NUFFT starts gradually at shallow depths and increases significantly at deeper depths.

Figure 5.4: Sensitivity fall-off using different reconstruction methods (with the rapid rectilinear lens)

Aside from the benefit of a decreased fall-off, the NUFFT algorithm can further increase the local SNR by removing the shoulders, or side-lobes. The shoulders appear due to interpolation error as the modulation fringes in the measured OCT signal approach the Nyquist rate, where local interpolation algorithms fail to resample the data at the correct values [76]. As depicted in figure 5.5, a single reflector at 1.3mm depth produced a single peak in the A-line profile. Note, however, that with linear or cubic spline interpolation, a broad shoulder can be seen in the profile, which has also been reported by others [39, 76, 77]. This shoulder can degrade the image quality when multiple reflections occur close together, such as in biological samples. A typical method to reduce this shoulder is to zero-pad the data before the Fourier transform [39, 76], which requires a larger sized FFT, thus slowing down the imaging system. The NDFT and NUFFT methods, as shown in figure 5.5, do not produce this shoulder even at deeper imaging depths.

Figure 5.5: (a) Typical point spread function with a single partial reflector: linear interpolation, cubic spline interpolation, NDFT, and NUFFT are represented in blue, red, black and green, respectively.
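The Gaussian-gridding procedure of section 5.2.5 — spread each non-uniform sample onto an oversampled grid, apply a standard FFT, then deconvolve by the kernel's transform — can be sketched in NumPy and checked against the direct NDFT. This is a sketch, not the thesis's implementation: the function name, the scaling of the wavenumber nodes onto [0, 2π), and the random test data are all illustrative assumptions.

```python
import numpy as np

def nufft_type1(f, x, M, Msp=3, R=2):
    """Type-I NUFFT by Gaussian gridding (after Greengard & Lee).
    f: values at non-uniform nodes x (scaled into [0, 2*pi)).
    Returns the M centred coefficients F[k] = sum_j f[j]*exp(-1j*k*x[j]),
    for k = -M//2 ... M - M//2 - 1."""
    Mr = R * M                                    # oversampled grid length
    tau = np.pi * Msp / (M * M * R * (R - 0.5))   # kernel width, cf. eq. 5.20
    h = 2 * np.pi / Mr                            # oversampled grid spacing
    ftau = np.zeros(Mr, dtype=complex)
    nearest = np.rint(x / h).astype(int)
    for j in range(len(x)):                       # spread each sample over
        for m in range(nearest[j] - Msp, nearest[j] + Msp + 1):  # 2*Msp+1 bins
            ftau[m % Mr] += f[j] * np.exp(-((m * h - x[j]) ** 2) / (4 * tau))
    Ftau = np.fft.fft(ftau) / Mr                  # uniform FFT on the fine grid
    k = np.arange(-(M // 2), M - M // 2)          # centred frequency indices
    # Deconvolve by the Gaussian's transform (correction factor exp(tau*k^2)).
    return np.sqrt(np.pi / tau) * np.exp(tau * k**2) * Ftau[k % Mr]

# Compare against the direct (O(N^2)) NDFT on random non-uniform nodes.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2 * np.pi, 200))
f = rng.normal(size=200) + 1j * rng.normal(size=200)
M = 64
k = np.arange(-(M // 2), M - M // 2)
F_direct = np.array([np.sum(f * np.exp(-1j * kk * x)) for kk in k])
F_fast = nufft_type1(f, x, M)
rel_err = np.max(np.abs(F_fast - F_direct)) / np.max(np.abs(F_direct))
```

With Msp=3 and R=2, as in the text, the agreement with the direct transform is at roughly the 10^-3 level, while the cost is dominated by a single FFT of length 2M.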
5.4 Image comparison

To confirm the performance of the NUFFT-based SD-OCT system, imaging of biological samples was conducted. Due to the widespread use of 800 nm SD-OCT systems for ophthalmology imaging [78], the eye was used as a model in the experiment. A frequently examined specimen is the cornea of the eye, where central corneal thickness often correlates with the progression of glaucoma in humans [79]. Using a squid's protruding eye as a sample, an ex-vivo image was taken and processed with the aforementioned algorithms. The image reconstructed using linear interpolation with FFT is shown in figure 5.6. Although nothing was present in the path of the probing beam above the cornea, the image produced by linear interpolation shows structures, or artifacts, in that region. In addition, blurring occurred at the posterior edge of the cornea, which was also present in the image reconstructed with cubic spline interpolation. Both of these artifacts are absent in the NDFT- and NUFFT-produced images. The cause of these artifacts is attributed to the broad shoulder effect shown previously in figure 5.5.

Figure 5.6: Ex-vivo OCT image of the eye of a squid processed using a) linear interpolation + FFT, b) cubic spline interpolation + FFT, c) NDFT, d) NUFFT; scale bars are 0.5 mm.

Figure 5.7: Analysis of corneal images, highlighting the difference at the anterior surface. a) Linear interpolation with FFT, d) NUFFT. Shown on the left is a representative A-line (number 241) of the zoomed-in images. The NUFFT produced an image with higher intensity. The red arrow indicates the location of the artifact due to the shoulder effect in reconstructions with linear and cubic interpolations.

Figure 5.8: Analysis of corneal images showing the difference at the posterior surface. a) Linear interpolation with FFT, d) NUFFT. Shown on the left is a representative A-line (number 175) of the zoomed-in images. The NUFFT produced an image with higher intensity.
The red arrow indicates the location of the artifact due to the shoulder effect in reconstructions with linear and cubic interpolations.

5.4 Numerical dispersion compensation

Dispersion within the OCT system causes different frequencies to propagate with different velocities. This broadens the interferometric autocorrelation if the dispersion is not balanced between the reference and sample arms. A dispersion mismatch produces a phase shift e^{jθ(k)} in the detected spectrum as a function of the wavenumber k. The phase θ(k) can be expanded in a Taylor series about the center frequency of the light source [40, 56]:

θ(k) = θ(k₀) + θ′(k₀)(k₀ − k) + (1/2!) θ″(k₀)(k₀ − k)² + (1/3!) θ‴(k₀)(k₀ − k)³ + …  (5.26)

The first term is a constant and represents the phase delay of the center frequency passing through a material with propagation constant k. The second term is the inverse group velocity; it describes the overall time delay of a pulse propagating through a medium. In broadband optics, this term represents the inverse of the velocity at which the pulse envelope propagates. The first two terms are not related to dispersive broadening. The third term is named the group delay dispersion and represents the variation of the group velocity with frequency. This term causes the broadening of the autocorrelation function and degrades the FWHM resolution in SD-OCT systems. Although higher-order terms do contribute to dispersion, compensation is largely done by adjusting the third term. This term can be eliminated in hardware by adding dispersive elements to one arm such that the dispersion is balanced between the two arms. It can also be compensated numerically, by determining the relevant higher-order terms of θ(k) and introducing an opposite phase term to cancel the dispersion.

To determine the phase term that arises from the dispersion mismatch in the OCT system, one measurement using a single reflector suffices [80].
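Anticipating the procedure detailed in the following paragraphs, this single-reflector calibration can be sketched in numpy. This is a simplified illustration: the peak-isolation window, the normalized fitting axis, and the synthetic chirp used to exercise it are assumptions of the sketch, not parameters taken from the thesis software.

```python
import numpy as np

def dispersion_phase(fringe_k, halfwidth=30, poly_order=9):
    """Estimate the dispersion phase theta(k) from a single-reflector fringe.

    fringe_k: real fringe, already resampled uniformly in k. The window
    isolating the coherence peak (halfwidth) is an implementation detail
    assumed by this sketch.
    """
    N = len(fringe_k)
    A = np.fft.fft(fringe_k)
    peak = np.argmax(np.abs(A[1:N // 2])) + 1   # locate the coherence peak
    A = np.roll(A, -peak)                       # shift it to the origin (removes k*dl)
    mask = np.zeros(N)
    mask[:halfwidth + 1] = 1.0                  # keep only the peak's neighbourhood
    mask[-halfwidth:] = 1.0
    spectrum = np.fft.ifft(A * mask)            # complex spectrum back in k-space
    phase = np.unwrap(np.angle(spectrum))       # arctan(imag/real), unwrapped
    k = np.linspace(-1.0, 1.0, N)               # normalized k axis for a stable fit
    return np.polyfit(k, phase, poly_order)

def compensate(fringe_k, coeffs):
    """Multiply the fringe by exp(-j*theta(k)), dropping constant/linear terms."""
    c = coeffs.copy()
    c[-1] = 0.0   # constant term: overall phase delay, causes no broadening
    c[-2] = 0.0   # linear term: group delay, causes no broadening
    k = np.linspace(-1.0, 1.0, len(fringe_k))
    return fringe_k * np.exp(-1j * np.polyval(c, k))
```

Applying `compensate` to a synthetically chirped cosine fringe sharpens its Fourier peak, which is the numerical analogue of restoring the FWHM resolution described above.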
The method was introduced and explained in detail by B. Cense et al. [56] and was used in an ultra-high resolution, high-speed SD-OCT system. The interference fringe is first resampled into k-space using one of the above interpolation methods. It is then Fourier transformed into z-space, where the coherence peak is shifted to the origin. Shifting the peak to the origin effectively sets the path length difference ∆l to zero, which removes the k∆l contribution to the phase term. The remaining phase is solely due to the phase shift θ(k) from the dispersion mismatch. After applying the inverse Fourier transform, a complex spectrum in k-space is obtained. By taking the arctangent of the imaginary component over the real component, the phase term is extracted as an array. This array represents how much each wavenumber k is shifted due to the dispersion mismatch. The N-point phase term is then fitted with a ninth-order polynomial. Although a lower-order polynomial fit could be used, it was shown that a ninth-order fit eliminates most of the dispersion mismatch [56]. An inverse phase shift term e^{-jθ(k)}, constructed from the higher-order polynomial terms (excluding the constant and linear terms), is multiplied with all interference fringes prior to the Fourier transform. This effectively removes the contributions of the higher-order terms in equation 5.26. The result of numerical dispersion compensation was shown in figure 2.11 of chapter two.

5.5 Computation speed

In addition to the sensitivity fall-off, processing speed is another criterion used in assessing the reconstruction methods of SD-OCT. Real-time processing and display of images without hindering the acquisition rate are highly desirable. To measure the processing speed of the different algorithms, timing was done using the on-die high-performance counter.
This high-performance counter has a resolution that is inversely proportional to the processor clock speed; for a computer operating in the gigahertz range, the resolution is approximately in the nanosecond range.

While the NDFT can improve the sensitivity fall-off, its processing speed is slow, and thus it cannot support real-time imaging. The NUFFT can significantly improve the image processing speed while maintaining the same sensitivity fall-off as the NDFT. To demonstrate the speed advantage of the NUFFT, processing speed was measured on a Dell 530 with an Intel E4500 Core 2 Duo processor (2.2 GHz) and 2 GB of RAM, running Microsoft Windows XP SP3. The processing algorithms were written and compiled with Visual C++ using a single core for computation. The processing algorithms convert the raw data to an image, which includes the Fourier transform of the data with the interpolation methods previously mentioned, numerical dispersion compensation [80], logarithmic scale calculation, contrast and brightness adjustments, as well as display. The processing times were averaged over 100 B-mode frames to compensate for jitter in the frame rate, which results from using a non-real-time operating system such as Windows. The NUFFT is much more efficient than both cubic spline interpolation and the NDFT, as seen in figure 5.9.

Figure 5.9: 512 A-line frame processing time with numerical dispersion compensation. Platform: Intel Core 2 Duo E4500 at 2.2 GHz. Frame rate in frames per second is denoted in brackets.

Although figure 5.9 highlights the relative speed of the reconstruction algorithms, it does not represent the optimal performance. Both the data acquisition hardware and the CPU have idle time during a measurement and display cycle. As seen in figure 5.10, the CPU is idle during the measurement phase, and similarly the DAQ is inactive during processing.
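The averaging procedure used for these measurements is straightforward to reproduce. In Python, for example, `time.perf_counter` exposes the same high-resolution counter on Windows; the stand-in FFT "processing" step and the frame sizes below are purely illustrative.

```python
import time
import numpy as np

def average_frame_time(process, frames):
    """Average the per-frame processing time over many frames to smooth
    out frame-rate jitter from a non-real-time operating system."""
    t0 = time.perf_counter()          # high-resolution performance counter
    for frame in frames:
        process(frame)
    return (time.perf_counter() - t0) / len(frames)

# Illustrative stand-in for reconstruction: an FFT along each A-line of a
# 512 A-line x 1024 pixel frame (sizes chosen to match this system).
frames = [np.random.rand(512, 1024) for _ in range(20)]
seconds_per_frame = average_frame_time(lambda fr: np.fft.fft(fr, axis=1), frames)
frames_per_second = 1.0 / seconds_per_frame
```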
By modifying the program structure, the DAQ can be initiated to acquire data without CPU intervention, allowing the CPU to process data concurrently. The CPU itself is also capable of multitasking, as most current-day processors contain multiple cores that can be utilized to perform different tasks. Theoretically, if the algorithm can be fully parallelized, the computation time can be reduced by a factor equal to the number of cores. However, dividing the problem and recombining the results usually adds overhead to the computation, and as such, the actual performance increase observed with N processors is usually less than a factor of N [81].

Figure 5.10: Sequence of control in the SD-OCT system. a) Single-threaded control, where the system performs only one task at a time; b) multi-threaded control, where the system makes use of idle time that is otherwise wasted.

To accelerate the processing algorithm, the computing platform was replaced by a Dell Vostro 420 with an Intel Q9400 Core 2 Quad and 3 GB of memory. The processing algorithms were optimized for computation speed and compiled with the more efficient Intel C++ compiler, and were further accelerated by utilizing all four cores available in the machine. Once the frame grabber and data acquisition board are set up and started, they run without CPU intervention during a single frame. During acquisition, all four cores can be used for processing. This multi-processing scheme was realized with an application programming interface called OpenMP [82] and is illustrated in figure 5.11. The processing time evaluation was performed with and without numerical dispersion compensation; in the latter case, dispersion was compensated with a lens in the reference arm.

Figure 5.11: Acquisition and processing sequence

The processing times are plotted in Fig. 5.12.
It can be seen that the processing time of the NUFFT is comparable to linear interpolation and is approximately 30x and 130x faster than cubic spline interpolation and the NDFT respectively. This is one of the NUFFT's main advantages: it takes less computational time to produce a better image than cubic spline interpolation does. The largest savings in computation time come from the interpolation step. The cubic polynomial must be recalculated for every A-line in the frame, whereas the Gaussian interpolation kernel for the NUFFT is pre-calculated. The bulk of the calculation can therefore be performed once, outside the processing loop, which reduces the computational time by a significant factor. Image processing based on the NUFFT can achieve a speed comparable to systems using linear interpolation with an FFT. Furthermore, the NUFFT improves the sensitivity fall-off far more than linear interpolation can.

Figure 5.12: 512 A-line frame processing time with Intel Core 2 Quad Q9400 at 2.66 GHz and multithreading. Frame rate in frames per second is denoted in brackets.

5.6 Complex full range OCT

The Fourier transform (FT) is a central component of SD-OCT image reconstruction. An apparent disadvantage of the FT is its Hermitian symmetry property when dealing with real-valued inputs. This property results in a conjugate mirror image of the axial profile about the zero path length difference. If structures are present on both the negative and positive sides of the path difference, the mirror images overlap and obscure the real structures. In a standard SD-OCT system, the sample is positioned on one side of the path length difference such that the mirror images do not overlap. Standard OCT therefore utilizes only half of the FT results, since the other half is a redundant mirror image. This effectively reduces the possible imaging range by a factor of two.

Complex SD-OCT can double the imaging range by removing the conjugate mirror image.
It does so by constructing a complex-valued input array to the Fourier transform, usually by recovering the phase term of the detected electromagnetic wave. Numerous successful techniques have been proposed and demonstrated in the literature. All but one require extra hardware, such as a dual camera [83], an electro-optical phase modulator [84], a piezo-mirror [85], a fiber stretcher [86], or a 3x3 fiber coupler [87, 88]. The remaining method introduces a phase modulation to the interference fringes across a frame (typically 512 A-lines) by offsetting the scanning mirror [89, 90, 91], as shown in figure 5.13.

Figure 5.13: Illustration of the offset (s) needed for complex full range OCT. f denotes the focal length of the lens.

Typical performance of this complex SD-OCT method depends on a few criteria, namely the amount of phase shift incurred between successive A-lines and the phase stability of the system, as well as the ability to filter out the undesired terms in the complex calculation. The conjugate mirror terms are usually not completely removed by this method. The suppression ratio, defined as the amplitude ratio between the actual and mirror image terms, depends on the criteria listed above. A typical suppression ratio for a complex SD-OCT system using this method is in the range of 40 dB.

Preliminary work on complex SD-OCT imaging has been done using the current system. The measured suppression ratio is approximately 7 dB. Further improvement is expected with better alignment, along with a detailed analysis of the phase shifts and phase stability of the system. Figure 5.14, shown below, is an A-line produced by the above complex SD-OCT method.

Figure 5.14: Reconstructed axial profile using complex SD-OCT showing a conjugate mirror suppression of 7 dB.
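The Hermitian-symmetry problem, and its removal by a complex-valued input, are easy to verify numerically. A minimal numpy sketch follows; the synthetic fringe, depth bin, and array size are illustrative, and a real system would recover the complex fringe via the phase modulation described above rather than constructing it directly.

```python
import numpy as np

N = 1024
n = np.arange(N)
depth = 200   # depth bin of a single reflector (illustrative)

# Real-valued fringe: the FFT is Hermitian-symmetric, so a conjugate
# mirror peak of equal amplitude appears at the negative depth.
real_fringe = np.cos(2 * np.pi * depth * n / N)
A = np.abs(np.fft.fft(real_fringe))
mirror_ratio = A[depth] / A[-depth]          # 1.0: mirror as strong as signal

# Complex fringe (phase recovered): the conjugate mirror term vanishes,
# leaving the full doubled imaging range usable.
complex_fringe = np.exp(1j * 2 * np.pi * depth * n / N)
B = np.abs(np.fft.fft(complex_fringe))
suppression_db = 20 * np.log10(B[depth] / max(B[-depth], 1e-12))
```

For the ideal noiseless complex fringe the suppression is limited only by floating-point error; in a real phase-modulated system it is limited by phase stability, which is why measured values (40 dB typical, 7 dB here) are far lower.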
Chapter 6 System characterization and image demonstration

An SD-OCT system is characterized by a number of performance specifications, which allow the user to readily compare different systems objectively. Major specifications of SD-OCT systems include the sensitivity, sensitivity fall-off, imaging range, axial resolution, and processing speed. This chapter summarizes these performance characteristics of our SD-OCT system. Images taken with the current SD-OCT system are also presented.

6.1 Sensitivity

To measure the sensitivity of the SD-OCT system, a mirror placed at the focus of the probing beam was used as the sample. The mirror was placed at depth ∆l = 0.1 mm. Incident power on the sample was measured to be ~1.3 mW. With the galvanometer stationary, the light in the sample arm was attenuated with a fixed neutral density filter. Since the filter operates in both the forward and backward directions, its effect must be counted twice. The reference arm power was attenuated using a variable neutral density filter to avoid saturating the camera. The sensitivity can be calculated by [92]:

SNR = 20 log[ ft⁻¹(I)peak / std(ft⁻¹(I)noise) ] + 2 × 10 × O.D.filter  (6.1)

where ft⁻¹(I)peak is the highest value of the signal after the Fourier transform, ft⁻¹(I)noise is the noise floor away from the aforementioned signal peak, and O.D.filter is the optical density of the fixed neutral density filter in the sample arm. Using this method, the sensitivity of the system was measured to be approximately 96 dB. Typically, SD-OCT systems can realize a sensitivity of over 100 dB. Possible reasons for the lower sensitivity include misalignment and low fiber coupling efficiency. There is a fiber coupling loss when light is focused back into the fiber. Due to limited resources, simple plano-convex lenses were used to couple light back into the fiber.
Most state-of-the-art systems, however, employ specialized fiber couplers or an achromatic lens that can accommodate a broad wavelength range.

6.2 Sensitivity fall-off

To measure the sensitivity fall-off, 1000 A-lines were acquired from a mirror reflector in the sample arm at 17 positions along the imaging depth. The camera exposure time was 20 µs for each A-line. The interference fringes were processed using the NUFFT (Msp=3, R=2). With this choice of parameters for the NUFFT, the maximum fall-off is -12.5 dB. Table 6.1 below compares the sensitivity fall-off of current SD-OCT systems in the literature.

System | Wavelength | Fall-off @ max depth | Spectral resolution | Imaging range | Focusing lens
Our system | λ = 845 nm, Δλ = 45 nm | 12.5 dB | 0.101 nm | 1.73 mm | 100 mm (rapid rectilinear)
[93] | λ = 820 nm, Δλ = 30 nm | 25 dB | 0.11 nm | 1.54 mm | 100 mm (single achromatic)
[94] | λ = 870 nm, Δλ = 170 nm | 20 dB | 0.18 nm | 1.7 mm | 110 mm (two 200 mm lenses)
[95] | λ = 800 nm, Δλ = 130 nm | 17 dB | 0.076 nm | NA | 135 mm (objective lens)
[92] | λ = 890 nm, Δλ = 145 nm | 25 dB | NA | 1.95 mm | 100 mm F-theta lens (Sil-optics)
[65] | λ = 835 nm, Δλ = 45 nm | 14 dB | 0.0674 nm | 2.56 mm | 150 mm (single achromatic)

Table 6.1: Comparison of SD-OCT systems with a 14x14 µm² pixel camera at similar wavelengths. Spectral resolution is the smallest resolvable spectral width of the spectrometer with the CCD camera.

6.3 Axial resolution

Data gathered in the sensitivity fall-off measurement were also used to determine the axial resolution of the system in air. The mirror reflection produces a delta function which is convolved with the system response, as mentioned in chapter two. Each post-Fourier-transform axial profile was up-sampled via zero-padding to increase the resolution [56]. The system axial resolution at different depths is illustrated in figure 6.1. It can be seen that the axial resolution remains close to the source-limited resolution of 7.1 µm regardless of the reconstruction method chosen.
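The source-limited figure is consistent with the standard axial-resolution formula for a Gaussian-spectrum source, δz = (2 ln 2 / π) · λ₀² / Δλ. A quick check using the source parameters of our system from Table 6.1 (the formula assumes a Gaussian spectral shape):

```python
import numpy as np

lam0 = 845e-9    # center wavelength (m), from Table 6.1
dlam = 45e-9     # FWHM bandwidth (m), from Table 6.1
# Coherence-length-limited axial resolution for a Gaussian spectrum
dz = (2 * np.log(2) / np.pi) * lam0 ** 2 / dlam
print(dz)        # ~7.0e-06 m, i.e. about 7 um in air, close to the quoted 7.1 um
```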
There is a slight decrease in resolution at greater depths, which we attribute to the effects of misalignment and noise. The interference fringes near the maximum imaging range contain about two points per period; therefore, read-out noise and quantization noise from the camera have a greater effect on the signal at this depth.

Figure 6.1: Axial resolution with different processing methods

6.4 Imaging range

The imaging range was evaluated using two methods in our experiment. By recording the pixel number where the peak occurred in the successive measurements of the previous section, one can calculate the depth represented by each pixel for the known 100 µm spacing. The imaging range can then be determined by multiplying this per-pixel depth by 512, the total number of axial pixels. This method of measurement resulted in an imaging range of 1.7 mm.

Another method can also be used to determine the maximum imaging range. By placing a mirror in the sample arm and translating it with a micrometer stage, the "folding range" can be found. The imaging range is reached when the signal peak on the display disappears and is replaced by its aliased mirror, which can be identified by its movement opposite to that of the micrometer stage. The imaging range established by this method is 1.73 mm, in good agreement with the alternative measurement technique.

The imaging range can be increased by using a CCD with a greater number of pixels. If one were to sacrifice axial resolution, the imaging range could be further improved. However, practical imaging ranges are typically limited to a few millimetres by the absorption and scattering of light in the sample.

6.5 Processing speed

In our current system, the theoretical reconstruction speed of the processing algorithm is over 90k A-lines/s. It decreases by roughly half, to approximately 48k A-lines/s, when numerical dispersion compensation is used.
Limited by the line rate of the camera, the SD-OCT system can capture, process and display images at approximately 51k A-lines/s, which translates to a frame rate of approximately 100 frames per second. Hardware-based parallel processing is another popular method to reconstruct SD-OCT images in real time. Researchers have used field-programmable gate arrays [35] and digital signal processors [36] to realize speeds of 14k A-lines/s and 4k A-lines/s respectively. A recently developed parallel-processing-based SD-OCT system using linear interpolation generates images at a theoretical 80k A-lines/s [62].

System | Processor | Processing | A-line rate (including processing and display)
Our system | Intel Core 2 Quad Q9400 (2.66 GHz) | NUFFT | 90 kHz (hardware dispersion compensation); 48 kHz (numerical dispersion compensated); 51 kHz (demonstrated, camera limited)
[62] | Intel Xeon X5355 (2.66 GHz) | Linear interpolation + FFT | 80 kHz (theoretical); 20 kHz (demonstrated, source limited)
[35] | Xilinx Virtex-4 FPGA (1536 logic blocks) | Linear interpolation + FFT | 14 kHz (demonstrated)
[36] | Texas Instruments C6701 programmable DSP (132 MHz) | NA | 4 kHz (demonstrated, including Doppler OCT)

Table 6.2: Processing speed of comparable SD-OCT systems using specialized acceleration

6.6 Overall performance

The overall performance of our SD-OCT system has both a speed and a sensitivity advantage compared to similar systems in the literature. Our demonstrated system can achieve a processing-limited A-line rate of 90 kHz with the NUFFT, with superior image quality compared to the fastest system to date, which uses linear interpolation. Although other systems using the NDFT have similar sensitivity fall-off performance, their processing speed is nearly 700 times slower due to a matrix multiplication step of O(N²) complexity. Our system simultaneously achieves both the image quality of a system using the NDFT [65] and the processing speed of systems using linear interpolation [62].
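The O(N²) cost behind that 700x gap is the defining difference between the NDFT and the FFT: the NDFT is a dense matrix-vector product, while the FFT factorizes the same transform into O(N log N) stages. On a uniform grid the two agree to machine precision, as the following numpy sketch (sizes illustrative) confirms:

```python
import numpy as np

N = 512
n = np.arange(N)
# Dense DFT matrix: applying it costs N^2 multiply-adds per A-line,
# which is the NDFT's cost when the sample grid is nonuniform.
W = np.exp(-2j * np.pi * np.outer(n, n) / N)
x = np.random.default_rng(1).standard_normal(N)
X_matrix = W @ x            # O(N^2) matrix-vector product
X_fft = np.fft.fft(x)       # O(N log N) fast transform
# On a uniform grid both compute the identical transform.
assert np.allclose(X_matrix, X_fft)
```

The NUFFT keeps the FFT's asymptotic cost while tolerating nonuniform input, which is why it matches the linear-interpolation systems in speed and the NDFT in image quality.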
6.7 Image demonstration

Several sample images are shown in this section. The image size is 512x512 pixels and the imaging depth in the y direction is 1.73 mm. The images were taken with a 20 µs exposure time. Each scale bar represents 0.5 mm.

Figure 6.2: In-vivo OCT image of the human distal phalanx at the palmar surface (fingertip)

Figure 6.3: In-vivo OCT image of the human distal phalanx at the dorsal surface

Figure 6.4: In-vivo OCT image of the human fingernail bed, showing the transition from nail to skin

Figure 6.5: Ex-vivo image of bovine omasum

Figure 6.6: Ex-vivo image of chicken skin

Figure 6.7: OCT image of an onion; some cellular structure can be observed

Figure 6.8: OCT image of a lettuce leaf

Figure 6.9: Ex-vivo lateral scan image of tiger shrimp across the 2nd and 3rd abdominal segments (tergum)

Figure 6.10: Ex-vivo image of tiger shrimp with shell removed

Chapter 7 Ultrasound and optical coherence tomography

As discussed in chapter one, different imaging modalities have varying resolutions and imaging ranges. Most importantly, the image contrast mechanism is based on different physical or chemical properties of the sample. Medical ultrasound has been an established method for imaging at the organ level [1, 2]. High-frequency ultrasound [26] can produce images with resolution in the micrometer range, rivalling that of OCT. Ultrasound differs from OCT in its contrast mechanism in that it creates an image based on the mechanical properties of the sample: interfaces between two layers differing in rigidity appear in an ultrasound image. OCT, on the other hand, measures the optical properties of an object and is able to detect changes in the index of refraction. By combining the two modalities, both properties of the sample can be investigated.

In collaboration with Prof. Rohling, Narges Afsham and Leo Pan, a method of combining the two modalities was realized.
By placing the ultrasound probe and OCT sample arm side by side, a lateral translational motion of the sample can be used to produce a B-scan image. The novel part of this project is the alignment of the two probes, which was primarily the responsibility of the other students.

7.1 Synchronization

Synchronization is key to producing images that can be co-registered. The 50 MHz high-frequency ultrasound machine (Episcan 2000I) used in this project was a commercial model, normally deployed in a clinical setting and used by medical professionals, which limited the possible modifications. The ultrasound system uses an RS-232 serial interface to communicate with a motorized linear stage (Zaber T-LSR150B), which produced the translation needed for a B-scan image. Digital control signals from the RS-232 port were redirected to the frame grabber board of the OCT system as a trigger to start the acquisition of an A-line with the CCD. A notable difference from the original system was the lack of a trigger signal to generate the control waveform for the galvanometer; this function was instead performed by the RS-232 control signal from the Episcan ultrasound.

Figure 7.1: Synchronization scheme in the combined HF-ultrasound SD-OCT system

7.2 Alignment

Aligning the two probes ultimately requires knowledge of their positions and orientations in three-dimensional space. By determining the orientation of the probes, one can adjust for the offset and tilt until they are within the misalignment tolerance. To assist in resolving the orientation of the two probes, a small phantom with different slopes and steps was designed by Leo Pan and is illustrated in figure 7.2. An iterative MATLAB program, written by Narges, was used to determine the location and direction of the probe based on the known dimensions of the phantom.
Figure 7.2: Left – 3D view of the alignment phantom; right – an ultrasound image of the phantom [Courtesy of Narges Afsham]

7.3 Co-registered images

After careful alignment to within the tolerable range, experiments were conducted with the SD-OCT and ultrasound system. Due to the established use of OCT and HF ultrasound in ophthalmology, bovine eyes were chosen as the subject of investigation. Ex-vivo bovine eyes were imaged within 48 hours post-mortem. Figure 7.3 below shows the SD-OCT image of the structure of the cornea. The curvature of the eye can be seen and the thickness of the cornea can be estimated.

Figure 7.3: Ex-vivo OCT image of bovine cornea, 48 hours post-mortem, taken at 50 µs exposure time

The OCT image was further processed to be co-registered with the ultrasound image, as shown in figure 7.4. Notice the fine structural line in both images of the cornea. After co-registration, the images are overlapped and displayed in different colors.

Figure 7.4: OCT and ultrasound images of an ex-vivo bovine eye; bottom – co-registered result of the two modalities; both axes represent pixel number [Courtesy of Narges Afsham]

Chapter 8 Conclusion

Optical coherence tomography is an imaging modality that provides cross-sectional images with micrometer resolution. In recent years, SD-OCT has experienced a large increase in attention due to its advantages over TD-OCT in imaging speed as well as signal-to-noise ratio. One major drawback of SD-OCT, however, has been the axial depth dependent sensitivity fall-off: image quality rapidly degrades in pixels representing deeper locations of the sample. Post-processing time is another area of weakness in SD-OCT, where processing time is typically the bottleneck in an acquisition cycle. This thesis aims to address both problems, and to provide a general solution that improves them simultaneously.
This chapter covers the significance of the work and provides a summary of the current work and potential future work.

8.1 Significance of work

At present, many hardware- and software-based solutions to the sensitivity fall-off and processing speed limitations have been proposed and demonstrated. However, many of them require elaborate processing algorithms or custom-manufactured optical parts, which limit their scope of use and deployment in the clinical setting.

Although the importance of the spectrometer design to the sensitivity fall-off in SD-OCT systems has been identified, very little published work focuses on the optical design of the spectrometer. This project developed a systematic way to design the optics and pinpointed the trade-offs in selecting optical components for improving the sensitivity fall-off. The experimental work further strengthens the correlation between the optical design and the sensitivity fall-off. A proof of concept was demonstrated by contrasting optical designs in the spectrometer arm, one of which resulted in a better sensitivity fall-off.

Present-day high-speed, real-time display of OCT images has been achieved using simple processing algorithms. These reconstruction methods introduce image artifacts and exhibit worse sensitivity fall-offs. Although many other processing methods have been developed with improved image quality, these algorithms compromise the processing speed and are hence limited to non-real-time operation. This thesis presents the use of the NUFFT in an SD-OCT system, a processing method that is novel to the OCT community. As demonstrated with our SD-OCT hardware, the adaptation of the NUFFT improved the sensitivity fall-off by >5 dB while simultaneously achieving a real-time processing rate (>90k A-lines/s, or 100 frames/s).
With the successful demonstration of a real-time SD-OCT system, this thesis has achieved the two main goals of the project, which were to improve the SD-OCT sensitivity fall-off and the processing speed.

8.2 Future work and improvements

In the present implementation, the limitation of using off-the-shelf optics restricted the optical design of the spectrometer to a simple rapid rectilinear lens. Professional lens designers have the freedom to use different lens materials and to customize lens elements. A more in-depth study of the optical design of the spectrometer could be performed with these variables. Furthermore, due to its widespread adoption, the current spectrometer layout has become the de facto standard in the SD-OCT community. However, to our knowledge no research performed thus far supports this layout as the optimal one. The effect of the overall spectrometer layout and detection scheme on the performance of an SD-OCT system therefore remains unknown, and is a potentially interesting area of research that could be pursued.

With the focus on hardware improvement, the software implementation of processing algorithms is sometimes neglected. As demonstrated by this thesis, the reconstruction method has a great effect on the sensitivity fall-off and image quality. The NUFFT algorithm presented here is only an adaptation of one of the many NUFFT variants. Experimentation with different variations of the NUFFT should be performed in order to assess their performance in SD-OCT reconstruction. Other extensions to OCT, such as Doppler flow detection, polarization sensitivity and complex full range detection, are all based on the Fourier transform of the data for reconstruction. Applying the NUFFT to these extensions could lead to improvements such as an increased extinction ratio in complex OCT or faster flow detection in Doppler measurements. Further studies should be performed to assess the effects of using the NUFFT.
As with all optical systems, alignment and calibration can lead directly to performance improvements. Alignment methodologies for an SD-OCT system could be developed to improve the coupling efficiency and phase stability of the system in a systematic manner. In addition, calibration methods for the spectrometer, and the spectrometer's accuracy, should also be considered when designing an SD-OCT system.

Recently, it came to our attention that the galvanometer has an undocumented lag time of 80-100 µs. Since the response time of the galvanometer is greater than the minimum exposure time of the camera, it is possible that a few A-lines are captured before the mirror starts to move. Although synchronization of the camera and the analog control waveform has been achieved, the movement of the mirror lags by the delay of the galvanometer controller board. A current workaround is to introduce an equivalent delay for the camera. However, a more robust system could be achieved by using the galvanometer position signal as feedback. Future implementation of this synchronization scheme could increase the SNR and lateral resolution through better synchronization.

Bibliography

[1] T.L. Szabo, Diagnostic Ultrasound Imaging: Inside Out, Burlington, MA: Elsevier, 2004.

[2] W.R. Hedrick, D.L. Hykes, D.E. Starchman, Ultrasound Physics and Instrumentation, 4th ed., St. Louis: Mosby, 2005.

[3] C.J. Pavlin, K. Harasiewicz, M.D. Sherar, and F.S. Foster, "Clinical Use of Ultrasound Biomicroscopy," Ophthalmology, vol. 98, pp. 287-295, 1991.

[4] T. Seiler and T. Bende, Noninvasive Diagnostic Techniques in Ophthalmology, New York, NY: Springer-Verlag, 1990.

[5] E.M. Haacke, R.F. Brown, M. Thompson, R. Venkatesan, Magnetic Resonance Imaging: Physical Principles and Sequence Design, New York: J. Wiley & Sons, 1999.

[6] G.T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd ed., Springer, 2009.

[7] J. G. Fujimoto, M. E. Brezinski, G.
J. Tearney, S. A. Boppart, B. E. Bouma, M. R. Hee, J. F. Southern, and E. A. Swanson, “Optical biopsy and imaging using optical coherence tomography,” Nature Med, vol. 1, pp. 970–972, 1995.  [8]  M.E. Brezinski, G.J. Tearney, B.E. Bouma, J.A. Izatt, M.R. Hee, E.A. Swanson, J.F. Southern, J.G. Fujimoto, “Optical Coherence Tomography for Optical Biopsy: Properties and Demonstration of Vascular Pathology” Circulation, vol. 93:6, pp.1206-1213, 1996.  [9]  M. Vogt and H. Ermert, “ In Vivo Ultrasound Biomicroscopy of Skin: Spectral System Characteristics and Inverse Filtering Optimization, ” IEEE Transactions on UFFC, vol. 54:8, pp. 1551-1559, 2007.  [10]  J.B. Pawley and B. R. Masters, “Handbook of Biological Confocal Microscopy”, J. Biomed. Opt., vol 13, 029902, 2008.  [11]  D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J.G. Fujimoto, “Optical coherence tomography,” Science, vol. 254, pp.1178–1181, 1991.  [12]  A. F. Fercher, C. K. Hitzenberger, W. Drexler, G. Kamp, H. Sattmann, "In-vivo optical coherence tomography," Am. J. Ophthalmol., vol. 116, pp. 113-115, 1993.  Image  125  [13]  E. A. Swanson, J. A. Izatt, M. R. Hee, D. Huang, C. P. Lin, J. S. Schuman, C. A. Puliafito, and J. G. Fujimoto, "In vivo retinal imaging by optical coherence tomography," Opt. Lett., vol. 18, pp. 1864-1866, 1993.  [14]  J.G. Fujimoto and W. Drexler, Optical Coherence Tomography: Technology and Applications, 1st ed. , Springer, 2008.  [15]  J M Schmitt, A Knuttel, M Yadlowsky and M A Eckhaus, “Optical-coherence tomography of a dense tissue: statistics of attenuation and backscattering,” Phys. Med. Biol., vol. 39, no. 10, pp.1705, 1994.  [16]  G.J.Tearney, M.E. Brezinski, J.F. Southern, B.E. Bouma, S.A. Boppart, J.G.Fujimoto, “Optical biopsy in human gastrointestinal tissue using optical coherence tomography,” Am J Gastroenterol., vol. 92, no.10, pp.1800-1804, 1997.  [17]  C. Pitris, A. Goodman, S.A. 
Boppart, J.J. Libus, J.G Fujimoto, M.E. Brezinski, "High-Resolution Imaging of Gynecologic Neoplasms Using Optical Coherence Tomography," Obstetrics & Gynecology, vol 93, no.1, pp. 135-139, 1999.  [18]  C. Pitris,, M.E. Brezinski, B.E. Bouma, G.J.Tearney, J.F. Southern and J.G. Fujimoto, "High Resolution Imaging of the Upper Respiratory Tract with Optical Coherence Tomography," Am. J. Respir. Crit. Care Med., vol 157, no.5, pp.16401644, 1998.  [19]  G.J.Tearney, M.E. Brezinski, J.F. Southern, B.E. Bouma, S.A. Boppart, J.G.Fujimoto, “Optical Biopsy in Human Urologic Tissue Using Optical Coherence Tomography,” J. Urology, vol. 157, no.5, pp.1915-1919, 1997.  [20]  U. L. Mueller-Lisse, M. Bader, M. Bauer, E. Engelram, Y. Hocaoglu, M. Püls, O.A. Meissner, G. Babaryka, R.Sroka, C.G. Stief, M.F. Reiser and U.G. MuellerLisse, “Optical coherence tomography of the upper urinary tract: Review of initial experience ex vivo and in vivo,” Medical Laser Application, vol. 25, pp. 44-52, 2010.  [21]  M.E. Brezinski, G.J.Tearney, B.E. Bouma, S.A. Boppart, M.R. Hee, E.A Swanson, J.F. Southern, J.G. Fujimoto, “Imaging of coronary artery microstructure (in vitro) with optical coherence tomography, ” Am. J. Cardiology, vol. 77, no.1, pp.92-93, 1996.  [22]  C. Kelley, A. Nesbitt, "LightLab Imaging Announces FDA Clearance of C7XRTM Coronary OCT Products in the United States," Lightlabs Imaging. May 5, 2010. [online] Available: http://www.lightlabimaging.com/downloads/news_050510.pdf  [23]  J.M. Schmitt, "Optical Coherence Tomography (OCT): A Review," IEEE Journal of selected topics in Quantum Electronics, vol.5, no.4, pp.1025-1215, 1999.  126  [24]  R. Leitgeb, C. Hitzenberger, and Adolf Fercher, "Performance of fourier domain vs. time domain optical coherence tomography," Opt. Express, vol. 11, pp. 889894. 2003.  [25]  S. Tang, T.B. Krasieva, Z.P. Chen, B.J. Tromberg, "Combined multiphoton microscopy and optical coherence tomography using a 12 femtosecond, broadband source", J. 
of Biomed. Opt., vol. 11, pp. Art. No. 020502, 2006.  [26]  D.Z. Reinstein, R.H. Silverman, J. Coleman, "High-Frequency Ultrasound Measurement of the Thickness of the Corneal Epithelium," Refractive & Corneal Surgery, vol. 9, pp. 385-388, 1993.  [27]  J.O. Schenk and M.E. Brezinsk, "Ultrasound induced improvement in optical coherence tomography (OCT) resolution," PNAS, vol. 99, no. 5, pp. 9761-9764, 2002.  [28]  C. Huang, B. Liu, and M.E. Brezinski, "Ultrasound-enhanced optical coherence tomography: improved penetration and resolution," J. Opt. Soc. Am. A, vol. 25, pp. 938-946, 2008.  [29]  Z. Hu and A.M. Rollins, "Fourier domain optical coherence tomography with a linear-in-wavenumber spectrometer," Opt. Lett., vol. 32, pp.3525-3527. 2007.  [30]  J.F. de Boer, B. Cense, B.H. Park, M.C. Pierce, G.J. Tearney, and B.E. Bouma, "Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography," Opt. Lett., vol. 28, pp. 2067-2069. 2003.  [31]  S. H. Yun, G. Tearney, J. de Boer, and B. Bouma, "Motion artifacts in optical coherence tomography with frequency-domain ranging," Opt. Express, vol. 12, pp. 2977-2998, 2004.  [32] M.J. Seiler, B. Rao, R.B. Aramant, L. Yu, Q. Wang, E. Kitayama, S. Pham, F. Yan, Z. Chen, and H.S. Keirstea, "Three-dimensional optical coherence tomography imaging of retinal sheet implants in live rats, " Journal of Neuroscience Method, vol. 188, no. 2, pp. 250-257, 2010 [33]  J. Kalkman, A. V. Bykov, D. J. Faber, and T. G. van Leeuwen, "Multiple and dependent scattering effects in Doppler optical coherence tomography," Opt. Express, vol. 18, pp. 3883-3892, 2010.  [34]  Y. Zhang, X. Li, L. Wei, K. Wang, Z. Ding, and G. Shi, "Time-domain interpolation for Fourier-domain optical coherence tomography," Opt. Lett., vol. 34, pp. 1849-1851, 2009.  127  [35]  T. E. Ustun, N. V. Iftimia, R. D. Ferguson, and D. X. 
Hammer, “Real-time processing for Fourier domain optical coherence tomography using a field programmable gate array,” Rev. Sci. Instrum., vol.79, art no. 114301. 2008.  [36]  A. W. Schaefer, J. J. Reynolds, D. L. Marks, and S. A. Boppart, “Real-time digital signal processing-based optical coherence tomography and doppler optical coherence tomography,” IEEE Trans. Biomed. Eng., vol. 51, pp. 186-190. 2004.  [37]  Andrew M. Rollins and Joseph A. Izatt, "Optimal interferometer designs for optical coherence tomography," Opt. Lett. vol. 24, pp.1484-1486. 1999.  [38]  G.Hausler and M. W. Lindner, “ Coherence radar and spectral radar – new tools for dermatological diagnosis,” J. Biomed. Opt., vol. 3, no.1, pp. 21–31, 1998.  [39]  N. Nassif, B. Cense, B. Park, M. Pierce, S. Yun, B. Bouma, G. Tearney, T. Chen, and J. de Boer, "In vivo high-resolution video-rate spectral-domain optical coherence tomography of the human retina and optic nerve," Opt. Express, vol. 12, pp. 367-376, 2004.  [40]  M. Wojtkowski, V. Srinivasan, T. Ko, J.G. Fujimoto, A. Kowalczyk, and J. Duker, "Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation," Opt. Express., vol. 12, pp. 2404-2422, 2004.  [41]  P. F. Moulton, "Spectroscopic and laser characteristics of Ti:Al2O3," J. Opt. Soc. Am. B, vol. 3, pp.125-13, 1986.  [42]  V. Shidlovski, "Superluminescent Diodes.Short overview of device operation principles and performance parameters." Superlum Diodes Ltd. 2004 [online] Available: http://www.superlumdiodes.com/pdf/sld_overview.pdf  [43]  Z. Hu, Y. Pan, and A. M. Rollins, "Analytical model of spectrometer-based twobeam spectral interferometry," Appl. Opt., vol. 46, pp. 8499-8505, 2007.  [44]  B.H. Walker, Optical Engineering Fundamentals, 2nd ed. , SPIE Press, 2009.  [45]  R. Fischer, B.Tadic-Galeb and P. Yoder, Optical System Design, 2nd ed. , McGraw-Hill Professional, 2008.  [46]  K.L. Moore and A.F. 
Dalley, Clinically Oriented Anatomy, Lippincot Williams & Wilkins, 1999.  [47]  Beckman Laser Institute, UC Irvine [online] Available: http://dosi.bli.uci.edu/userfiles/image/basis_spectra.jpg  128  [48]  Thorlabs “Fiber Polarization Controller Manual,” 1998 [online] Available: http://www.thorlabs.com/Thorcat/0400/0482-D01.pdf  [49]  B.A. Saleh and M.C. Teich, Fundamentals of Photonics, Wiley-Interscience, 2007.  [50]  T.S. Ralston; D.L. Marks, F. Kamalabadi, S.A. Boppart, “Deconvolution methods for mitigation of transverse blurring in optical coherence tomography,” IEEE Trans. Image Process. vol 14. pp. 1254-1264, 2005.  [51]  Y. Yasuno, J. Sugisaka, Y. Sando, Y. Nakamura, S. Makita, M. Itoh, and T. Yatagai, "Non-iterative numerical method for laterally superresolving Fourier domain optical coherence tomography," Opt. Express, vol 14, pp. 1006-1020. 2006.  [52]  J.M. Schmitt, S.L. Lee, and K.M.Yung, “An optical coherence microscope with enhanced resolving power in thick tissue,” Optics Communications, vol. 142, pp. 203-207, 1997.  [53]  Z. Ding, H. Ren, Y. Zhao, J. S. Nelson, and Z. Chen, "High-resolution optical coherence tomography over a large depth range with an axicon lens," Opt. Lett., vol. 27, pp. 243-245, 2002.  [54]  R. A. Leitgeb, M. Villiger, A. H. Bachmann, L. Steinmann, and T. Lasser, "Extended focus depth for Fourier domain optical coherence microscopy," Opt. Lett., vol. 31, pp. 2450-2452, 2006.  [55]  PCI Special Interest Group, PCI Local Bus Specification, revision 2.2, ” 1998 [online] Available: http://www.ece.mtu.edu/faculty/btdavis/courses/mtu_ee3173_f04/papers/PCI_22. pdf  [56]  B. Cense, N. Nassif, T. Chen, M. Pierce, S.H. Yun, B. Park, B.E. Bouma, G.J. Tearney, and J. de Boer, "Ultrahigh-resolution high-speed retinal imaging using spectral-domain optical coherence tomography," Optics Express, vol 12, pp. 2435-2447, 2004.  [57]  S. Yun, G. Tearney, B. Bouma, B. 
Park, and Johannes de Boer, "High-speed spectral-domain optical coherence tomography at 1.3 µm wavelength," Opt. Express, vol. 11, pp. 3598-3604, 2003.  [58]  Z.Hu, A.M. Rollins, “Optimum design of spectrometer in FD-OCT,” Proceedings of SPIE, vol. 6429, art no. 642925, 2007.  [59]  R. Kingslake, R.B Johnson, Lens Design Fundamentals, Academic Press, 1978.  129  [60]  J.R. Meyer-Arendt, Introduction to Classical and Modern Optics, Benjamin Cummings, 1995.  [61]  B. Rao, “Optical coherence tomography and its applications in ophthalmology,” Ph.D. Dissertation, University of California, Irvine, CA, 2008  [62]  G. Liu, J. Zhang, L. Yu, T. Xie, and Z. Chen, "Real-time polarization-sensitive optical coherence tomography data processing with parallel computing," Appl. Opt., vol. 48, pp. 6365-6370, 2009.  [63]  E. Maeland, “ On the comparison of interpolation methods, ” IEEE Transaction on Medical Imaging, vol 7. no.3, 213-217, 1988.  [64]  H.Hou and H.C. Andrews, “Cubic splines for image interpolation and digital filtering, ” IEEE Transaction on acoustics, speech, and signal processing, vol. 26. no.6, pp. 508-516, 1978.  [65]  K. Wang, Z. Ding, T. Wu, C. Wang, J. Meng, M. Chen, and L. Xu, "Development of a non-uniform discrete Fourier transform based high speed spectral domain optical coherence tomography system," Opt. Express, vol 17, pp. 12121-12131, 2009.  [66]  G.E. Sarty, R. Bennett, and R. W. Cox, "Direct Reconstruction of Non-Cartesian k-Space Data Using a Nonuniform Fast Fourier Transform," Magnetic Resonance in Medicine, vol. 45, pp. 908-915, 2001.  [67]  S. De Francesco and A.M.F. da Silva, "Efficient NUFFT-based direct Fourier algorithm for fan beam CT reconstruction," Proceedings of SPIE, vol. 5370, pp.666-677, 2004.  [68]  M. Bronstein, A. Bronstein and M. Zibulevsky, "Reconstruction in Diffraction Ultrasound Tomography Using Non-Uniform FFT," IEEE trans. medical imaging. vol.21, no. 11, pp.1395-1401, 2002.  [69]  K. Wang, G. Huang, Z. Ding, L. 
Wang, “High-speed Spectral-Domain Optical Coherence Tomography at 830nm Wavelength, ” ," Proceedings of SPIE, vol. 6826, art no. 68260A, 2007.  [70]  G.D. Knott, Interpolating Cubic Splines, Birkhauser Boston, 1999.  [71]  A. Dutt and V. Rokhlin, “Fast Fourier transforms for nonequispaced data,” SIAM J. Sci. Comp., vol. 14, no. 6, pp. 1368-1393, 1993.  [72]  J. Lee and L. Greengard, “The type 3 nonuniform FFT and its application, ” J. Computational Physics, vol. 206, iss. 1, 1-5, 2005.  130  [73]  J.A. Fessler, B.P. Sutton, "Nonuniform fast Fourier transforms using min-max Transactions on Signal Processing, vol. 51, no.2, 560-574, 2003.  [74]  L. Greengard and J. Lee, "Accelerating the Nonuniform Fast Fourier Transform," SIAM Review, vol. 46, no. 3, pp. 443-454, 2004.  [75]  Y. Rolain, J.Schoukens, and G. Vandersteen, “Signal Reconstruction for NonEquidistant Finite Length Sample Sets: A “KIS” Approach,” IEEE trans. on Instrumentation and Measurement, vol 47, no.5, pp.1046-1052, 1998.  [76]  C. Dorrer, N. Belabas, J.P. Likforman, and M. Joffre, "Spectral resolution and sampling issues in Fourier-transform spectral interferometry," J. Opt. Soc. Am. B vol.17, pp.1795-1802, 2000.  [77]  M. Choma, M.V. Sarunic, C. Yang, and J. Izatt, "Sensitivity advantage of swept source and Fourier domain optical coherence tomography," Opt. Express, vol. 11, pp.2183-2189, 2003.  [78]  D.C. Lee, J. Xu, M.V. Sarunic, and O.L. Moritz, “Fourier Domain Optical Coherence Tomography as a Noninvasive Means for In Vivo Detection of Retinal Degeneration in Xenopus laevis Tadpoles,” Investigative Ophthalmology and Visual Science, vol 51, no.2, pp.1066-1070, 2010.  [79]  L.W. Herndon, J.S. Weizer, S.S. Stinnett, “ Central Corneal Thickness as a Risk Factor for Advanced Glaucoma Damage, ” Archives of Ophthalmology, vol. 122, no. 1, pp.17-21, 2004.  [80]  M. Mujat, B. H. Park, B. Cense, T. C. Chen, and J. F. 
de Boer, "Autocalibration of spectral-domain optical coherence tomography spectrometers for in vivo quantitative retinal nerve fiber layer birefringence determination," J. Biomed. Opt., vol. 12, art no. 041205, 2007.  [81]  G.Amdahl, “The validity of the single processor approach to achieving large-scale computing capabilities,” in Processings of AFIPS Spring Joint Computer Conference (AFIPS, 196), vol. 30, pp. 483-485, 1967  [82]  OpenMP Architecture Review Board, “The openMP API specification for parallel programming,” [online] Avaiable: <http://www.openmp.org/>  [83]  A.H. Bachmann, R. Michaely, T. Lasser, and R.A. Leitgeb, "Dual beam heterodyne Fourier domain optical coherence tomography," Opt. Express, vol. 15, pp. 9254-9266, 2007.  131  [84]  J. Zhang, J.S. Nelson, and Z. Chen, "Removal of a mirror image and enhancement of the signal-to-noise ratio in Fourier-domain optical coherence tomography using an electro-optic phase modulator," Opt. Lett, vol. 30, pp.147-149, 2005.  [85]  Yoshiaki Yasuno, Shuichi Makita, Takashi Endo, Gouki Aoki, Masahide Itoh, and Toyohiko Yatagai, "Simultaneous B-M-mode scanning method for real-time full-range Fourier domain optical coherence tomography," Appl. Opt., vol. 45, pp.1861-1865, 2006.  [86]  S. Vergnole, G. Lamouche, and M.L. Dufour, "Artifact removal in Fourierdomain optical coherence tomography with a piezoelectric fiber stretcher," Opt. Lett., vol. 33, pp. 732-734, 2008.  [87]  M.V. Sarunic, M.A. Choma, C. Yang, and J.A. Izatt, "Instantaneous complex conjugate resolved spectral domain and swept-source OCT using 3x3 fiber couplers," Opt. Express, vol. 13, pp. 957-967, 2005.  [88]  M.V. Sarunic, B.E. Applegate, and J.A. Izatt, "Real-time quadrature projection complex conjugate resolved Fourier domain optical coherence tomography," Opt. Lett., vol. 31, pp. 2426-2428, 2006.  [89]  B. Baumann, M. Pircher, E. Götzinger, and C.K. 
Hitzenberger, "Full range complex spectral domain optical coherence tomography without additional phase shifters," Opt. Express, vol. 15, pp. 13375-13387, 2007.  [90]  L.An and R.K. Wang, "Use of a scanner to modulate spatial interferograms for in vivo full-range Fourier-domain optical coherence tomography," Opt. Lett., vol. 32, pp. 3423-3425, 2007.  [91]  R.A. Leitgeb, R. Michaely, T. Lasser, and S.C. Sekhar, "Complex ambiguity-free Fourier domain optical coherence tomography through transverse scanning," Opt. Lett., vol. 32, pp. 3453-3455, 2007.  [92]  J. Liu, “Spectral/Fourier Domain Doppler Optical Coherence Tomography in the Rodent Retina,” Master Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2008.  [93]  M. Kankaria, “Design and Construction of a Fast Spectrometer for Fourier Domain Optical Coherence Tomography,” Master Thesis, University of Texas at Arlington, Arlington, TX, USA, 2006.  [94]  S. Makita, “High-Speed Spectral-Domain Optical Coherence Tomography and In Vivo Human Eye Imaging,” Doctoral Dissertation, University of Tsukuba, Ibaraki, Japan, 2007  132  [95]  A.H. Bachmann, R.A. Leitgeb, T. Lasser, “Heterodyne Fourier domain optical coherence tomography for full range probing with high axial resolution”, Optics Express, vol 14, no. 4, pp. 1487-1496, 2006.  133  Appendix A - Regarding the use of animal tissues The use of animal tissues in this thesis is for the purpose of demonstrating the imaging ability of the developed SD-OCT system. No additional studies were preformed on the animal tissues and no live animals were kept in the lab. The animal tissues, other than the bovine eyes, were acquired as consumables from local supermarkets. The bovine eyes used for the combined High-frequency ultrasound-OCT system were acquired from a local slaughterhouse. The in-vivo images of the human finger were obtained from a volunteering from our lab. No human subjects were recruited from outside the laboratory. 
All imaging procedures were non-invasive and only required the sample to be placed on a microscope stage.
