On the Design of a Combined Raman and Interferometric Scattering High Resolution Microscope

by

Ashton Christy

B.Sc. Hons., The University of British Columbia, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Chemistry)

The University of British Columbia (Vancouver)

July 2016

© Ashton Christy, 2016

Abstract

Contemporary imaging science needs label-free methods of microscopy that resolve morphological and chemical information in complex materials on a sub-micron length scale. Many of the most commonly-used techniques require substantial adulteration of samples, and thus are hindered in their utility for in vivo or time-resolved studies.

This thesis introduces a new approach that integrates a confocal Raman microscope with interferometric scattering, or iSCAT. The former technique is well-reported in the literature; iSCAT, however, is relatively novel. The principles of iSCAT, developed in the last few years, have established a platform that provides superior resolving power and signal contrast compared with other optical techniques. Our novel approach integrates wide-field iSCAT microscopy acquired at video rate with point-by-point confocal Raman spectroscopy.

After first providing a brief overview of contemporary methods in microscopy and the challenges they present, this thesis discusses the basics of iSCAT, and the design and development of the instrument that unites this new technique with confocal Raman microscopy. A discussion of design challenges follows. Next is a description of the instrument's user-end capabilities, followed by a brief exploration of future prospects.

Provided throughout the text are results illustrating the capability of the instrument. These demonstrate how much potential the combination of iSCAT and Raman holds for characterizing complex materials, as well as the precision with which the instrument can do so. Wide-field images, 100 µm square with 200 nm resolution, are sampled at 45 frames per second. The integrated Raman probe provides label-free, highly reproducible chemical information without sample degradation. Together, these two data sets provide insights into covariance between morphology and chemistry, all with minimal sample preparation.

Preface

The original idea for the instrument presented in this thesis was conceived with the partnership of the pulp and paper industry, as well as the pioneering work of Dr. Philipp Kukura at Oxford University. The instrument in its various configurations (Chapters 3-6) was largely designed by me, with assistance in design refinement and construction provided by Dr. Qifeng Li of Tianjin University. The instrument's user interface, outlined in Chapter 7, was coded by me.

The research presented in this thesis provided the foundation for the following conference presentation: A. Christy, N. Tavassoli, A. Bain, L. Melo, and E. R. Grant, "Wide-Field Confocal Interferometric Backscattering (iSCAT)-Raman Microscopy," in Optics in the Life Sciences, OSA Technical Digest (online) (Optical Society of America, 2015), paper NM4C.4. I prepared and delivered the presentation at the conference in April 2015, and the text published online was drafted by Dr. Edward Grant, with input from all co-authors.

Pulp and paper samples were provided by FPInnovations and Canfor Corp. Aerosol samples were provided by the lab of Dr. Allan Bertram, Department of Chemistry, University of British Columbia.
Brain tissue samples were provided by the lab of Dr. Shernaz Bamji, Department of Cellular and Physiological Sciences, University of British Columbia. Mineral samples were provided by Dr. Gregory Dipple, Department of Earth, Ocean & Atmospheric Sciences, University of British Columbia. Plant specimens were provided by the University of British Columbia Herbarium.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
List of Equations
Glossary
Acknowledgments

1 Conceptual and Theoretical Background
  1.1 Contemporary Optical Microscopy
    1.1.1 Brightfield Microscopy
    1.1.2 Darkfield Microscopy
    1.1.3 Interferometric Microscopy
    1.1.4 Disadvantages of Conventional Optical Techniques
  1.2 Super-Resolution Microscopy
    1.2.1 Scanning Probe Techniques
    1.2.2 Fluorescence Techniques
  1.3 Interferometric Scattering Microscopy (iSCAT) As an Alternative

2 Experimental and Technical Background
  2.1 iSCAT: Interferometric Scattering Microscopy
  2.2 Confocal Raman Microscopy
  2.3 A Novel Combination

3 Early Stages of Design
  3.1 Early Work
    3.1.1 Non-Confocal Raman Instruments
  3.2 Classification Models and Chemometrics
  3.3 Original Design Ideas
    3.3.1 Designing an iSCAT System
  3.4 First Drafts
  3.5 Constructing the Instrument
    3.5.1 Hardware
    3.5.2 Software

4 The Original Design
  4.1 Description of the Instrument
    4.1.1 Description of Optical Train
    4.1.2 List of Major Components
  4.2 Using the Instrument
  4.3 Limitations and the Need for More
    4.3.1 Resolution
    4.3.2 Time Investments
    4.3.3 Moving Forward

5 Upgrading to Wide-Field
  5.1 Wide-Field iSCAT: How and Why
    5.1.1 Advantages of Wide-Field Imaging
    5.1.2 The Acousto-Optic Beam Deflector
    5.1.3 Implementing the Wide-Field Channel
    5.1.4 iSCAT Beating

6 The Finalized Design
  6.1 Description of Upgraded Optical Train
  6.2 List of Upgraded Components
    6.2.1 Brightfield and Köhler Illumination
    6.2.2 Light Source
    6.2.3 Optical Path
  6.3 Challenges and Limitations
    6.3.1 Raman Power Limitations
    6.3.2 Raman Signal Limitations
    6.3.3 iSCAT Resolution Limit
    6.3.4 The iSCAT Background

7 Designing the User Interface
  7.1 Communication Problems
  7.2 Constructing a Unified Interface
  7.3 Functionality of the User Interface
    7.3.1 iSCAT Features
    7.3.2 Raman Features
    7.3.3 Other Features

8 Data Collection and Processing Techniques
  8.1 Data Collection: Procedures and Practices
    8.1.1 iSCAT Data Collection
    8.1.2 Raman Data Collection
  8.2 Common Processing Techniques
    8.2.1 Processing iSCAT Data
    8.2.2 Processing Raman Signals

9 Future Prospects
  9.1 Moving to Confocal iSCAT: A Clearer Picture
    9.1.1 Challenges Facing Wide-Field iSCAT
  9.2 Designing a Confocal iSCAT Channel
    9.2.1 Choosing the Right Detector
    9.2.2 Confocal Channel Implementation
    9.2.3 Reconstructing the Confocal Image
  9.3 Looking Forward

Bibliography

A Raman Map Reader Program: RMR.VI
  A.1 Raman Map Reader EXE.VI
  A.2 DWT Single.VI
  A.3 SDVM Matlab.VI

B MATLAB Code for Waveform Simulation

List of Figures

Figure 2.1: A simplified outline of a single-channel iSCAT experiment.
Figure 2.2: A simplified outline of a confocal experiment.
Figure 3.1: Raman and ATR-FTIR spectra of a tissue paper sample.
Figure 3.2: Classification models for tensile strength of bleached pulp.
Figure 3.3: First Draft of the Single-Color Instrument.
Figure 4.1: Diagram of the Instrument's Original, Single-Channel Configuration.
Figure 4.2: A 20x20 µm, 60x60 step single-channel iSCAT map of sea spray aerosol particles.
Figure 4.3: A 5x5 µm, 100x100 step single-channel iSCAT map of sea spray aerosol particles.
Figure 4.4: Raman spectrum of polystyrene bead.
Figure 4.5: Raman spectrum of polypropylene resin.
Figure 4.6: Data set showing thin slices of plasticized pig brain mounted in a TEM grid.
Figure 5.1: iSCAT image of Microsphaera vaccinii.
Figure 5.2: iSCAT images of thin slices of plasticized pig brain mounted in a TEM grid.
Figure 5.3: MATLAB simulations of AOD raster patterns.
Figure 6.1: Diagram of the Instrument's Final, Wide-Field Configuration.
Figure 6.2: iSCAT images of Schistidium papillosum.
Figure 6.3: Emission spectrum of QTH10 tungsten-halogen lamp.
Figure 6.4: Schematic of Köhler illumination setup.
Figure 6.5: Two Raman spectra of a marine aerosol sample, demonstrating cosmic ray removal.
Figure 6.6: Raw and background-removed spectra of poly(methyl methacrylate) (PMMA).
Figure 6.7: iSCAT image of a 30 nm gold nanoparticle.
Figure 6.8: Optically expanded iSCAT images of a 1951 USAF resolution test chart.
Figure 6.9: Schematic of the 1951 USAF resolution test chart.
Figure 7.1: Example of user interface block diagram.
Figure 7.2: Example of hardware communication block diagram.
Figure 8.1: iSCAT image of graphene deposited on nickel.
Figure 8.2: Processed Raman spectrum of graphene deposited on nickel.
Figure 8.3: iSCAT and Raman data showing magnesite adsorbed to a polystyrene bead.
Figure 9.1: Irregular illumination in an unfocused iSCAT sample.
Figure A.1: RMR.VI in use, showing polystyrene.
Figure A.2: RMR.VI in use, showing magnesite.
Figure B.1: Results of MATLAB AOD raster pattern simulations, and corresponding experimental observations.

List of Equations

1.1 Abbe Diffraction Limit
2.1 Intensity from an Inverted Microscope
2.2 Darkfield Microscopy Intensity
2.3 Snell's Law
2.4 iSCAT Intensity
2.5 iSCAT Signal Contrast
2.6 Mie Scattering Cross-Section
3.1 Differential Amplifier Output
3.2 LOG OUTPUT on Nirvana Photoreceiver
3.3 SIGNAL MONITOR on Nirvana Photoreceiver
4.1 Abbe Diffraction Limit for the Instrument
5.1 Acousto-optic Deflection Angle
6.1 iSCAT Intensity
8.1 Generalized Multivariate Regression
Glossary

Acronyms

50:50 BS: 50:50 Beamsplitter Cube, an optical component that uses two equilateral triangular prisms fused together on their hypotenuses to form a cube; this configuration reflects 50% of incident light at an angle of 45°, and allows 50% of incident light to pass through the cube unperturbed. They can be oriented in either a right- or left-handed configuration.

AOD: Acousto-Optic Deflector, an optical hardware component that uses a radio frequency electrical input to piezoelectrically induce shear waves in a tellurium dioxide (TeO2) crystal; light passing through the crystal is deflected along a rastering pattern.

AFM: Atomic Force Microscopy, a microscopic surface analysis technique that functions in a manner analogous to a record player; a needle is dragged over a surface, and vibrations caused by surface topology are used to reconstruct a surface image.

ATR-FTIR: Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopy, a widely used suite of spectroscopic techniques commonly bundled into a single instrument. The ATR portion of the instrument uses a crystal with a high refractive index to send an infrared evanescent wave into a sample. The reflected light is then collected into the FTIR portion of the instrument, where the data are converted from the frequency to the time domain, and are collected with a very high sensitivity. [18]

CCD: Charge-Coupled Device, a type of high-sensitivity photodetector that consists of an array of metal-oxide-semiconductor pixels, and that allows accumulated photoelectric charge to be shifted between pixels to an external amplifier. A more specialized, sensitive, and expensive alternative to a CMOS sensor.

CMOS: Complementary Metal-Oxide-Semiconductor sensor, a type of photodetector that consists of an array of metal-oxide-semiconductor pixels, each with an integrated amplifier unit. A cheaper and less sensitive alternative to a CCD.

CRT: Cathode Ray Tube, a type of screen that uses an electron beam to illuminate phosphorescent pixels to display an image.

DBS: Dichroic Beamsplitter, an optical band-rejection filter that uses a dichroic optical coating to split light of a certain frequency range away from its incident beam path.

DWT: Discrete Wavelet Transform, a method of multivariate analysis that decomposes a signal into a set of orthogonal wavelets. Some wavelets (highest and lowest frequencies) are discarded, and the signal is reconstructed from the remaining wavelets.

FPS: Frames per Second, a measure of the data retrieval rate from an image sensor.

HeNe: Helium-Neon laser, a type of gas-phase laser that uses a 10:1 mix of helium and neon as its gain medium.

HNF: Holographic Notch Filter, an optical component that uses a holographically etched surface to stop a very narrow range of frequencies.

iSCAT: Interferometric Scattering Microscopy, a technique that relies on the interference between reflected and scattered light to observe refractive index morphology.

LCF: Laser Clean-up Filter, an optical bandpass filter that uses an optical coating to transmit light over a very narrow range of wavelengths, used to filter out extraneous wavelengths generated by a laser.

NA: Numerical Aperture, a number that quantifies the resolving power of a microscope objective.

NBSK: Northern Bleached Softwood Kraft, a standard type of pulp produced from boreal softwood (i.e. coniferous) trees, used to reinforce writing paper and manufacture kraft and tissue paper. [49]
NDF: Neutral Density Filter, an optical filter that uses smoked glass to attenuate light passing through it.

NEP: Noise-equivalent Power, a measure of a photodetector's minimum detection threshold, integrated over half a second. That is to say, it is the power below which a signal is indistinguishable from random noise.

NIR: Near-Infrared Spectroscopy, a spectroscopic technique using light in the near-infrared region, typically between 700 and 2500 nm. Its main advantage over conventional (mid-range) infrared spectroscopy is its penetration depth; NIR is commonly used in medicine.

NSERC: Natural Sciences and Engineering Research Council of Canada.

N-BK7: Nachfolgematerials Bor-Kronglas 7 (lit. "Successor to Boro-Crown Glass 7"), a proprietary type of high-quality borosilicate glass, developed and manufactured by Schott AG (Mainz, Germany), used in a wide variety of optical applications.

OSC-SVM: Orthogonal Signal Correction-Support Vector Machine, a pair of advanced multivariate statistical techniques used to build a classification model for complex sets of variables. These are designed to remove orthogonal data from the model to simplify it and yield better results, something which a Partial Least-Squares Regression cannot do. [90]

PCA: Principal Component Analysis, a multivariate statistical technique used to rank orthonormal components of a transformed data set by their contribution to the set's overall variance.

PLS: Partial Least-Squares Regression, a multivariate statistical technique used to build a classification model for complex sets of variables, where correlations may not be readily apparent.

PMMA: poly(methyl methacrylate), a transparent polymer with a wide variety of uses. Commonly known as acrylic, Plexiglas, or Lucite.

PSF: Point Spread Function, a broadening or blurring of in-focus point sources on an image. Mathematically, it is the optical domain transfer function (as opposed to optical frequency), and is usually determined by the system rather than the sample.

ROI: Region of Interest.

RMSEP: Root-Mean-Square Error of Prediction, a measure of the uncertainty of a classification model's predictions; as such, the lowest possible value is preferred. [87] In this way it is the converse of the more commonly used Coefficient of Determination (R^2).

SDVM: Second-Derivative Variance Minimization, an iterative data processing technique used to minimize the variance between a data signal and a background signal; it functions by successively subtracting the second derivatives of the signals from one another until their variance reaches some arbitrary threshold, then removing the computed background components from the data signal. [84]

SERS: Surface-Enhanced Raman Spectroscopy, a Raman technique that employs surface adsorption to enhance the Raman effect of a sample by up to ten orders of magnitude, although the exact mechanism is still debated in the literature.

SiPM: Silicon Photomultiplier, a type of photodetector consisting of an array of avalanche photodiodes on a silicon platform. SiPM detectors are exceptionally sensitive, able to detect single photons.

SITK: Scientific Imaging Toolkit, a proprietary collection of LabVIEW software, developed by RCubed Software (Princeton, NJ, USA), used for communicating with various scientific cameras.

TEM: Transverse Electromagnetic Mode, a laser beam mode in which neither the electric nor magnetic field oscillates along the direction of light propagation. A 00 subscript (TEM00) indicates that the beam is propagating in a purely Gaussian shape.
TOGA: Template-Oriented Genetic Algorithm, a type of evolutionary multivariate processing technique that uses a fixed set of predictor variables to guide its iterative calculations, in order to minimize their variance. [45]

VCO: Voltage-Controlled Oscillator, a hardware component that uses an AC voltage input to control the oscillation frequency of its signal output.

VI: Virtual Instrument, a proprietary file format used for LabVIEW software.

Acknowledgments

• Dr. Ed Grant
• Dr. Qifeng Li
• Najmeh Tavassoli
• Luke Melo
• The rest of the lab crew, past and present - Hossein, Julian, Mahyad, Evan, Jamie, Jachin, Zhiwen, Alison, Markus, J.P.
• The Canada Foundation for Innovation
• The Natural Sciences and Engineering Research Council of Canada

Chapter 1
Conceptual and Theoretical Background

This chapter will serve to frame the new instrument that is the focus of this thesis, in the context of a brief overview of contemporary microscopy and its limitations.

1.1 Contemporary Optical Microscopy

Optical microscopy is nothing new; the optical microscope dates back to at least 1609, when Galileo Galilei developed an instrument he called an occhiolino, consisting of two simple lenses. Since then, microscopes have offered us a view into what our eyes alone cannot see.

For most of the history of the microscope, designs tended to be quite simple, with a minimum number of lenses and simple incandescent illumination. Over time, as our understanding of optics progressed, new techniques were developed. This section outlines some of the most basic approaches to light microscopy.

1.1.1 Brightfield Microscopy

The simplest of all microscopies is termed brightfield microscopy. It simply consists of a lamp transmitting incoherent light through a sample, and simple lenses or an objective to expand and collect the transmitted light. This transillumination approach requires that samples be transparent (and ideally colored); thus, a sample appears dark on a bright background, hence the name of the technique. In modern applications, brightfield microscopy is often considered a "quick and dirty" technique, providing fast, reliable, and reproducible results, at the cost of low contrast and resolution. [1, 2]

1.1.2 Darkfield Microscopy

An alternative illumination approach, termed darkfield microscopy, uses an inverted configuration, where the light illuminates the sample through the objective, as opposed to through the sample. Thus, instead of collecting the transmitted light, the objective collects backscattered light. This causes a sample to appear bright on a dark background. Darkfield microscopy can provide better detail than brightfield because it does not collect shadows, but it typically cannot image a sample in its totality, and requires more specialized sample preparation than does brightfield. [1, 3]

1.1.3 Interferometric Microscopy

A number of techniques have been developed to enhance the contrast of transilluminated brightfield microscopy. These techniques exploit interference between transmitted and scattered light that has passed through the sample. The interference arises due to a shift in the phase of scattered light. Transmitted light does not interact with a sample, while scattered light does; thus, the optical path length of the scattered light is longer than that of transmitted light, causing the former to move slightly out of phase of the latter. This interference amplifies image contrast.
The first such technique, known as phase-contrast microscopy, uses a series of annular optics to exploit the phase shift caused by diffraction in a sample. Ring-shaped light is first focused onto a sample; light scattered (and phase-shifted) by the sample is collected as-is, while light transmitted by the sample is phase-aligned to either 0° or 180° relative to the scattered light, then attenuated. This phase modulation increases image contrast by introducing constructive interference between the transmitted and scattered light when they recombine at the image plane. Selectively attenuating the transmitted light further increases contrast. [4, 5] Thus, an image produced with a phase-contrast microscope is much more sharply resolved than that produced with an analogous brightfield microscope.

Though phase-contrast microscopes are relatively simple and robust, they are limited by two factors: first, samples must be optically transparent. Second, the diffraction caused by the annular illumination often creates halos of light around samples, which can make interpreting an image somewhat more complicated. [4, 6]

An alternative approach to increasing phase contrast is Differential Interference Contrast microscopy (DIC), which uses a specialized prism to separate polarized illumination into two orthogonal rays. The separated rays enter the sample at slightly different positions, so after interacting with the sample, their phases are slightly different. The rays are recombined in another prism, and subsequently interfere. At points where there is little phase shift between the rays, there is constructive interference, and thus image brightening; conversely, where the phase shift is large, due to sample morphology, there is destructive interference and image darkening. In effect, samples appear to cast shadows at an oblique angle. [4, 6, 7]

As with phase-contrast microscopy, imaging opaque samples is not possible with a typical DIC microscope (by substantially altering the optical train of a DIC microscope, opaque samples can indeed be imaged, though this is not commonly done [4]). Additionally, the orientation of the recombination prism can have a substantial effect on the observed image, by rotating the angle of the sample "shadows"; thus, some features may become more pronounced at a different prism orientation, while others become less visible. [4, 6]

1.1.4 Disadvantages of Conventional Optical Techniques

Though these conventional light microscopy techniques are routinely used, they have serious limitations, probably the most widely recognized of which is the Abbe diffraction limit, shown below.

    d = \frac{\lambda}{2 n \sin\theta} = \frac{\lambda}{2\,\mathrm{NA}}    (1.1)

where λ is the illumination wavelength, n is the index of refraction of the sample medium, θ is the maximum half angle of the light cone exiting the objective, and Numerical Aperture (NA) is defined as NA ≡ n sin θ. In simplest terms, the limit can be understood as follows: one cannot measure something smaller than one's instrument of measure. In the context of optical microscopy, this effectively means that objects smaller than the wavelength of illumination cannot be resolved.
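To put Eq. (1.1) in context, the short sketch below evaluates the limit numerically. It is an illustrative aside rather than part of the original text: 632.8 nm is the common red HeNe line (the laser family chosen in Ch. 3.3), and the numerical aperture of 1.4 is a hypothetical high-NA oil-immersion objective, not the instrument's actual specification (the instrument's own limit appears later, in Eq. (4.1)).

```python
# Minimal sketch: evaluate the Abbe diffraction limit of Eq. (1.1).
# The wavelength and NA below are illustrative assumptions, not the
# instrument's actual parameters (those appear later, in Eq. (4.1)).

def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable feature size, d = lambda / (2 * NA), in nm."""
    return wavelength_nm / (2.0 * numerical_aperture)

if __name__ == "__main__":
    wavelength = 632.8   # nm, common red HeNe line (assumed)
    na = 1.4             # hypothetical high-NA oil-immersion objective
    print(f"d = {abbe_limit(wavelength, na):.0f} nm")  # prints ~226 nm
```

With these assumed values the limit comes out near 230 nm, i.e. on the order of the ~200 nm wide-field resolution quoted in the abstract.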
Another limitation of conventional optical techniques is their dynamic range of resolution. Specifically with regards to inverted microscopy, the detected intensity is dependent on the diameter of the sample scatterer to the sixth power (D^6; see Ch. 2.1 for more information). With the Abbe diffraction limit in mind, this dynamic range scaling means that it can be difficult to observe fine features with specificity.

These two limitations mean that conventional techniques are not well suited to imaging the nanoscale samples that are often of interest to today's researcher.

1.2 Super-Resolution Microscopy

Recently, new developments in optical microscopy techniques have broken the Abbe diffraction limit. There are a number of ways this is accomplished, such as exploiting molecular fluorescence, or imaging using a nanoscale scanning probe. [8-12]

1.2.1 Scanning Probe Techniques

One approach to surpassing the Abbe diffraction limit is to decrease the size of the probe, thereby improving the angular resolution of the instrument beyond that of pure diffracted light. By feeding all the sample information through a nanoscopic conductor - a metal tip - the sources of the information can be localized with great precision.

Tip-Enhanced Raman Spectroscopy

Tip-Enhanced Raman Spectroscopy (TERS) is a popular technique to extract both chemical and topographical information from a wide range of samples. The probe itself, a nanoscopic gold tip, performs Surface-Enhanced Raman Spectroscopy (SERS); its point is small enough that its evanescent field enhances the Raman effect in samples by several orders of magnitude. [13, 14] The tip is kept a very short distance from the sample surface, and is illuminated by a laser; the most efficient illumination geometry is in-line, that is to say, focused onto the tip through the sample. [14] The tip is then rastered across the sample surface, providing highly localized Raman spectra and effectively constructing an Atomic Force Microscopy (AFM) image. TERS can provide spatial resolutions in the tens of nanometers, but the Raman spectra it generates are often difficult to interpret without a reference collected using more conventional means. [13, 15]

Infrared Nano-Imaging (Nano-IR)

Infrared absorption is an alternative approach to imaging samples with a scanning probe. The instrumental design for a nano-IR experiment is very similar in principle to that of a TERS experiment, and in fact relies on a similar surface enhancement effect. [16, 17] Infrared absorption is label-free, provides complementary data to Raman spectroscopy, [18] and is often used to investigate metallic samples. [16, 19-21]

Disadvantages

Techniques that rely on metallic nanoscopic probes suffer one major drawback, and that is the sheer complexity of the setup they require. Nanoscopic probe tips are extremely delicate and prone to contamination, and thus require special accommodation. Isolating signals from these probes requires a very complex optical train, and generating a sample image requires a substantial time commitment. [14, 17, 21] Further, scanning probe techniques are designed for surface analysis, so imaging the interior of samples can be challenging. [13, 14]

1.2.2 Fluorescence Techniques

Another approach to breaking the Abbe diffraction limit is to resolve light sources within the sample (i.e. fluorophores), rather than relying on external light to illuminate the sample.
Direct detection of fluorophores affords greater certainty of their position within the sample; thus, fluorescing sample features can be localized with greater precision than the Abbe diffraction limit would seem to allow.

The earliest effective super-resolution fluorescence technique involves selective depletion of fluorophores, and was awarded the Nobel prize in Chemistry in 2014. This depletion is generally referred to as the Reversible Saturable Optical Linear Fluorescence Transitions (RESOLFT) principle, and its perhaps best-known application is Stimulated Emission Depletion Microscopy (STED). [12, 22-25]

STED images samples with two laser pulses: one circular pulse to excite fluorophores within a sample, shortly followed by a second annular pulse (of a different color of light) to quench the excited fluorophores, with the exception of those in the small central area of the ring. Thus, only those fluorophores in this small central area are observed. The two beams are rastered over a sample, so that a super-resolved image of the sample can be built, with resolutions in the dozens of nanometers. [22, 23, 25]

Disadvantages

Using techniques such as STED requires that a sample contain spatially distributed photoswitchable fluorophores. These fluorophores must be introduced to samples, either as dyes or as labels. [12, 26] Though, with the proliferation of highly functionalizable fluorescent proteins, this has become a routine procedure in most biological labs, labeling still presents several challenges. Label specificity and density are the most notable, as well as concerns about adulteration of the sample itself by the introduction of large proteins functionalized with antibodies. [27]

Another difficulty that often arises is photobleaching; that is to say, repeated exposure to high intensity light may alter the chemical structure of fluorophores (e.g. the denaturing of a fluorescent protein), which prevents them from fluorescing. [10, 26] This is an especially relevant concern for stochastic methods, which collect sequences of fluorescence images.

Note that photobleaching is a separate phenomenon from quenching, as exploited by STED. Quenching is an electronic effect, where electrons are forcibly transferred to lower energy levels without fluorescing (quenching forces an excited fluorophore to undergo stimulated emission, rather than fluorescence), and no chemical changes take place. [22] Nonetheless, photobleaching does remain a problem with STED and RESOLFT techniques, albeit one of less concern than with stochastic techniques. [24]

1.3 Interferometric Scattering Microscopy (iSCAT) As an Alternative

Given the pitfalls of conventional optical and fluorescence techniques, a new technique has recently been developed to offer an alternative, not necessarily to replace, but to complement these techniques. The new technique is known as Interferometric Scattering Microscopy. Essentially, the technique exploits the interference between backscattered and reflected light from a sample to probe the sample's refractive index morphology. This is similar to the phase-contrast techniques mentioned above, but with the noted difference that iSCAT is an inverted microscopy technique, like darkfield, and does not use transillumination like phase-contrast and DIC microscopies. The next chapter will outline the technical aspects of the methodology, but its advantages bear mentioning here.

One of iSCAT's biggest advantages is its much greater resolution than other optical techniques.
In brief, iSCAT's dynamic range of resolution depends on a scatterer's diameter to the third power (D^3), as opposed to the sixth power (D^6). The reasons are discussed in more detail in Ch. 2.1. In effect, this difference affords iSCAT a much greater ability to image smaller samples than techniques such as darkfield imaging.

A further advantage is that iSCAT is entirely label-free, i.e. it requires no sample adulteration. This advantage is shared with conventional optical techniques. iSCAT is also entirely non-destructive, and photobleaching is not a concern. When combined with its increased resolution, iSCAT is well-suited to provide new insights into biological samples that may not be particularly compatible with other techniques.

When compared specifically with phase-contrast techniques, iSCAT can image both optically transparent and opaque samples without any alteration of the optical train. This allows for a much larger variety of samples to be imaged.

Chapter 2
Experimental and Technical Background

This chapter will outline the fundamental physical principles of two key methodologies, Interferometric Scattering Microscopy (iSCAT) and confocal Raman microscopy. Because this thesis focuses on the instrument that unites the two, as opposed to the methodologies themselves, this chapter will not delve into great detail; rather, it will provide a review of the underlying principles sufficient for the reader to better understand the instrumental development outlined in the next several chapters.

2.1 iSCAT: Interferometric Scattering Microscopy

In their seminal 2012 paper, Kukura and Ortega-Arroyo describe Interferometric Scattering Microscopy as "[yielding] extremes in sensitivity and speed that have until recently been deemed far beyond reach in fluorescence-free optical microscopy." [28]

Generally, iSCAT has a setup nearly identical to darkfield imaging; that is to say, it relies on an inverted objective. This differs from phase-contrast techniques; although they rely on similar interference, the latter use transmitted light. In an inverted regime, the detected intensity I generally depends on the sum of the reflected and backscattered light fields, E_reflection and E_scattering respectively. I can be calculated as follows:

    I = \frac{1}{2} c \varepsilon \left| E_{\text{reflection}} + E_{\text{scattering}} \right|^2 = \left| E_{\text{incident}} \right|^2 \left\{ r^2 + |s|^2 - 2 r |s| \cos\varphi \right\}    (2.1)

where c and ε are the speed of light and permittivity of the medium, respectively, r and s are the reflection and scattering amplitudes, respectively, E_incident is the incident light field (from the illumination source), and ϕ is the phase difference between the reflected and scattered light. (In a standard transillumination scheme, used for brightfield or phase-contrast microscopies, the reflection field is replaced with a background field, E_background; otherwise the math in Eq. (2.1) remains the same.) [28, 29] The right-most term in Eq. (2.1) is obtained by expanding the reflection and scattering field terms, assuming that they interfere. [29] In this formulation, it is easier to separate iSCAT from darkfield microscopy. The three resulting terms represent reflected, scattered, and interfering light, respectively.

In the darkfield regime, scattered light is the dominant contribution to the detected intensity I; thus, Eq. (2.1) becomes:

    I_{\text{darkfield}} = \left| E_{\text{incident}} \right|^2 |s|^2    (2.2)

This simple scattering-based formulation is observed in darkfield images, in which the sample appears light and the background appears dark.
However, iSCAT relies not on the pure backscattering from a sample, but on the interference between that backscattering and the reflected light from the sample.

The eponymous interference arises because light that is backscattered from a sample has a longer optical path length than light that is simply reflected from the sample surface; thus, it is phase-shifted with respect to reflected light. The difference in path length, manifested as a phase difference, is due to refraction within the sample. Snell's law (Eq. (2.3)) describes how light bends when passing through an interface between materials with different indices of refraction.

    \frac{\sin\theta_1}{\sin\theta_2} = \frac{n_1}{n_2}    (2.3)

The ratio of indices of refraction (n) of the sample determines the angle of refraction (θ), and thus the path length. Since backscattering may occur throughout a sample, and an inhomogeneous sample will have a pronounced refractive index morphology, backscattered light collected by the inverted objective will likewise have an inhomogeneous set of phases. When reflected light from the flat glass coverslip (or from the sample itself) is taken as a reference, the interference between said reflection and the backscattered light will provide an accurate image of the sample's refractive index morphology, visible as variations in detected intensity.

Returning to Eq. (2.1), it can be assumed that most iSCAT samples are weak scatterers, such that r ≫ s; thus, the interference term will outweigh the pure scattering term. The resulting iSCAT image will therefore look more like a brightfield image, with dark features on a light background. Eq. (2.1) becomes:

    I_{\text{iSCAT}} = \left| E_{\text{incident}} \right|^2 \left\{ r^2 - 2 r |s| \cos\varphi \right\}    (2.4)

Thus, it can be seen that the interference will be destructive when the scattered and reflected light are in phase (i.e. when ϕ = 0 or 2π), and constructive in the converse case (when ϕ = π). For strongly scattering samples, i.e. when r ≤ s, the |s|^2 term in (2.1) becomes non-negligible, and the iSCAT image inverts; light features will be observed on a dark background. [28-30]

In either case, the iSCAT signal contrast C can be determined as follows:

    C = \frac{I_{\text{signal}}}{I_{\text{background}}} = \frac{\text{interference}}{\text{reflection}} = \frac{2 |s| \cos\varphi}{r}    (2.5)

Note that the contrast is linearly dependent on the scattering amplitude |s|. Per Mie theory, this in turn means that the dynamic range of feature sizes visible to iSCAT scales with the third power of the diameter (D^3). The equation below (Eq. (2.6)) shows the wavelength-dependent scattering cross-section of an object, as determined by Mie theory (Mie scattering occurs when the wavelength of light and the diameter of the scatterer are on the same order of length; the actual mathematics involved are beyond the scope of this thesis [31]):

    s(\lambda) = \frac{\pi D_p^3}{2} Q_{\text{ext}}(m; \alpha)    (2.6)

where D_p is the diameter of the scatterer and Q_ext is the extinction efficiency, the latter dependent on the refractive index and the dimensionless size parameter α ≡ πD/λ.
Recall from Eq.(2.4) that iSCAT’s intensity depends on |s| in the interference term; thus, the dy-namic range of its resolution scales with D3.3[28, 29, 31] In practical terms, thisgives iSCAT a much broader detection range, allowing it to observe much smallerfeatures than darkfield microscopy, though both are ultimately Abbe diffraction-limited.Figure 2.1 shows a schematic of a basic single-channel iSCAT detector. Thedetector, in this case, would be something like a photodiode or a Silicon Photomul-2Mie scattering occurs when the wavelength of light and diameter of the scatterer are on the sameorder of length. The actual mathematics involved are beyond the scope of this thesis. [31]3Brightfield and phase-contrast microscopies’ dynamic ranges also scale with D6.11Figure 2.2: A simplified outline of a confocal experiment, showing in-focus(red) and out-of-focus (green) light.tiplier (SiPM). The interference pattern would be measured by the detector (alongwith the reflection background; see Eq. (2.4)); the interference would vary overthe spatial domain of the sample, and by collecting a series of points across thesample, an image could be reconstructed.2.2 Confocal Raman MicroscopyThe basic principle of confocality is evident in its name: a focus point in a sampleis also focused through a pinhole. Thus, in a confocal microscope, out-of-focuslight from the sample is largely - though not completely - excluded by the pinhole,and the in-focus image can be collected with much higher contrast than with aconventional optical microscope. [32]The basic outline of a confocal microscope can be seen in Fig. 2.2. Light isfocused through a fixed objective onto a sample; light returning from the samplethrough the objective is fed through a confocal pinhole and into a detector. Theposition of the pinhole must be tuned so that it collects exclusively in-focus light.The graph on Fig. 2.2 shows the detected signal with respect to the position of ori-gin within the sample (along the z-axis). At lower z values, i.e. z values below thefocal length of the objective, light is largely excluded by the pinhole, and relatively12little signal reaches the detector. When the z value is equal to the objective’s focallength, the light is focused through the pinhole, and collected in its entirety by thedetector. Moving the sample is all that is required to image different areas of thesample, since the focal length and position of the objective are fixed.In order to optimize confocal methodology, care must be taken to choose theright optical components. This includes an appropriate objective and pinhole; Nu-merical Aperture (NA) and pinhole diameter both affect performance. The choiceof objective affects the shape and position of the light cone in the sample, which inturn determines the size of the focus point, and thus, the pinhole. [33, 34]There is a plethora of literature regarding confocal Raman microscopy, espe-cially relating to its uses in the life sciences. [11, 35–41] One of its big advantagesis its ability to perform in vivo imaging of biological tissues. [42–44] This is partlydue to its purely optical, non-invasive nature, as well as its minimal sample prepa-ration. The other key advantage of Raman spectroscopy that lends itself to in vivoimaging, compared to similar non-invasive techniques such as Fourier TransformInfrared (FTIR), is that water heavily absorbs infrared light, and transmits visiblelight. 
Figure 2.2: A simplified outline of a confocal experiment, showing in-focus (red) and out-of-focus (green) light.

2.2 Confocal Raman Microscopy

The basic principle of confocality is evident in its name: a focus point in a sample is also focused through a pinhole. Thus, in a confocal microscope, out-of-focus light from the sample is largely - though not completely - excluded by the pinhole, and the in-focus image can be collected with much higher contrast than with a conventional optical microscope. [32]

The basic outline of a confocal microscope can be seen in Fig. 2.2. Light is focused through a fixed objective onto a sample; light returning from the sample through the objective is fed through a confocal pinhole and into a detector. The position of the pinhole must be tuned so that it collects exclusively in-focus light. The graph in Fig. 2.2 shows the detected signal with respect to the position of origin within the sample (along the z-axis). At lower z values, i.e. z values below the focal length of the objective, light is largely excluded by the pinhole, and relatively little signal reaches the detector. When the z value is equal to the objective's focal length, the light is focused through the pinhole, and collected in its entirety by the detector. Moving the sample is all that is required to image different areas of the sample, since the focal length and position of the objective are fixed.

In order to optimize confocal methodology, care must be taken to choose the right optical components. This includes an appropriate objective and pinhole; Numerical Aperture (NA) and pinhole diameter both affect performance. The choice of objective affects the shape and position of the light cone in the sample, which in turn determines the size of the focus point, and thus, the pinhole. [33, 34]

There is a plethora of literature regarding confocal Raman microscopy, especially relating to its uses in the life sciences. [11, 35-41] One of its big advantages is its ability to perform in vivo imaging of biological tissues. [42-44] This is partly due to its purely optical, non-invasive nature, as well as its minimal sample preparation. The other key advantage of Raman spectroscopy that lends itself to in vivo imaging, compared to similar non-invasive techniques such as Fourier Transform Infrared (FTIR), is that water heavily absorbs infrared light, and transmits visible light. Thus, Raman spectroscopy conducted in the visible range is better able to characterize biological samples that consist mainly of water. [38, 43]

2.3 A Novel Combination

Merging confocal Raman microscopy with iSCAT microscopy presents a number of unique opportunities. There are two main disadvantages to Raman microscopy. Firstly, not every material is susceptible to the Raman effect, and secondly, most materials that are susceptible scatter very weakly and are difficult to observe and quantify, even using a confocal microscope.

By pairing confocal Raman microscopy with iSCAT, these limitations are not necessarily resolved, but they become less of a concern. Rather than necessitating adulterative Raman enhancement methods such as Surface-Enhanced Raman Spectroscopy (SERS), iSCAT opens up a second window on a sample, where a researcher can observe refractive index morphology that is often complementary to chemical morphology. Thus, the non-invasive, label-free approach is preserved, and two data sets can be obtained from a single sample with a single instrument.

Another advantage, specifically arising from the two complementary data sets, is the ability to instantaneously cross-reference iSCAT and Raman data. iSCAT imaging may reveal information about a sample that is invisible to Raman microscopy, and vice versa. Further, if there is a region of particular interest that is visible in an iSCAT image, the confocal Raman probe can be directed to that specific region, obviating the need for lengthy trial-and-error approaches to collecting meaningful Raman data.

The process of combining these two techniques into a single working instrument is outlined in the next several chapters.

Chapter 3
Early Stages of Design

From a sketch on the proverbial cocktail napkin to a working reality: this chapter describes the development and construction of the first ever microscope to combine confocal Raman spectroscopy and Interferometric Scattering Microscopy (iSCAT). The earliest designs were significantly altered by the time the instrument was completed. One need only glance at Figs. 3.3 and 4.1 to see the divergence; this process of design refinement is described in this chapter, and the finalized instrument is described in the next (Ch. 4).

3.1 Early Work

Initially, the new microscope instrument was conceived of as both an exercise in optomechanical engineering and, more importantly, as a way to better examine samples of wood pulp and paper products. At the time this new instrument was coming together as a concept, much work was being done in the lab on the classification of samples provided by our research partners in the forestry and paper industry. This work involved taking many Raman spectra of a variety of samples, then using a variety of chemometric methods to build a classification model for various sample parameters of import to the industry. Some of these models also involved data taken using Near-Infrared Spectroscopy (NIR) and Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopy, both of which provide much higher signal strength than spontaneous Raman spectroscopy. [18]

Figure 3.1: Raman spectrum (left, taken using the RP-1 instrument; integration time 4 s) and ATR-FTIR spectrum (right; average of 20 scans) of a tissue paper sample.

3.1.1 Non-Confocal Raman Instruments

The Raman data were collected from the pulp and paper samples using one of two instruments, as described below.

RP-1

The first instrument routinely used was an RP-1 "Raman Gun", manufactured by SpectraCode, Inc. (formerly of West Lafayette, IN, USA).
The instrument itselfwas created for use when sorting plastic materials for recycling; it was designedto collect a library of spectra from samples of known composition, and to classifyunknown samples using that library. Thus, it was well suited for use with theclassification of pulp and paper samples. Its robust, user-friendly design meantthat no modifications were needed.The RP-1’s spectrograph is a 150 mm (f/4.0) component monochromator, man-ufactured by Acton Research Corp. (Acton, MA, USA), and the detector is aCharge-Coupled Device (CCD) camera comprised of a 1024×256 array of 20 mmpixels. The RP-1’s laser source is a 785 nm, 350 mW single-mode diode source,suitable for spontaneous Raman spectroscopy. The light itself is guided to and fromthe sample using fiber optics; this was necessary to construct the mobile “gun”probe, which was designed to be used by minimally-trained employees.When used to collect data from pulp and paper samples, the “gun” was mounted16in a vertical position, with the active end pointing down towards the sample. Sincethe probe’s internal optics have a fixed focal length of several centimeters, its ver-tical position was fixed, and the sample stage was placed on a lab jack. Said stageconsists of a fixed sample holder attached to a spinning plate, designed to allow theRP-1 to collect data over an area of the sample, rather than just a single point. Thus,a single spectrum (with an exposure time of a second or more) contains spatiallyaveraged data; in theory, this would average out any sample abnormalities.Due to the limitations imposed by the pre-packaged design of the RP-1, atten-uating background light proved difficult; the best (or, at least, easiest) solution wassimply to shroud the “gun” and sample stage with a blanket. This has the cleardrawback of impeding user access to both the “gun” and the sample. Addition-ally, the RP-1’s use of fiber optics mean that the system is especially susceptibleto picking up sample fluorescence and background noise, even when shrouded.Both drawbacks necessitated the frequent use of background removal. The lack ofconfocality also limits the RP-1’s resolution. However, despite those issues, the ro-bustness of the RP-1’s design mean that it is eminently reliable, and records highlyreproducible spectra. An example can be seen in Fig. 3.1.The RP-1 requires no sample preparation whatsoever; that was indeed the goalof its gun-like point-and-shoot design. This unfortunately makes the RP-1 impos-sible to use with certain types of samples, especially those that necessitate mi-croscope slides. Since the list of incompatible samples includes most samples ofbiological and biomedical interest, the RP-1 alone could not satisfy the researchgoals of the group.Olympus Microscope (BX51)Another instrument that was routinely used to collect data was a Raman micro-scope. This instrument consists of a 785 nm, 300 mW diode laser manufactured byInnovative Photonic Solutions (IPS) (Monmouth Junction, NJ, USA), fiber-coupledto an upright optical microscope (model no. BX51, manufactured by OlympusCorp., Shinjuku, Toyko, Japan). Much as with the RP-1, light returning from thesample is spread through a vertical stack of fibers onto a 300 mm monochromatormanufactured by Acton Research Corp. (Acton, MA, USA), and detected by a17back-cooled CCD camera (PIXIS family, manufactured by Princeton Instruments,Trenton, NJ, USA). 
Although the BX51 microscope routinely provides reproducible spontaneous Raman spectra, its resolution is still limited, due to the nature of fiber optics and its lack of confocality. Nonetheless, its ability to probe samples on microscope slides makes it highly useful. Its sample stage features manual and computer-controlled positioning, which allows sample surfaces to be held in place or easily scanned, both of which were distinct challenges using the RP-1.

ATR-FTIR

As mentioned previously, an Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopy (ATR-FTIR) instrument was used to take additional data for use in classification models. The instrument is located in the UBC Chemistry Department's Shared Instrument Facility (SIF); it is a commercial PerkinElmer Frontier™ Fourier Transform Infrared (FTIR) spectrometer (Waltham, MA, USA), with an Attenuated Total Reflectance (ATR) attachment. See Fig. 3.1 for an example.

3.2 Classification Models and Chemometrics

Most of the samples processed using the RP-1 were of soft tissue paper, provided by FPInnovations (Pointe-Claire, QC, Canada). The ATR-FTIR was mainly used with bleached and unbleached cardboard samples provided by Canfor Corp. (Vancouver, BC, Canada). The BX51 microscope was used for both types of samples (and for other projects besides).

The immediate goal of the work was to see if Raman spectroscopy, using the RP-1 to begin with, would be able to accurately predict a number of the paper samples' intrinsic qualities, such as handfeel (or softness), tensile strength, burst strength, tear strength, and pulp density. Some of these parameters, especially those related to the physical strength of the pulp samples, are fairly easy to determine using standard force gauging equipment. However, finding some parameters, such as cellulose content, requires the lengthy and destructive chemical transformation of samples. [46] Still other parameters, such as handfeel, are entirely subjective, and cannot be determined before the product manufacture is complete. The forestry industry has significant interest in the ability to predict these relevant parameters before or during manufacture, as they can significantly affect product quality, and ultimately, profit margins.

Figure 3.2: Classification models for tensile strength of bleached pulp; the spectra were collected using ATR-FTIR. Red circles show the data sets obtained using traditional methods (physically ripping samples apart using a force gauge), [50] and green stars show the predicted data sets, based on the ATR-FTIR spectra. Note the significantly lower RMSEP in the right-hand model. Left: model built using DWT and PLS; RMSEP = 0.42. Right: model built using OSC-SVM analysis; RMSEP = 0.35.

Of further interest to the forestry industry are the effects of the mountain pine beetle (Dendroctonus ponderosae) on the pulp and paper produced from infested trees. D. ponderosae infestations have caused significant damage to the forest resources of western North America. [47] In the British Columbia Interior, populations of lodgepole pine (Pinus contorta var. latifolia) have been severely affected by D. ponderosae. P. contorta var. latifolia is extremely important to the forestry industry, as it is the main constituent of Canadian Northern Bleached Softwood Kraft (NBSK) pulp, which is of significant economic importance both locally and nationally. [47-49]

To determine the predictive ability of Raman spectroscopy, several classification models were constructed for various parameters. The Raman data were analyzed using a variety of multivariate analyses and chemometric methods, such as Principal Component Analysis (PCA), Partial Least-Squares Regression (PLS), and the Template-Oriented Genetic Algorithm (TOGA). In some cases, the Raman-based classification models were compared to those built using NIR- or ATR-FTIR-based data. [51] See Fig. 3.2 for an example of such a model. Though there is ample documentation of the use of NIR- and ATR-FTIR-based classification models for pulp and paper analysis in the literature, [52-58] the success of our Raman-based models was hampered by the instrumental limitations of the RP-1 and BX51 microscope. As mentioned previously, there were also difficulties related to sample mounting and Raman resolution. These limitations led to the development of the instruments that are the focus of this thesis.
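As a rough illustration of this kind of chemometric workflow (a minimal sketch only: the thesis's own models were built with PLS, OSC-SVM, TOGA, and related methods in the group's LabVIEW/MATLAB tooling, and the data below are synthetic stand-ins, not pulp measurements), the snippet fits a Partial Least-Squares Regression to a set of spectra and reports the Root-Mean-Square Error of Prediction (RMSEP) on held-out samples, the figure of merit quoted in Fig. 3.2.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 60 "spectra" of 500 points whose intensities at two
# bands correlate with a physical property (e.g. tensile strength).  Real inputs
# would be baseline-corrected Raman or ATR-FTIR spectra plus measured strengths.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 500))
strength = 2.0 * spectra[:, 40] - 1.5 * spectra[:, 310] + rng.normal(scale=0.3, size=60)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, strength, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=5)   # the number of latent variables is a tuning choice
pls.fit(X_train, y_train)

y_pred = pls.predict(X_test).ravel()
rmsep = np.sqrt(mean_squared_error(y_test, y_pred))  # lower is better (cf. Fig. 3.2)
print(f"RMSEP = {rmsep:.2f}")
```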
3.3 Original Design Ideas

The original concept for an integrated iSCAT and confocal Raman microscope arose from the group's collaboration with partners in the pulp and paper industry. The design concept was an instrument that would allow a user to take an iSCAT image of a pulp sample, use it to localize individual fibers, then direct the confocal Raman probe to them directly.

An early design decision was the orientation of the objective. In most conventional light microscopes, the objective points down towards the sample stage, and the light source illuminates the sample from below. Such microscopes typically use broadband sources to produce brightfield or darkfield imagery. However, for iSCAT imaging, the illumination beam and signal beam are collinear, and therefore an inverted orientation, where the objective points up to the sample, is preferable. This maximizes ease of access to the sample, as well as keeping the optical train low to the table. The height of the optical train (from the surface of the optical table) is almost always determined by the height of the monochromator's entrance slit, which is generally fixed, on the order of several inches.

In order to implement the collinearity of the iSCAT and confocal Raman systems, a laser appropriate for both methodologies had to be selected. This decision involved a tradeoff: shorter wavelengths provide better spatial resolution for iSCAT and higher signal intensity for Raman, but compromise the ability to obtain a spontaneous Raman signal by greatly increasing sample fluorescence. [59, 60] Longer wavelengths, on the other hand, are more effective for spontaneous Raman measurements despite decreasing signal intensity, but sacrifice iSCAT resolution by increasing the diffraction limit. [59, 61, 62] Additionally, reliably collecting spontaneous Raman emissions through a confocal pinhole requires an incredibly clean laser beam with a perfect spatial mode. A Helium-Neon (HeNe) laser was selected as the best solution to these requirements.

3.3.1 Designing an iSCAT System
However,the most important reason a single-channel iSCAT detector was chosen was be-cause from the earliest moments of conception, iSCAT was intended to be fullyintegrated (i.e. collinear) with a confocal Raman system. This meant that wide-field iSCAT designs, using Acousto-Optic Deflector (AOD) units and a Complemen-tary Metal-Oxide-Semiconductor (CMOS) camera, were entirely inapplicable. [28]These wide-field designs, however, were revisited later (see Ch. 5.1).To understand how an autobalanced detector works, one must consider a basicdifferential amplifier. Such an amplifier reads two voltage inputs and outputs asingle voltage, the latter being the difference between the two input voltages, suchthat:Vout = g(Vsignal−Vre f ) (3.1)where g is the circuit gain, that is to say, the factor by which the amplifier amplifiesthe difference between the two inputs. The output, Vout , is termed the differentialsignal. The amplifier thus rejects the voltage range common to both the signal andreference inputs (Vsignal and Vre f , respectively). [63] The autobalanced photore-ceiver used in the instrument (model name Nirvana 2007, New Focus Inc., Santa21Clara, CA, USA) uses two photodiodes to convert optical power to current, whichis then fed to differential amplifiers. Given the basic relationship of power, volt-age, and current (P=VI), the Nirvana effectively converts the difference in opticalpower between the two inputs into a single output voltage. [64]This differential signal output should theoretically be zero when the signal andreference inputs are identical; that is to say, when there is no sample for iSCATto probe. Recall from Ch. 2.1 that iSCAT relies on the phase difference betweenlight reflected from a coverslip and backscattered from a sample. The backscat-tered light will be out of phase with the reflected light due to the change in opticalpath length; this, in turn, is due to the different refractive indices of the sampleand glass coverslip. [28] Thus, the optical power of the iSCAT signal read by thephotodetector will vary with respect to the reference signal coming from the laser.This variance falls outside the common mode of the two signals, and thus is con-verted to Vout by the detector. If there is no discernible sample, then there is nooptical path difference, and the signal and reference channels are effectively iden-tical. However, in such a situation, the inescapable scourge that is shot noise rearsits head, preventing Vout from truly reaching zero. Thus, the autobalanced detectoris shot noise-limited. [64]Any variability - or rather, instability - in the laser’s output over time wouldalso cause the signal beam to vary, which would be indistinguishable from actualrefractive index-induced variance in the iSCAT signal. Using the autobalanced de-tector solves this potential problem, in that the variance would be detected on boththe signal and reference channels. HeNe lasers are generally very stable, [65] butnonetheless environmental factors such as vibrations or humidity changes couldcause some variation to occur.In the instrumental design, the autobalanced photoreceiver collects a singlechannel of iSCAT data, through its two inputs (Reference and Signal). The Nir-vana’s two physical outputs used to collect pertinent data are Log Output and Sig-nal Monitor. 
They can be determined as follows:

Log Output (V) = −(T / 273) · ln(P_reference / P_signal − 1)   (3.2)

Signal Monitor (V) = −10 · R · P_signal   (3.3)

where T is the circuit temperature in K, P is the optical power of the signal or reference photodiode in W, and R is the photodiode's responsivity, in A/W. [64]

A single, auto-balanced iSCAT channel was deemed most appropriate for this instrument, though this would prohibit video-rate viewing of samples. To facilitate integration with a high-precision confocal Raman branch, a single iSCAT channel that did not require the use of AOD units was ideal, as the AOD-induced beam motion would present a severe impediment to achieving confocality.

3.4 First Drafts

The earliest drafts of the instrument took on a substantially different form from the completed product. Figure 3.3 shows the cleaned-up original design, used to begin sourcing components; compare with Figure 4.1 to visually contrast this with the final product. The most striking element of the early designs that was eliminated was the raised breadboard that would have held the objective.

Figure 3.3: First Draft of the Single-Color Instrument.

This breadboard concept was conceived of as an easy way to create a fixed platform for the inverted objective and associated components. Mounting the breadboard at a fixed height several inches above the tabletop would ensure the high degree of stability necessary for maintaining the collimation of the objective beam. The breadboard was to be threaded to accommodate the objective in a fixed position over an angled mirror. This mirror would reflect the horizontal (i.e. parallel to the tabletop) illumination beam directly upwards into the objective. On top of the breadboard were to be mounted the piezo stage and sample holder; these components are by necessity independent from the objective.

Also included in the earliest designs was a system for brightfield imaging, represented by yellow lines in Fig. 3.3. This was to be implemented using an incandescent lamp for illumination, flip-down mirrors or beamsplitters, and a webcam-like CMOS camera, but was eventually scrapped. Brightfield imaging was resurrected later on, however; see the next section (Ch. 3.5) for more.

In order to obtain useful data, the system had to be designed in such a way that would allow a user to control where to collect data on the sample. Confocal Raman data vary most meaningfully between samples; a single spectrum is largely meaningless unless contextualized within a sample set.1 As such, the important design factors for collecting Raman data are user-controlled positioning and ease of sample mounting. Conversely, iSCAT signals vary most meaningfully within samples. An iSCAT signal collected at a single point within a sample is entirely devoid of meaning, unless it can be compared with signals collected at neighboring points within the sample. To facilitate this, automated positioning becomes necessary, especially when considering the need for repeatable data collection. Using wide-field iSCAT imaging would obviate this problem (see Ch. 5.1), but the detector in this case is single-channel for reasons discussed previously.

To allow both user-controlled and automated positioning of the sample required two xyz stages: one with manual actuators and one with computer-controllable piezo drivers. In fact, a stage integrating the two was sourced and purchased (model

1 Strictly speaking, this statement only holds true when the Raman data are being used to construct classification models, as described in Ch. 3.2.
Since this is the general purpose of the instrumentdescribed herein, this can be accepted as a reasonable assertion. Single Raman spectra are indeeduseful when attempting the identification of individual compounds, or classification of samples usinga pre-existing model, neither of which are of relevance here.25no. MAX312D/M, Thorlabs, Inc., Newton, NJ, USA), along with a microscopeslide holder. These together formed the sample mount. These components aredescribed in more detail in Chapter 4.1.1.Integrating the confocal Raman probe into this design was not especially chal-lenging, given the prevalence of the methodology. Given their vastly different de-tection systems, the key step to integrating Raman spectroscopy and iSCAT was toeffectively and efficiently separate the signal beam returning from the sample intoits Raman and iSCAT components. This required finding an appropriate DichroicBeamsplitter (DBS) that would reflect non-shifted light towards the iSCAT detec-tor and transmit Stokes-shifted light towards the spectrometer. The Stokes-shiftedlight would then pass through a confocal pinhole, as per the typical design of con-focal Raman systems. [33, 66]3.5 Constructing the InstrumentThis section outlines the instrument’s construction process. The next chapter (Ch.4) describes the instrument in general terms, and details how it was operated.3.5.1 HardwareSourcing and pricing of the various components needed to construct the instru-ment began once the NSERC grant came through. In order to save money, some oldequipment already present in the lab was used. Chief among these were the opticalisolation table (manufactured by Newport Corp., Irvine, CA, USA), the spectrom-eter (model no. SP 2150i, manufactured by Acton Research Corp., Acton, MA,USA), the CCD camera (model no. Spec-10, manufactured by Roper ScientificInc., Sarasota, FL, USA), and the computer (running Windows XP, manufacturedby Dell Inc. (Round Rock, TX, USA)).The components required for constructing the breadboard design outlined inthe previous section (Ch. 3.4) were purchased as well, and the breadboard wasthreaded to mount the objective. At this point, however, it became apparent thatthe breadboard mount made adjusting the objective and tuning the laser beam soimpractical as to be effectively impossible. A new objective mount was quickly de-vised: a 45◦ elliptical mirror mount, fitted with a thread adapter, mounted between26two optical posts. This was simple enough, and necessitated that the sample stagebe elevated on optical posts as well.Another design flaw, or more accurately, an inefficiency, was the design of thesample stage system itself. Originally, the manual and piezo actuators were to beseparate entities. The system would have worked as follows:• Sitting atop the breadboard would be a two-dimensional (xy) stage with acircular hole bored through the middle (model no. 122-0245, OptoSigma,Santa Ana, CA, USA).2 The stage came with two manual linear actuators,and was fitted to incorporate two standalone piezoelectric actuators (modelno. AE0505D16F, Thorlabs Inc., Newton, NJ, USA).• A 90◦ L-bracket would then be attached to the 2D stage.• A one-dimensional (z) stage (model no. NFL5DP20/M, Thorlabs Inc.) wouldthen be attached to the vertical face of the L-bracket. 
This stage incorporatedboth a manual and a piezoelectric actuator.• The two standalone (xy) piezo actuators and the integrated (z) piezo actu-ator were to be driven by a 3-axis open-loop piezo controller (model no.MDT693A, Thorlabs, Inc.).• A slide holder was to be designed and fabricated, in order to be attached tothe vertical (z) stage.This design proved to be needlessly complicated. Furthermore, there were anumber of difficulties in getting the standalone piezo actuators to work with the xystage. Modifying the stage to accommodate the actuators proved far more chal-lenging than anticipated, so much so that they could not be relied upon to movethe stage repeatably, if at all. On top of this, the L-bracket proved to be moreunstable than anticipated. A fork-like piece of aluminum was milled to give theL-bracket more stability, but it made positioning the vertical z stage with respect tothe objective much more difficult.Eventually, the entire multi-part design was scrapped in favor of a self-containedbut much more expensive unit (model no. MAX312D/M, Thorlabs Inc.). This unit2This hole was the factory design, not after-market.27featured three manual linear actuators as well as integrated three-dimensional piezoactuators, and was compatible with the piezo controller we had already purchased.In addition to the new stage, a new, more stable slide holder was purchased (modelno. MAX3SLH, Thorlabs Inc.). After these design flaws were corrected, construc-tion proceeded smoothly.After the system was constructed, the next step was to align the HeNe laser, andtest its performance for spontaneous confocal Raman spectroscopy. This was doneusing a sample of diamond, which produces a very sharp peak. During testing itquickly became apparent that the random polarization of the laser was significantlyhindering performance. The laser was exchanged for its polarized equivalent, at thecost of a slightly lower output power (model no. HNL225R, 22.5 mW vs. modelno. HNL210L, 21.0 mW, both manufactured by Thorlabs Inc., Newton, NJ, USA).The performance of the linearly polarized model was deemed to be acceptable.Another problem that surfaced during the construction of the instrument was adefect in the piezo controller unit, which was unable to vary the voltage on the z-axis piezo driver. The problem was resolved after two RMAs, and with the workingcontroller in hand, the software needed to operate the instrument could finally becoded.3.5.2 SoftwareThe software was coded using LabVIEW 2012 (National Instruments, Austin, TX,USA). LabVIEW uses a proprietary graphical dataflow programming languagecalled G; each program coded in G is referred to as a Virtual Instrument (VI), andfor this instrument, two VIs were created. The first VI was originally designed totest the autobalanced photoreceiver, but was adapted to serve as a calibration pro-gram to ensure that iSCAT signals would be read correctly. The second VI was theinstrument’s user interface, containing all the necessary functionalities to collectdata. For a description of how to use the user interface, see Ch. 4.2.To interface with the relevant hardware, namely the autobalanced photoreceiverand the spectrometer, the LabVIEW VIs relied upon two separate driver suites. Toread Raman data from the spectrometer, the VI used a third-party suite of driversknown as the Scientific Imaging Toolkit (SITK), which specializes in communica-28tion between Acton (Acton, MA, USA) and Princeton Instruments (Trenton, NJ,USA) hardware and LabVIEW software. 
To read iSCAT data from the autobalancedphotoreceiver, a USB-powered Data Acquisition (DAQ) card was used (model no.NI USB-6210, National Instruments Inc., Austin, TX, USA); the DAQ card hadnative LabVIEW support, as they are both developed by the same company.The iSCAT calibration VI was designed simply to display raw output from theautobalanced photoreceiver’s log output (i.e. autobalanced signal) and signal mon-itor (i.e. reference) channels. While this did not display data in any useful capacity,it was designed to show whether the signal beam was properly attenuated relativeto the reference beam. This was a requirement for reliable data collection; the sig-nal beam should be roughly half as powerful as the reference beam, in order forthe photoreceiver’s differential amplifier to effect the best common-mode rejection.[64]As part of its software suite, SITK provides a number of example VIs; one ofthese was used as the basis for the instrument’s user interface. This example VIwasintended for use with basic spectroscopic applications; it featured simple spectrom-eter controls and data viewing functionality. The VI was less robust than PrincetonInstruments’ proprietary spectroscopy software, WinSpec; the latter was used foralignment of the confocal Raman branch of the instrument, but could not be inte-grated with LabVIEW-based iSCAT functionality.The VI’s main functions were organized into tabs; one for initializing the CCDcamera, one for selecting a Region of Interest (ROI) on the CCD, one for moving thespectrometer’s grating turret, one for performing simple background subtractions,and two for collecting data (live spectra display and single spectrum collection).The functions in these tabs allowed the user to make the necessary adjustmentsbefore any data collection would begin, in order to ensure the data set’s integrity.This included the live spectra display tab, which was used for Raman focusingoperations where data storage was unnecessary.The example VI’s basic spectroscopic functions were augmented by integratingiSCAT functionality. The basic process for collecting iSCAT data was to set up araster scan, record the photoreceiver’s log output, then raster the piezo drivers. Theraster was either done as a stack of horizontal xy planes, or a stack of vertical xzplanes, with user-set distances and number of steps. These two values controlled29the resolution of the resulting iSCATmap, though there were also instrumental limi-tations; see Figs. 4.2 and 4.3 for a practical example of this instrumental resolutionlimit.Raman data collection could also be incorporated into the aforementioned rasterscan routines. Thus, at each point in the scan, the software would collect a pointof iSCAT data and a Raman spectrum, then compile both into maps. Further, therewas also an option for Raman-only raster scans.A third piece of software was needed to facilitate the joint viewing of the col-lected iSCAT maps and Raman spectra. This VI was designed to visually unify thetwo data sets; it displayed both the iSCAT map and a Raman heatmap for a selectedwavelength, as well as the complete Raman spectrum for any point selected onthe map. It also incorporated some simple data processing functionalities, such asDWT and Second-Derivative Variance Minimization (SDVM). See Appendix A forimages of the VI’s front panel and block diagram, as well as examples of the VI inuse.With all the software in place, the instrument was deemed to be in workingorder and was ready to be tested. 
This was done using a variety of samples, someexamples of which can be seen in the next chapter.30Chapter 4The Original DesignThis chapter describes the design and use of the instrument in its original single-channel configuration, used for both Interferometric Scattering Microscopy (iSCAT)and Raman microscopy. Refer to Fig. 4.1 for a schematic diagram of the instru-ment. The glossary contains definitions for the abbreviations used in Fig. 4.1.4.1 Description of the InstrumentThe first part of this section describes the line-of-sight propagation through theinstrument, mentioning only key optics, followed by a list of important hardwareand some of their properties.31Figure 4.1: Diagram of the Instrument’s Original, Single-Channel Configuration.324.1.1 Description of Optical TrainAfter exiting the laser, the beam first met a 50:50 Beamsplitter Cube (50:50 BS);half of the beam was split off and directed through a variable Neutral Density Fil-ter (NDF) towards the Reference channel of the photoreceiver (see Ch. 3.3.1 formore details on the photoreceiver). The light passing through the beamsplitter nextpassed through a Laser Clean-up Filter (LCF), and then was expanded and reflectedoff a longpass Dichroic Beamsplitter (DBS) towards the objective and sample. Thebeam expansion optics enlarged the beam to completely fill the objective’s aper-ture. The sample was held in place above the objective upon a 3-axis open-looppiezo stage with integrated manual actuators. After returning from the sample, thesignal beam again met the DBS; at this point, the iSCAT and Raman beams diverged.Rayleigh-scattered light (i.e. light of the same wavelength as the laser) was re-flected back towards the 50:50 beamsplitter, where it then was directed into the Sig-nal channel of the photoreceiver; this provided the iSCAT signal. Raman-scatteredlight (i.e. Stokes-shifted, of higher wavelength that he laser) passed through theDBS and was directed through a 30 mm confocal pinhole, through a HolographicNotch Filter (HNF), and into the monochromator; this provided the Raman signal.4.1.2 List of Major Components• Microscope objective: oil immersion, infinity-corrected, planar apochro-matic; model name PLAPON60XO, manufactured by Olympus Corp., Shin-juku, Toyko, JapanMagnification: 60xImmersion oil refractive index: n= 1:518Numerical Aperture (NA)=1.42Working distance: 0.15 mm• Autobalanced photoreceiver for iSCAT: model no. Nirvana 2007, manufac-tured by Newport Corp., Irvine, CA, USADetector: 2x PIN diodeDetector diameter: 2.5 mm33Noise-equivalent Power (NEP): 3 pW/√HzPeak responsivity: 0.5 A/WCommon mode rejection: 50 dB• HeNe laser: model nameHNL225R, manufactured by Thorlabs, Inc., Newton,NJ, USAEmission wavelength: 632.8 nmMaximum output power: 21.0 mWPolarization ratio: 500:1• Piezo stage (incl. 
drivers): model name MAX312D/M, manufactured byThorlabs, Inc.Manual range: 4 mm (coarse), 300 mm (fine)Piezo range: 20 mmTheoretical resolution: 20 nmAccuracy: 1.0 mm• Piezo controller (open-loop): model MDT693A, manufactured by Thorlabs,Inc.Voltage range: 0 to 75 V• Monochromator: model SP 2150i, manufactured by Acton Research Corp.,Acton, MA, USAFocal length: 150 mmAperture ration: f/4Gratings: 600 g/mm, 1200 g/mmGrating blaze wavelength: 750 nmSpectral resolution: 0.4 nm (1200 g/mm grating)• CCD detector: model Spec-10 XTE-100B, manufactured by Roper ScientificInc., Sarasota, FL, USA34Chip size: 1340x100 pxPixel size: 20 mmDark current: 0.0005 e– /p/secSpectrometric well capacity: 250 ke– to 1 Me–Read noise: 3.5 e– RMSChip type: forced-air cooled, back-illuminated thermoelectricTypical operating temperature: −90 ◦C4.2 Using the InstrumentThe following is a generalized outline of the procedure used to collect and viewdata using the instrument.Preparation of samples was no different from any conventional optical micro-scope: the sample was laid on a glass slide, and covered with a thin glass coverslip.The objective required a dab of immersion oil to be placed on the coverslip; then,the sample slide could be put in place, upside down, in the sample holder.The next step was to calibrate the iSCAT system using the Virtual Instrument(VI) mentioned in the previous chapter (Ch. 3.5.2). To perform the calibration, theobjective was moved to a “blank” area of the sample (i.e. so the focal point wasmoved off the sample and into the glass). Then, the NDF on the reference beam(see Fig. 4.1) was adjusted so that the readout in the VI was close to zero. Thisensured that the background signal was half as intense as the reference signal, andthat therefore the photoreceiver’s autobalancing circuitry would correctly recordthe interference pattern in the signal beam that characterizes iSCAT.1 Adjustingthe NDF was straightforward, as the filter itself was linearly graded between 0.01%and 91% transmission;2 thus, adjusting the filter was simply a matter of moving ithorizontally with respect to the beam.After the iSCAT signal was calibrated, the spectrometer was initialized using themain VI, also mentioned in the previous chapter. Once the experimental parameters1See Ch. 2.1 for a more thorough explanation of the principles of iSCAT.2Optical density between 0.04 and 4.0; T = 10−OD35(exposure time, number of frames to collect, etc) were written to the spectrometer,and it became sufficiently cool, data could be collected as desired. This requiredprogramming scan parameters for the piezo controller. At each point along thescan, the system would record an iSCAT value and a Raman spectrum; the VI couldalso be set to only record either iSCAT or Raman, though this made it more difficultto correlate the two data sets. The User Interface (UI) displayed the latest Ramanspectrum and would auto-update an iSCAT map, so that the latest value could bevisualized in context. All data were stored in Text Array (TXT) files.The data was best visualized using MATLAB or the Raman Map Reader VI(Appendix A). The latter allowed the user to correlate iSCAT and Raman data, aswell as apply some basic processing (Discrete Wavelet Transform (DWT), Second-Derivative Variance Minimization (SDVM)) to the Raman data; however, it was avisualization tool only. Data processing methods are discussed in more detail inCh. 
8.2.

4.3 Limitations and the Need for More

With the single-channel design completed, routine sample imaging began in earnest, both to see how the instrument worked and to probe any unforeseen complications. There are several examples of collected data in this chapter.

4.3.1 Resolution

Two of the inherent difficulties with building iSCAT maps in the method used by this instrument, i.e. using a single-channel detector and rastering the sample, can be seen in Figs. 4.2 and 4.3. Both are maps of the same sample of sea spray aerosol particles. The map on top (Fig. 4.2), a 20x20 µm field of view, consists of 3,600 data points. Nonetheless, the resolution is limited, because of the relatively large step size of 333 nm; thus, only the larger particles are clearly visible.

This was a common problem with mapping samples, as it was difficult to ascertain both whether a sample was in focus and whether it contained any visible data. A large overview map of a sample may not show small features, while a small, zoomed-in map may miss features in other areas. The lack of live video display also made it difficult to determine whether the sample was in focus. Instead, Raman was sometimes used to focus the sample, but only in Raman-active samples. As many samples lack strong spontaneous Raman spectra, this was often impossible, so a workaround was devised. Focusing the light reflected from the cover slip onto the confocal pinhole was adequate for iSCAT mapping.

Figure 4.2: A 20x20 µm, 60x60 step single-channel iSCAT map of sea spray aerosol particles, demonstrating the user-determined resolution limit, determined by step size. Collection time was approximately 45 minutes. Sea spray aerosols are predominantly composed of sodium chloride, and are collected using a cascade impactor. The size distribution is non-uniform. [67]

Figure 4.3: A 5x5 µm, 100x100 step map of the same sample, demonstrating an instrumental resolution limit of around 300 nm. Collection time was approximately 2 hours.

The lower map (Fig. 4.3), a 5x5 µm field of view, consists of 10,000 data points; it shows a clearly-defined resolution limit. This is in fact an instrumental limit: though each step is larger than the piezo driver's theoretical resolution, the image shows data in "blocks" of 6x6 pixels, or about 300 nm to a side. This limit is the Abbe diffraction limit, calculated as follows:

d = λ / (2 n sin θ) = λ / (2 · NA) = 632.8 nm / (2 · 1.42) = 222.8 nm   (4.1)

Thus, the 300 nm resolution in Fig. 4.3 is slightly coarser than the Abbe diffraction limit (222.8 nm) for the instrument. This limit is smaller than the step size in Fig. 4.2, so it is not observable in that map.

The discrepancy between the actual Abbe diffraction limit and the observed instrumental limit is likely due to the piezo driver; since the piezo controller is open-loop, this presented a certain degree of uncertainty - specifically, an absolute uncertainty of 1 µm - in the piezo actuators' physical positions. Thus, maps with a step size smaller than that figure had to be viewed with a degree of caution, even if they appeared accurate.

4.3.2 Time Investments

To draw again on the aerosol examples (Figs. 4.2 and 4.3), both iSCAT maps required serious time investments to collect. Collecting an individual iSCAT data point is trivially fast; however, stringing together hundreds or thousands of points, plus rastering the piezo drivers, greatly increases the collection time.
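The arithmetic behind these constraints is simple enough to sketch; the short Python calculation below evaluates the diffraction limit of Eq. (4.1) and shows how per-step overhead compounds into total map time (the per-step and per-spectrum times are representative figures from this chapter, not measured constants):

```python
# Illustrative arithmetic for single-channel iSCAT mapping: diffraction limit
# and total raster acquisition time.
wavelength_nm = 632.8
numerical_aperture = 1.42
abbe_limit_nm = wavelength_nm / (2 * numerical_aperture)   # Eq. (4.1): ~222.8 nm

def map_time_hours(steps_per_side, seconds_per_step):
    """Total acquisition time for an N x N raster scan."""
    return steps_per_side ** 2 * seconds_per_step / 3600.0

print(f"Abbe limit: {abbe_limit_nm:.1f} nm")
print(f"60x60 map, iSCAT only (~0.75 s/step):   {map_time_hours(60, 0.75):.1f} h")
print(f"100x100 map, iSCAT only (~0.75 s/step): {map_time_hours(100, 0.75):.1f} h")
print(f"20x20 map with 25 s Raman per point:    {map_time_hours(20, 25 + 0.75):.1f} h")
```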
The 3,600-stepimage (top) took about 45 minutes to collect, whereas the 10,000-step image (bot-tom) took approximately two hours; thus, collecting iSCAT data took approximately750 ms per step.When it comes to collecting Raman data, time is an even more serious concern.Since the probability of spontaneous Raman scattering is so low, [68, 69] a long38Figure 4.4: Raman spectrum of polystyrene bead (see also Figs. 8.3, A.1, andA.2). Collection time: 45 sec. The spectrum has been processed usingSDVM and a median filter.Figure 4.5: Raman spectrum of polypropylene resin, from a cottage cheesetub. Collection time: 20 sec. The spectrum has been processed usingSDVM and a median filter.39Figure 4.6: Data set showing of thin slices of plasticized pig brain mountedin a TEM grid.Left: Cropped single-channel iSCAT map, approx. 15x15 mm. See Fig.5.2 for the full map.Right: Map of principal component 2 based on collected Raman spectra,processed using PCA. PC2 accounts for 7.50% of the variance in theRaman data set, and shows a complementary spatial distribution to theiSCAT image. PC2 was chosen over PC1, as the latter represented thecontribution of the TEM grid to the overall variance (see Fig. 5.2).exposure time must be used in order to collect a useful spectrum. This “useful”exposure time is at a minimum 15 seconds, and often several minutes (see Figs.4.4 and 4.5). When hundreds or thousands of collections were strung together tomake a map, as with (and often alongside) iSCAT, data collection on a single samplemap could run for dozens of hours.For example, the full iSCATmap shown in Fig. 5.2,3 along with the accompany-ing Raman data with an exposure time of 25 seconds per point, took approximatelythree hours to collect; the Raman collection was clearly the limiting factor. Bearin mind that this map consisted of only 400 steps; the examples from Figs. 4.2and 4.3 are orders of magnitude larger, and as such, had they had accompanyingRaman data acquisition, would have taken days to collect.3See also Figs. 4.6, A.1, and A.2.404.3.3 Moving ForwardBecause of these challenges - especially because of the time requirements - we be-gan to consider alternative designs to improve the instrument’s performance. Thekey change was to convert the iSCAT system from a single-channel detector to awide-field detector, an idea previously mentioned in Ch. 3.3.1. This wide-fieldchannel would mean that iSCAT data collection would no longer be vinculated toRaman data collection. These upgrades, and other design refinements, are dis-cussed in the next chapter.41Chapter 5Upgrading to Wide-FieldAfter the completion of the instrument in its single-channel configuration, we rec-ognized a number of new challenges. Routine use of the instrument pointed to anumber of possible refinements that would resolve some issues and improve itsperformance. The biggest refinement was the introduction of a wide-field iSCATsystem, which is detailed in this chapter.5.1 Wide-Field iSCAT: How and WhyThe key bit of engineering that was required for the instrument’s major upgrade wasthe design of the new, wide-field Interferometric Scattering Microscopy (iSCAT)channel. While iSCAT imaging is a relatively novel technique, wide-field imagingis not. 
[9, 70–74]Simply replacing the photoreceiver with a wide-field detector such as a Com-plementary Metal-Oxide-Semiconductor (CMOS) camera is not possible, becausethe detector cannot elucidate localization information from the signal it receives.There are a number of ways to get around this problem, many of which involve in-terferometry of one kind or another. In fact, this is the key ability of interferometry:extracting recondite data from an otherwise useless signal. In the case of iSCAT, therecondite data in question is the variation in refractive index within a sample (seeCh. 2.1). Simply collecting all the backscattered light from the sample will notindicate where in the sample the scattering occurred.42Figure 5.1: Cropped iSCAT image of four conidia - asexual spores - from thepowdery mildew fungus Microsphaera vaccinii, imaged in situ on adried leaf (Vaccinium sp.), collected in the Nanaimo River valley (BC,Canada). False color; field of view approx. 19 by 13 mm.One way to obtain such localization information is to introduce a pattern alongthe optical train, either just before the sample or just before the detector; this tech-nique is commonly referred to as structured illumination. By using a known illumi-nation pattern, any perturbations to that pattern caused by a sample can be recordedand reconstructed into images. This produces a wide-field image, since the lightpattern is observed all at once. [8, 70] Thus, through the interference caused byperturbing the illumiation pattern, localization information can be mathematicallyobtained. Structured illumination, however, would not be suitable for iSCAT mea-surements, as it would be highly challenging to set up a reference beam for thenecessary scattering interference measurements.An alternative approach to wide-field iSCAT that eschews further interferom-etry is to physically reposition the illumination beam as it enters the objective ina controlled fashion, thus moving the location of the focus point on the sample.In effect, this mechanically introduces localization information into the beam, byvarying where the beam strikes the sample while also varying where it strikes thesensor. Such beam motion could introduced using Acousto-Optic Deflector (AOD)43units or galvanometric mirror sets. This methodology has found use in variousapplications, including an iSCAT experiment. [24, 28] We chose this approach, es-pecially considering there was literature precedent of a wide-field iSCAT experimentusing AODs.5.1.1 Advantages of Wide-Field ImagingOne of the biggest advantages of wide-field imaging is its instantaneity: iSCAT datacan be observed in real-time. This is a huge advantage over single-channel detec-tion; there is no longer a need for lengthy mapping, and no longer any questionabout whether a sample is in focus, or indeed whether it is even visible in iSCATbefore making a time commitment.In the same vein, wide-field imaging allows the user to see and record theentirety of the sample - or at least as much of it as the optics physically allow - allat once. In addition to the aforementioned benefits, this video-rate data collectionprovides much more flexibility in how the data are stored. iSCAT images can bestored as Text Array (TXT) files as before, but also as image files and even videofiles. Chapter 8.1 contains more details on the variety of iSCAT data collectionpossibilities.Figure 5.2 provides an example of the stark difference between wide-fieldimaging and single-channel mapping. 
Both images show plasticized pig brain slices mounted in a TEM grid, as in Fig. 4.6. Both images also show the TEM grid itself, on the top and left margins, for comparison. The top image, a map recorded using the single-channel configuration, took 5 minutes to collect,1 and was reconstructed using MATLAB. The bottom wide-field image was collected in 22 ms,2 and shows a substantially larger area, even though its size has been cropped.

1 This stated collection time is calculated based on iSCAT-only collection speed of about 80 steps per minute (see Fig. 4.3). The actual collection time for this map was three hours, due to 25 sec. Raman spectrum collection at each point.
2 The framerate the CMOS camera used to collect the image is 45 Frames per Second (FPS).

Figure 5.2: iSCAT images of thin slices of plasticized pig brain mounted in a TEM grid; see also Fig. 4.6. The images are not to scale. (Note: the images are of different cells in the same TEM grid.) Top: Approx. 20x15 µm map, taken using old single-channel configuration (Ch. 4). Bottom: Approx. 60x80 µm image, taken using new wide-field configuration (Ch. 5), with false color.

When it comes to using iSCAT data to identify potential regions of interest for Raman data collection, the advantages of wide-field iSCAT become even clearer. Data processing time decreases by orders of magnitude, and even simply locating regions becomes easier. Collecting a Raman map is still advisable for some samples, but using the wide-field iSCAT image, the user can be far more judicious about where to move the Raman probe on the sample.

5.1.2 The Acousto-Optic Beam Deflector

The implementation of the wide-field iSCAT system depends on the use of AOD units. In brief, an AOD consists of a tellurium oxide (TeO2) crystal attached to a lithium niobate (LiNbO3) piezoelectric transducer. The transducer is fed a radiofrequency signal from a Voltage-Controlled Oscillator (VCO). This signal causes the transducer to expand and contract at a MHz frequency; this oscillation causes propagation of shear waves in the crystal along its (110) plane. The modular deformation of the crystal lattice changes the diffraction angle of light passing through the crystal, causing it to be linearly rastered at a high frequency.

The deflection angle θd, in other words the difference between the maximum and minimum diffraction angles, can be calculated as follows:

θd = (λ_light / V) · f_RF = (632.8 nm / 0.66 mm µs⁻¹) · f_RF   (5.1)

where λ_light is the wavelength of incident light, in this case 632.8 nm, V is the acoustic velocity in the crystal, in this case 0.66 mm/µs for TeO2, and f_RF is the radiofrequency input. In the case of the AODs purchased for use in the new wide-field configuration, the deflection angle θd is approximately 56 mrad, or about 3.2°, before beam expansion.

However, since there is only one transducer per crystal, individual AODs can only raster a beam one-dimensionally. Producing a wide-field image necessitates two perpendicular AODs, one to raster over the x axis, and the other to simultaneously raster over the y axis, forming a rectangular pattern. Such an arrangement presents a problem, however; if the VCO-produced radiofrequencies are identical, the beam will simply be rastered along a diagonal line (y = x). Thus, the radiofrequencies need to be slightly offset from one another.

The function generators have four preset waveform outputs: sawtooth, sine, triangle, and square, and simulations were constructed in MATLAB to determine which would provide the most uniform coverage of the image plane.
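The original simulations were written in MATLAB and are reproduced in Appendix B; as a rough illustration of the same idea (not the thesis code), the Python sketch below drives the two axes with slightly detuned versions of each preset waveform and tallies how evenly the resulting pattern covers the field:

```python
# Illustrative re-creation (not the Appendix B MATLAB code) of the raster-pattern
# comparison: two AOD axes driven by slightly detuned periodic waveforms.
import numpy as np
from scipy import signal

t = np.linspace(0.0, 1.0, 20000, endpoint=False)    # normalized time axis
fx, fy = 200.0, 207.0                                # deliberately offset drive frequencies

waveforms = {
    "sawtooth": lambda f: signal.sawtooth(2 * np.pi * f * t),
    "sine":     lambda f: np.sin(2 * np.pi * f * t),
    "triangle": lambda f: signal.sawtooth(2 * np.pi * f * t, width=0.5),
    "square":   lambda f: signal.square(2 * np.pi * f * t),
}

for name, wave in waveforms.items():
    x, y = wave(fx), wave(fy)                        # beam deflection along x and y
    # Coverage metric: fraction of a 50x50 grid of image-plane cells visited.
    hist, _, _ = np.histogram2d(x, y, bins=50, range=[[-1, 1], [-1, 1]])
    coverage = np.count_nonzero(hist) / hist.size
    print(f"{name:9s} covers {coverage:5.1%} of the field")
```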
Figure 5.3 shows the results; see Appendix B for the simulation code, as well as experimental observations. Subfigure A, top left, shows the sawtooth output, and clearly provides the most uniform coverage of the arbitrary area. Subfigure B, top right, shows the sine wave output, which is noticeably sparse in the middle and concentrated in the corners, due to the shape of the trigonometric function. Subfigure C, bottom left, shows the triangle wave output, which, although it uniformly covers the area, forms a diamond pattern that leaves significant gaps. Subfigure D, bottom right, shows the square wave output. Clearly, the latter is entirely unsuitable, as it points only to the corners (and to the center point at t = 0); this is due to the binary nature of the square wave. Based on the simulations, the sawtooth function was deemed the best option, but experimental results showed the triangle function to be slightly better (see Fig. B.1).

Figure 5.3: MATLAB simulations of AOD raster patterns. Point color represents location of beam over time; red is at t = 0. See also Fig. B.1. A: Sawtooth wave. B: Sine wave. C: Triangle wave. D: Square wave.

5.1.3 Implementing the Wide-Field Channel

With the wide-field iSCAT design nailed down, we began implementing the upgrades. Figure 6.1 shows the design of the fully upgraded instrument. On that figure, both the PtGrey3 and Thorlabs4 cameras have CMOS sensors. The AOD units were purchased from Gooch & Housego plc (Ilminster, UK).

3 Point Grey Research Inc., Richmond, BC, Canada
4 Thorlabs Inc., Newton, NJ, USA

Getting the collinear wide-field iSCAT and confocal Raman channels operational required a convenient way for the user to switch between operating modes. This was achieved by changing the output of the function generators, thus changing the behavior of the AOD units. As discussed previously, for wide-field iSCAT data collection, the function generators must be outputting waveforms with similar (but not identical) frequencies in order to raster the beam over the sample. For confocal Raman data collection, on the other hand, the beam must not be rastering. The beam travels through the AOD crystals regardless, and the crystals will diffract the beam even when unpowered; however, the diffraction angle depends on the voltage input, as per Eq. (5.1). Thus, to maintain alignment, the beam must pass through the crystals while they are receiving a constant voltage equal to the average voltage they receive during iSCAT data collection. That is to say, the function generators must be set to output waveforms with amplitudes of 0 V, i.e. a constant voltage. That adjustment can be made on the front panel of the function generators, and as such, switching between iSCAT and Raman modes on the instrument is straightforward.

5.1.4 iSCAT Beating

A problem that is routinely encountered during wide-field iSCAT imaging is an interference pattern observed in the image. The pattern does not entirely disrupt the ability to collect data; however, it is pernicious enough to merit attention. Since wide-field iSCAT imaging is simulated by using point illumination rastered over a sample using the AOD units, at a rate near 40 kHz, it creates the illusion of a wide-field image.5 In fact, the redraw rate of the iSCAT image (i.e., the time it takes for the beam to raster over the entire image) is perceptible by the iSCAT camera's CMOS sensor.
This is directly analogous to a very common problem: the 60 Hzrefresh rate of most Cathode Ray Tube (CRT) screens interferes with the standard24 FPS capture rate used by the film industry. Thus, CRT screens often appear to beflickering when captured on film.6Exactly the same effect occurs in the iSCAT case. When faced with the highpower of the laser, the camera adjusts the CMOS sensor’s exposure time to becomeso short as to interfere with the iSCAT redraw rate, even at relatively low laser pow-ers (around 100 mW). The iSCAT redraw rate is around 13.5 kHz, or once per 74.1ms. The camera’s exposure time ranges between 32 ms and 22 ms; when capturingbright laser light, it is typically less than 100 ms. Thus, when the exposure andredraw rate interfere, they form a beating pattern.In the case of most CRT screens captured on film, the beat frequency is 36 Hz;this is highly perceptible to the human eye. In the case of the iSCAT camera, the fre-quency is variable due to the camera’s rapid automatic exposure time adjustments.If the exposure time is faster than the redraw time, then each captured frame willbe incompletely illuminated. For example, at 60 ms exposure, each frame is 80%illuminated. Because the redraw frequency is independent of the framerate, the il-lumination begins at a different point on the field of view for each frame. Thus, theincompletely illuminated area appears to oscillate across the frame. When strungtogether at the default 45 FPS framerate, this oscillation beats with the latter.To combat this beating, a Neutral Density Filter (NDF) was installed to attenuatethe laser’s power before the illuminating beam reaches the objective, and againbefore the signal beam reaches the camera. However, the former NDF significantlyimpedes the spectrometer’s ability to observe spontaneous Raman signals, and thusmust be removed for confocal Raman detection.5See Chs. 5.1 for more information on wide-field iSCAT imaging.6This problem occurs with traditional photochemical film stock, as well as with the CMOS sen-sors found in modern digital cameras.49Chapter 6The Finalized DesignThis chapter describes the design and use of the instrument in its final wide-fieldconfiguration, as well as some of the challenges it faces.Figure 6.1 shows the finalized design for the new wide-field configuration.50Figure 6.1: Diagram of the Instrument’s Final, Wide-Field Configuration.51Figure 6.2: iSCAT images of a Schistidium papillosum sporophyte specimen. False color; field of view approx. 150 mmsquare. S. papillosum is a species of moss, widespread across the northern hemisphere.Left: Exothecial cells of a sporangium (spore capsule), conforming to literature expectations. [75] Small trigonesbetween the cells are visible.Right: Distal cells of a seta. Cells are typically short-rectangular and 8-10 mm wide; [75] this is supported by theiSCAT image, though iSCAT’s narrow focal plane impedes observation of the three-dimensional specimen.526.1 Description of Upgraded Optical TrainAfter exiting the laser, the beam passes through a flip-down Neutral Density Fil-ter (NDF) (for iSCAT mode only) and a spatial filter, before being focused into theAcousto-Optic Deflector (AOD) units. 
The beam passes through the two perpendic-ular AOD crystals (x and y); in Raman mode, they simply diffract the beam, while iniSCAT mode, they raster the beam into a roughly square shape at MHz frequencies.The beam is then expanded to fill the objective, directed through a 50:50 Beam-splitter Cube (50:50 BS), and reflected off a 633 nm longpass Dichroic Beamsplit-ter (DBS) into the objective, where it is focused onto the sample. Light returningfrom the sample is split by the DBS. Rayleigh-scattered light is reflected back to-wards the 50:50 BS, and is then focused onto the PtGrey camera’s ComplementaryMetal-Oxide-Semiconductor (CMOS) sensor. Raman-scattered light passes throughthe DBS and is directed through the confocal pinhole and into the spectrometer.6.2 List of Upgraded ComponentsThis brief list specifies those components critical to the upgrades discussed in thischapter. See Ch. 4.1.2 for a list of original components; of those, only the autobal-anced photoreceiver was removed.• AOD system: deflector model name 45070-5-6.5DEG-.63, driver model nameMVL050-90-2AC-A1, manufactured by Gooch &Housego plc, Ilminster, UKTransmission: >95%Frequency range: 50-90 MHzDiffraction efficiency: >70%Deflection angle: 67 mrad (38 mrad from center, at 633 nm illumina-tion)• CMOS camera for iSCAT: model GS3-U3-41C6M-C, manufactured by PointGrey Research Inc., Richmond, BC, CanadaChip size: 2048x2048 pxPixel size: 5.5 mm53Mode: monochrome 8-bitReadout rate: 45 FPSDynamic range: 52.78 dBMaximum SNR: 38.82 dB• CMOS camera for brightfield: modelDCC1545M, manufactured by Thorlabs,Inc.Chip size: 1280x1024 pxPixel size: 5.2 mmMode: monochrome 10-bitReadout rate: 30 FPSDynamic range: 68.2 dBMaximum SNR: 45 dB6.2.1 Brightfield and Ko¨hler IlluminationThe brightfield system is partially depicted in yellow in Fig. 6.1; for clarity, thepath between the objective and flip-down mirror is omitted.6.2.2 Light SourceThe lamp used for brightfield illumination is a quartz tungsten-halogen lamp, modelname is QTH10, manufactured by Thorlabs, Inc. (Newton, NJ, USA). It emits abroadband spectrum between 300 and 2500 nm, peaking at 621 nm; the spectrumis shown in Fig 6.3. Its output power is approximately 50 mW, with a color tem-perature of 2800 K.54The lamp is housed in an SM2-threaded Ø = 2” lens tube system, and clampeddirectly above the objective. The clamp is intended to allow the lamp be movedwhen it is not needed, as it is quite bulky. Also included in the lens tube systemare two lenses and two irises, as well as an area of adjustable length (see Fig. 6.4).This setup allows the lamp to be used for Ko¨hler illumination of samples duringbrightfield operations. The first lens, labeled A in Fig. 6.4, is the field lens thatfocuses light emitted by the lamp through the field diaphragm (the iris labeled B).From there, the light passes through the adjustable infinity space (C) until it reachesthe condenser diaphragm (D). At this point, the image of the lamp’s filament is infocus. After passing through the condenser diaphragm (D), the now de-focusinglight is collected by the condenser lens (E) and passed towards the sample (F).When it reaches the sample, the light is collimated, so the image of the lamp’sfilament is invisible. Thus, the sample is uniformly illuminated, and its image isseparated from the image of the filament, making observation much clearer.In the instrument’s Ko¨hler illumination system, the field lens (B) is an uncoatedbiconvex N-BK7 lens with a focal length of 60 mm (part no. LB1723, Thorlabs,Inc.). 
The condenser lens (E) is an uncoated aspheric N-BK7 lens with a focal lengthof 40 mm and a Numerical Aperture (NA) of 0.554 (part no. ACL5040, Thorlabs,Inc.).6.2.3 Optical PathThe brightfield lamp, when needed, is positioned directly above the sample, and itsdiaphragms are adjusted as necessary. Light passes through the sample, objective,and two DBS filters. This has the effect of removing all light with wavelengthbelow 633 nm; however, the QTH10 lamp’s emission spectrum extends far into theinfrared (see Fig 6.3), so this is not a big concern. The filtered light is then focusedonto the chip of the dedicated brightfield CMOS camera.6.3 Challenges and LimitationsThe instrument faces a number of limitations and challenges. Many of these arephysical limitations, imposed by the laws of nature, that prevent the instrumentfrom working in just as one might want it to. Perhaps the best known example of55such a limitation, at least with regards to microscopy, is the Abbe diffraction limit.In simplest terms, one cannot observe objects much smaller than the wavelength ofthe light being used to observe them.6.3.1 Raman Power LimitationsOne of the most straightforward limitations of the instrument can be found in itsRaman functionality. Namely, the relatively low emission power of the laser (21:5mW) means that the Raman effect in general, and Raman data of interest in par-ticular, can be difficult to observe. The instruments ability to pair Raman data col-lection with iSCAT imagery does mitigate this challenge somewhat, insofar as onecan associate Raman data with features observed in iSCAT, thus making it easier toattribute Raman data to a particular sample region.Nevertheless, collecting Raman data is still a challenge. By the time light fromthe laser has reached the objective, it has passed through two lenses, a longpassfilter, and a quarter wave plate, as well as being reflected off two silvered mirrorsand a notch filter (see Fig. 6.1). None of these optical components have 100%transmittance or reflectance, and thus each component attenuates the light’s powerslightly; the light filling the objective is slightly less than 21:5 mW, and that is justthe incident light.Spontaneous Raman emission is, in itself, an unlikely process. In most cases, aminiscule fraction of scattered photons has undergone spontaneous Stokes or anti-Stokes scattering (as opposed to elastic Rayleigh scattering). [69] Thus, the powerof spontaneous Raman scattered light returning from the sample has been reducedby many orders of magnitude compared to the incident light. After leaving theobjective, this weak Raman light passes through three lenses, three filters, the 100mm confocal pinhole, and the quarter wave plate before reaching the spectrometer,with each component attenuating its power slightly more. Spontaneous Ramansignals are thus very weak and difficult to observe, due to the limited laser power.There are a number of tricks one can employ in order to improve the strengthof a Raman signal. One common example is Surface-Enhanced Raman Spec-troscopy (SERS), which can increase the power of Raman signals by many ordersof magnitude. This is accomplished by adsorbing a sample to a metallic surface,56and although the exact mechanism of enhancement is not well understood, it istypically believed to be the result of surface plasmons. [68, 69] However, this tech-nique is not commonly applied to biological samples, where non-destructive andnon-adulterative imaging is crucial. 
[77]6.3.2 Raman Signal LimitationsA further complication with spontaneous Raman emission is that weak signals arehighly prone to outside interference. This can come from any number of sources,including cosmic rays and overhead lights, as well as sample fluorescence. Cos-mic rays are transitory extrasolar high energy particles, typically protons, and areobserved by the spectrometer’s Charge-Coupled Device (CCD) if they happen tostrike it. See Fig. 6.5 for an example; note how the cosmic ray artifact drowns outthe Raman signal, which is itself mostly background fluorescence from the glasscover slip. There is no practical way to shield the system from cosmic rays, so cos-mic ray artifacts must be mathematically removed from collected data. Given theirappearance as very sharp spikes, this is not difficult; one need only use a simplemedian filter to remove them.More persistent interference, however, is not so easy to remove. Raman sig-nals are extremely prone to being overwhelmed by fluorescence signals, whichare much more intense. Fluorescent overhead lights are no exception to this, asmercury’s fluorescence emissions will handily drown out any useful Raman data.Thus, the instrument is operated with the overhead fluorescent lights turned offand thick laser-proof curtains blocking out light from other parts of the laboratory.This works well enough, but other light sources within the darkened laboratory cancause interference, especially the mercury lamps or LEDs that illuminate computermonitors, as well as any incandescent lamps that are used to help researchers seewhat they are doing. In order to exclude as much foreign light as possible, the Ra-man branch of the instrument, after the confocal pinhole, is isolated within a lenstube.Fluorescence in the sample is much harder to abate than either cosmic raysor background light. In fact, there is no way to reliably extract Raman data fromhighly fluorescent samples. For mildly fluorescent samples, however, there are a57number of mathematical post-processing techniques that can be employed to re-move or abate the fluorescence background. These are discussed in more depth inCh. 8.2.2. All samples contain some mild fluorescence from the glass on whichthey are mounted. As demonstrated in Fig. 6.6, removing this information is notimpossible, but the results often need further processing to become usable. In theexample, the fluorescence removal process used Second-Derivative Variance Min-imization (SDVM) background subtraction and Discrete Wavelet Transform (DWT)(sym 5 wavelet, 10 iterations). Data processing methods are discussed in moredetail in Ch. 8.2.6.3.3 iSCAT Resolution LimitAs previously mentioned, the resolution of the wide-field iSCAT system is Abbediffraction limited to 223 nm1 In practical terms, this limit is not as clear-cut as itappears; as long as a sample’s Point Spread Function (PSF) is larger than 223 nm indiameter, it can be imaged, even if the sample itself is smaller than the diffractionlimit. The wide-field configuration also obviates the piezo-imposed instrumentallimit discussed previously.The PSF is observed as a blurring or broadening of a small sample. In generalterms, it arises because the system’s optics can only capture a portion of a smallsample’s scattering field.2 Since, under Mie theory, scattering phase functions forsmall particles are more or less spherical, capturing a segment of a scattering fieldand projecting it onto a planar detector causes an Airy disk to form around thefocal point. 
Under normal conditions, it will not be a perfect pattern, because ofoptical aberrations in the lenses, but nonetheless it will expand the observed spot.Thus, the PSF effectively allows the detection of some large sub-diffraction sizedsamples.To put this to a practical test, we imaged various sizes of gold nanoparticles.Since they are only commercially available at certain diameters, it was a matterof finding the smallest visible diameter set. In fact, the smallest that could beimaged were 30 nm, several times smaller than the Abbe diffraction limit (2231See Ch. 4.3, Eq. (4.1).2Typically, the PSF is described in terms of a perfectly spherical point emitter, but that does notapply in the case of iSCAT.58nm). Figure 6.7 shows one of the particles that was imaged in aqueous solution.The nanoparticle’s apparent size is approximately 450 nm; this is caused by itsPSF. Using the instrument’s iSCAT video function, the motion of the nanoparticlesin solution was easily observed.Another way to test the resolution of the instrument is to use a test target; Figs.6.8 and 6.9 show the results of imaging a 1951 USAF resolution test chart, whichconsists of a group of progressively smaller line pairs. Fig. 6.9 shows a schematicof the imaged area. The actual target, however, is the negative of Fig. 6.9: the blackpatterns are transparent glass and the white background is opaque (low-reflectivitychrome, plated onto the glass substrate).In the iSCAT image of the target (Fig. 6.8), the transparent glass sections appeardark and the opaque chrome appears light. This speaks to the imaging mechanismof iSCAT; the transparent glass areas have the same refraction index as the immer-sion oil and objective optics (n= 1:518), thus they do not produce any interferencein the scattering field. On the other hand, the chrome plating has a substantiallydifferent index of refraction (approx. n = 3:2), [78] which causes substantial sub-stantial interference and high observed iSCAT intensity.6.3.4 The iSCAT BackgroundOne of the most pernicious difficulties with wide-field iSCAT imaging is the om-nipresence of a substantial background. This can be seen clearly in Fig. 9.1. Sincethe background is not uniform, and it pervades all images, it is essentially im-possible to remove mathematically. Referring to Ch. 2.1 and the general iSCATequations, the contribution from the reflection r is critical:[28]IiSCAT = |Ei|2{r2−2r |s|cosϕ} (6.1)The r2 term represents the background (and reference) signal, from the reflec-tion of the cover slip. The observed intensity pattern in the background is causedby the non-uniform illumination of the out-of-focus cover slip.3 Thus, the back-ground contribution is physically inseparable from the data, and mathematically3See Fig. B.1 for the illumination patterns.59extremely difficult to attenuate.However, looking at data collected using the single-channel iSCAT configura-tion,4 the background observed with the wide-field configuration is conspicuouslyabsent. Keeping Eq. (6.1) in mind, one can deduce that the background is notactually gone, but rather uniform - and therefore much easier to treat mathemati-cally. This is, of course, because in the old configuration, samples were uniformlyilluminated.This knowledge presents a possibility: if data could be collected on the two-color instrument using a similar single-channel process to that of the old instru-ment, alongside the wide-field channel, then perhaps the intensity variations in thebackground can be accounted for and removed. 
The development of such a system, and some of the challenges it faces, are addressed in Ch. 9.1.

Figure 6.3: Emission spectrum of QTH10 tungsten-halogen lamp. Data from Thorlabs, Inc. [76]

Figure 6.4: Schematic of Köhler illumination setup, from lamp (top) to sample (bottom). A: Field lens (biconvex). B: Field diaphragm. C: Infinity space. D: Condenser diaphragm. E: Condenser lens (aspheric). F: Sample.

Figure 6.5: Two Raman spectra of a marine aerosol sample (integration time: 400 ms, taken sequentially). The spectrum on the left features a cosmic ray artifact, at pixel no. 1303.

Figure 6.6: Top: Raw Raman spectrum of poly(methyl methacrylate) (PMMA), collected from a microtome section of plasticized neural tissue (integration time: 25 sec). The spectrum exhibits a large glass background. Bottom: The same spectrum, with the glass background mathematically removed using SDVM and DWT.

Figure 6.7: iSCAT image of a 30 nm gold nanoparticle in aqueous solution. Field of view approx. 6.5 by 5.5 µm, nanoparticle PSF diameter approx. 450 nm. Contrast enhanced.

Figure 6.8: Optically expanded iSCAT images showing lines from Group 7, Element 6 of a 1951 USAF resolution test chart, the smallest lines on the chart. Line pair width is approximately 4.4 µm. False color; fields of view approx. 17 µm square.

Figure 6.9: Schematic of the 1951 USAF resolution test chart used for the above images; total size approx. 4 mm square. The pattern conforms to the MIL-STD-150A standard. The imaged lines are circled in red.

Chapter 7
Designing the User Interface

Work to develop a User Interface (UI) proceeded in concert with the construction and optimization of the instrument's optical train. We coded this interface as a Virtual Instrument (VI), using LabVIEW 2014. This chapter outlines the design process and the functionality of the VI.

7.1 Communication Problems

One confronts a number of challenges when designing control interfaces from scratch. The most urgent is setting up communication between the hardware and the software. Most hardware comes with basic software designed by its respective original equipment manufacturer (OEM), which is useful for basic testing. The problem with such software is that it is typically designed to be stand-alone and cannot be easily integrated with other software, owing to its reliance on proprietary OEM drivers and source code. As previously alluded to, this is the reason that the Scientific Imaging Toolkit (SITK) drivers are needed: to communicate between Princeton Instruments hardware (i.e. the spectrometer) and the LabVIEW software. Again, the new VI has its base in the SITK example VIs; these examples were designed to be used as building blocks, so to speak, that incorporate the nitty-gritty of calling Dynamic Link Library (DLL) drivers into handy sub-VIs.

Little in the way of starting material came with the other hardware, including in particular the piezo system and the iSCAT and brightfield cameras. Each has its own way of communicating with the computer, requiring unique protocols. For example, the piezo drivers use serial port communications, the iSCAT camera uses National Instruments' proprietary image acquisition (IMAQ-DX) drivers, and the Raman spectrometer uses specialized DLL drivers.

Combining all these protocols into a single program presents a challenge, but one that LabVIEW is well suited to handle.
By cordoning off routine hardwarecommunication code blocks into sub-VIs that can be called as needed, the variouscommunication protocols become modular, and thus much easier to incorporateinto code as needed. Figure 7.1 shows one such example. The code block ThorlabMDT 693 in the center of the image contains the actual communication protocols,and the inputs can be wired as needed. On the left-hand side of the block, the inputsfrom top to bottom are: the port number for the piezo driver (blue wire, defined as0 for COM1), a string indicating the piezo axis to move (pink box and wire, x-axis),and NewVal, the user-input desired position value (orange wire). On the right-hand side of the block, the output is the actual position value after moving, readback from the driver. Throughout the Operations VI, there are several times thiscommunication procedure must be called, so defining it as a sub-VI is by far themost efficient way to handle it.One common problem that arises from having a multitude of different commu-nication protocols is communication conflict. For example, when the software isactively reading from the iSCAT camera, it cannot communicate with any additionalhardware, because the live display must operate within a while loop. Determininghow and when such conflicts might arise is a very important part of the softwaredesign, and will be addressed in the next section.7.2 Constructing a Unified InterfaceThe final challenge of designing a control interface is to bring everything togetherinto a convenient UI. First, the software should be able to execute any and allfunctionalities that users may need. This requires judiciousness on the part of thedesigner, so that superfluous tools are excluded and future needs are accounted for.The details of these functionalities will be reviewed in the next section.As functionalities are being developed, they need to be put together in a way65Figure 7.1: Event structure showing the case for moving the x-axis piezodriver to a user-input value (input into the numeric control Set x), us-ing a sub-VI.Figure 7.2: Serial communication contained within the sub-VI in the aboveimage.66that is easy-to-use and, more importantly, efficient. This again requires judicious-ness, in order to lay out the information hierarchy presented to the user by the UI. Italso requires the designer to practice quality control, in order to predict what couldgo wrong and determine how the software handles errors.The easiest way to organize the information hierarchy of the VI was by using atab control structure. That way, each separate data collection mode could be givenits own tab pane, without too much clutter from other irrelevant features. The VIhas ten tabs in total: three for initializing and setting up the spectrometer, four forRaman data collection and processing , two for iSCAT data collection, and one forbrightfield data collection.Having thus divided the VI, each tab’s own information hierarchy had to be laidout. As mentioned in the previous section, protection against creating communi-cation conflicts was necessary; this was by and large done cosmetically, by simplydisabling various controls during situations where their use could cause errors. Thetabs also had to be designed in such a way so as to facilitate ease-of-use. This in-cluded adding such features as progress bars, as well as organizing controls anddisplays in a logical manner.As an example of a typical design decision, consider the piezo system. 
At itsmost basic, it can be used to reposition the sample on the objective by changingthe voltage being fed to any of its three drivers (see Fig. 7.1). Simply havingthe ability to make that adjustment within the VI is not enough, however, as someapplications may require multiple movements in sequence. This is best addressedthrough movement automation, so that the software will automatically move thepiezo position along a user-set line, or raster the position over a user-set area, bothwith a user-set number of steps.Aside from programming that automation process, deciding where to put thepiezo controls involves both knowledge of how the instrument might be used, aswell as design and programming considerations. The location of the controls isvery important on both the front panel and the block diagram; for example, thoughthey appear similar, the piezo movement controls for iSCAT and Raman are differ-ent. In the iSCAT Viewer tab, the piezo controls can only be used while the softwareis reading data from the iSCAT camera. As previously indicated, this is the casebecause of the live display must operate within a while loop. If, on the block di-67agram, the piezo controls were located outside the while loop, they would not beaccessible during viewing of iSCAT live video. This is not an ideal arrangement, asusers typically want to see how their piezo adjustments affect the iSCAT image inreal time. Therefore, the piezo controls were placed inside the while loop, so thatthey could be used concurrently with the iSCAT camera.On the other hand, in the Raman Collection tab, the piezo controls can only beused while the software is not reading data from the spectrometer. This is becauseit is highly undesirable for a sample to move during Raman data collection. Thus,in order to prevent users from accidentally destroying the validity of their Ramandata, the piezo controls are made accessible only while Raman data is not beingrecorded.Many such design decisions were made throughout the course of building theOperations VI, due to the numerous functionalities that it provides.7.3 Functionality of the User InterfaceThis section will outline some of the features of the Operations VI. These featureswere gradually added and refined over time, both as the need arose, and as futureneeds were predicted. The next chapter (Ch. 8) will describe the standard operatingprocedures for how the instrument and VI are used to collect data.7.3.1 iSCAT FeaturesThe flexibility of the IMAQ-DX drivers allowed for a host of options to be addedto the iSCAT data collection process. Chief among them is the ability for a livedisplay of iSCAT information being read from the camera. That display includescosmetic options to apply false color and to zoom in on the image, as well as moresubstantive additions, such as a superimposed scale bar, and the ability to re-centerthe image on a selected point.Additionally, iSCAT raster scan functionality was added, much like that in theold instrument. This can be both a two-dimensional area scan, or a one-dimensionallinear trace. Options for how to record iSCAT data were added as well, allowingusers to choose between video, still images, or text arrays.687.3.2 Raman FeaturesThe SITK building blocks include functions for initializing the spectrometer, as wellas very basic spectrum capture features. Since the initialization of the CCD cameraand setup of the spectrometer are fairly involved hardware operations, they wereincluded more or less as-is. 
These include setting up exposure time and monochromator position, as well as more advanced settings that are not routinely used but could prove useful in exceptional circumstances.

In addition, SITK's focusing function was retained, but with the addition of an option to view the entire CCD chip, rather than just a binned portion of it. This was useful both for focusing on a sample and for aligning and tuning the confocal Raman branch of the instrument.

To maximize the instrument's Raman data collection capacity, several augmentations to those basic functions were added. In the previous instrument, the collection of rastered Raman maps was routine, albeit very lengthy (see Ch. 4.2). This ability was added as an option to the new VI. Another key addition was the option of doing some on-line processing, or of exporting the collected data into a separate dedicated processing VI. The online processing methods that were included are, in sequence, Discrete Wavelet Transform (DWT), Savitzky-Golay filtering, zeroing, and normalization.

7.3.3 Other Features

As the brightfield imaging system is not intended to be used to collect routine data, but rather simply as a qualitative imaging tool, its features in the VI are few. Brightfield images can be viewed live and be recorded as text arrays.

Error handling is another important feature of any software. As previously mentioned, measures were taken to try to prevent users from causing certain types of errors. But there are other errors that could not be prevented by the software at all; the only thing the software could do is display a relevant error message. These types of errors include hardware being powered off, or having been unplugged from the computer.

Unfortunately, the error messages displayed for such errors are frequently very cryptic or technical in their wording and, of course, offer no suggestion for how to go about resolving the problem. In order to provide a resource for addressing those types of problems, a list of possible error messages was compiled, along with information about what caused them and how the underlying problems could be fixed.

Chapter 8
Data Collection and Processing Techniques

This chapter will cover some basic data collection procedures, as well as some basic data processing techniques.

8.1 Data Collection: Procedures and Practices

One of the biggest advantages of the present instrument, as alluded to in Ch. 1.3, is the facility of sample preparation. Since Interferometric Scattering Microscopy (iSCAT) is at its core an optical microscopy method, sample preparation is essentially identical to that used by more traditional microscopy methods: glass slides and coverslips, though with the addition of index-matched immersion oil, as required by the objective. Samples need not undergo chemical alterations, though some physical alterations, such as resizing, may be necessary.

When imaging animal cells, having them squished by a coverslip is generally not desirable, as the cells will rupture, or at the very least, distort. To get around this problem, we purchased a microscope well slide specifically designed to image live cell cultures, and modified it for our needs. This well slide was not needed to image plant samples on a cellular level (Figs. 5.1, 6.2), as they have rigid cell walls.

Figure 8.1: iSCAT image of graphene deposited on nickel. Its deposition profile is non-uniform. False color; field of view approx. 150 µm square.

Figure 8.2: Processed Raman spectrum of graphene deposited on nickel, 8 sec integration.
Peak assignments shown; the intensity ratio of G (1582 cm−1) to G' (2700 cm−1) peaks - approx. 2:1 - indicates the graphene is multilayered. [79, 80] Processed using DWT, Savitzky-Golay filter, zeroing, and normalization.

8.1.1 iSCAT Data Collection

Collecting iSCAT data is quite straightforward. The most significant challenge is actually focusing on the sample; this can be difficult if the sample is very small, or if the sample does not have much refractive index contrast across its morphology. An example of such would be tissue consisting mostly of water; this would produce little in the way of iSCAT signal, as the interference would be largely uniform throughout the sample.

Since the iSCAT focal plane is quite narrow, on the order of microns, it is sometimes necessary to use the piezo actuators to focus the sample. This is made easy with the video iSCAT display in the software (see Ch. 7.3.1); however, if a sample is not flat - as is often the case - then only a small part of the sample may be contained within the focal plane. Collecting reliable iSCAT data in such a circumstance typically requires a raster scan along the z axis. This presents its own difficulty, insofar as the out-of-focus parts of the sample will contribute scattered light to the partially in-focus image, which may obfuscate regions of interest to the user.

Once focused, iSCAT data can be recorded in a number of ways. Most commonly, it is saved as a still image in Portable Network Graphics (PNG) format. This is the most convenient form for storage and transmission. Still images can simply be captured from the video iSCAT display, in a "what you see is what you get" situation, or they can be collected at points along a raster scan. The average full-size (2048 by 2048 pixel) image is approximately 4 MB in size. There are many image processing techniques that can be applied to improve image quality; the next section (Ch. 8.2) will discuss some of these techniques. The PNG format applies lossless compression to the recorded bitmap; the end result is an array of 8-bit pixel data.

iSCAT data can also be stored as 30 Frames per Second (FPS) video data in Audio Video Interleave (AVI) format. Again, video can be recorded either from the video iSCAT display or during a raster scan. Video is useful for live or otherwise mobile samples. Additionally, during a raster scan, recording video can be preferable to storing scores of still images. A nine-second video occupies 42 MB, whereas 270 still images (equal to the number of frames in the video) would occupy 1 GB; however, a raster scan that produced nine seconds of video would almost certainly consist of only a few dozen images, rather than hundreds. Writing an AVI file uses the Motion JPEG codec, such that each frame is stored as a JPEG image within the AVI file. The JPEG format applies lossy compression to the recorded bitmap, thereby reducing its quality; the result is many arrays of 6-bit pixel data interleaved at a rate of 30 per second.

The third data storage method is as a double-precision floating point array. This format is substantially different, not least because it entirely lacks any visual aspect. The array storage is designed to maximize the dynamic range of the image by iteratively adding multiple accumulations - the numeric equivalent of still image captures - thus increasing the difference in pixel intensity between the darkest and brightest regions. The default number of accumulations is 200.
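As a minimal sketch of that accumulation scheme (illustrative only: grabFrame below is a random-number stand-in for the actual IMAQ-DX camera read, and the output file name is arbitrary):

% Minimal sketch of floating point array accumulation (not the VI's actual code).
grabFrame = @() uint8(randi([0 255], 2048, 2048));   % hypothetical stand-in for one camera frame
nAccum = 200;                                        % default number of accumulations
acc = zeros(2048, 2048);                             % double-precision accumulator
for k = 1:nAccum
    acc = acc + double(grabFrame());                 % iterative addition extends the dynamic range
end
save('iscat_accumulation.mat', 'acc');               % stored as a 64-bit floating point array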
The array storage is also useful since it lends itself to mathematical processing techniques that can be more difficult to apply to images. A floating point array is the most true-to-life data storage method, because it is neither encoded nor compressed. When comparing to the PNG and AVI formats (8 and 6 bits per pixel), the floating point array stores data at 64 bits per pixel, with a vastly superior dynamic range. Therefore, for applications where rigorous data processing is required, floating point arrays are ideal.

8.1.2 Raman Data Collection

Observing Raman data is a challenge, due to the inherent weakness of the Raman effect. It is quite often the case that samples are either not Raman-active at all, or are only very weakly so. Samples might only produce a signal in certain regions, and the signal may be so obfuscated by background noise or fluorescence that it is not even discernible without advanced processing.

Thus, getting a sample perfectly in focus for collecting a Raman spectrum is often very difficult. The best practice is usually to use the focusing display to attempt to find a signal, then set up a raster scan along the z axis in an attempt to refine the precise focal plane (see Ch. 7.3.2).

Finding a Raman signal can be assisted by iSCAT. By using the video iSCAT display to localize a region of interest on a sample, the Raman probe can then be directed to that region, and then rastered as desired. This is another of the big advantages that the present instrument offers.

When it comes to storing Raman data, there is really only one option: a floating point array. Because the readout from the spectrometer's CCD is typically fully binned vertically, it consists of a one-dimensional array 1,340 numbers long. The spectrometer can be set up to record multiple frames per capture, which has a similar effect as the floating point array accumulations mentioned previously, except that in this case each frame is stored as a separate row in the file, rather than being added to the previous one-dimensional array.

8.2 Common Processing Techniques

This section will outline some of the data processing techniques used to clean up both iSCAT and Raman signals. The details of these techniques are very well-reported in the literature.

8.2.1 Processing iSCAT Data

The manner in which iSCAT data are processed depends largely on how the data are stored. As a floating point array, any number of processing methods can be applied. PNG images are treated as intensity arrays, but because of their encoding, are open to different methodologies. AVI movies typically must be separated into their component frames for processing, which can be a lengthy process.

The goal of the processing is typically to remove noise from the iSCAT image; the background, discussed in Ch. 6.3.4, is not so easily removed.

One of the most common noise removal techniques for image processing is the median filter. In essence, it functions by replacing the intensity of a given pixel with the median intensity of that pixel's neighbors, in one or two dimensions. The radius can be extended to include several neighboring pixels. [81] This has the effect of abating random noise or shot noise, but at the cost of image clarity. Overcorrection (i.e. too large a radius) can effectively blur an image beyond recognition. Given that random noise is not highly prevalent in wide-field iSCAT images, the median filter is used sparingly.

Another, more sophisticated noise removal technique is the two-dimensional Discrete Wavelet Transform (DWT).
The DWT algorithm, originally developed for one-dimensional data sets, decomposes a signal into a set of waveforms with a range of frequencies (termed wavelets), the sum total of which approximates the signal. [82] Depending on the needs of the user, several of the wavelets at the high and low ends of the frequency range are rejected, and the signal is re-composed from the remainder. Low-frequency wavelets typically contain persistent background contributions, whereas high-frequency wavelets typically contain junk data, such as shot noise. [51, 82] The technique can be tailored to the user's needs, and expanded into two dimensions, thus proving to be useful for de-noising an iSCAT image.

8.2.2 Processing Raman Signals

There are a number of algorithms that are routinely used to process Raman spectra; Chapter 6.3.2 briefly mentions some techniques for background removal, making reference to Figs. 6.5 and 6.6. Fig. 8.2 shows an example of a highly processed spectrum.

Noise and Background Removal

As discussed previously, DWT is a fine methodology for de-noising Raman spectra. [51, 83] Another, complementary technique is the Savitzky-Golay filter, which uses polynomial approximation to smooth a signal over a preset window. [82] As with iSCAT images, care must be taken to avoid over-smoothing, as that might remove meaningful data that is not easily visible.

Background removal is also an important step in Raman signal processing. As discussed in Ch. 6.3.2, many signals contain large background contributions. Simple mathematical subtraction of a preset background spectrum is insufficient, as the background level tends to vary between spectra. Thus, more elaborate methods are required, such as Second-Derivative Variance Minimization (SDVM). SDVM functions by computing the difference of the second derivatives of a signal spectrum minus a scaled background spectrum. These second derivatives are smoothed using the previously mentioned Savitzky-Golay filter, and the resulting difference spectrum is compared to an arbitrary threshold for an acceptable background contribution. The scaling factor is recalculated and the subtraction iterates until the threshold is met. [84] However, as can be seen in Fig. 6.6, the result is often quite noisy, and further processing may be needed.

Chemometrics

In Raman data sets, meaning is so often hidden that a host of chemometric techniques must be employed to elucidate anything from a set of spectra. One of the most basic chemometric techniques is PCA, which functions by transforming a data set onto an orthogonal basis set where each variable (termed a principal component) is a linearly independent contribution to the set's overall variance. The algorithm works such that the first principal component represents the largest contribution to the overall variance, the second principal component the second largest, and so on. [86] Figure 8.3 shows an example of a practical application of PCA, where the first two principal components provide useful information that would not otherwise be apparent (see also Figs. A.1 and A.2).

The simplicity of PCA limits its utility as a multivariate processing method, especially when the user wishes to quantitatively correlate spectral features to other real-world variables, using a classification model. In such situations, more advanced processing methods are needed, typically involving regression analysis. The Partial Least-Squares Regression (PLS) is perhaps the best-known of these methods.
Given a matrix of measured signals X and a matrix of target properties Y, the regression can be mathematically represented as:

Y = Xb + e     (8.1)

where b is the matrix of regression coefficients, and e is the error matrix. [87, 88] Y contains known classification information, X represents experimentally observed spectra, and b is determined through the model. PLS projects both data matrices onto a new basis set that maximizes the covariance between the dimensions of X and Y, such that the latter can be predicted by linear regression from the former; this is the aforementioned classification model. It is often the case that X contains many more dimensions than does Y, but PLS is well suited to handle that since, like PCA, it determines which dimensions are relevant to the target properties and which are not. [89] See Fig. 3.2 for an example of a PLS model.

Figure 8.3: Data set showing magnesite adsorbed to a polystyrene bead, collected using the single-channel configuration (Ch. 4). See Fig. 4.4 for a single Raman spectrum of the bead, and Figs. A.1 and A.2 to compare the principal component maps to unprocessed Raman peak maps. Magnesite is a carbonate mineral, having the formula MgCO3 in its dehydrated form. [85]
Left: iSCAT map of sample, approx. 20 x 12 µm.
Right, Top: Map of principal component 1 based on collected Raman spectra, processed using PCA. PC1 accounts for 64.7% of the variance within the Raman data set. Given its clear correlation with the iSCAT map (left), it is reasonable to infer that PC1 represents the polystyrene bead within the data set.
Right, Bottom: Map of principal component 2 based on collected Raman spectra. PC2 accounts for 7.30% of the variance within the Raman data set. PC2's concentration around the margin of the bead indicates that it represents the adsorbed magnesite within the data set.

When these methodologies fail to produce meaningful results, still more advanced processing techniques can be employed, such as Template-Oriented Genetic Algorithm (TOGA) and Orthogonal Signal Correction-Support Vector Machine (OSC-SVM). These techniques are based on machine learning, which falls outside the scope of this thesis; it is sufficient to simply assert that TOGA and OSC-SVM are better at discarding junk data from calibration models than is PLS. [45, 90]

Such a variety of data analysis methods opens the instrument described herein to a whole host of possible uses. From simple imaging to providing quantitative data about chemical and physical properties of a sample, the instrument is well-suited to meet the needs of contemporary microscopy.

Chapter 9
Future Prospects

As the operations of the instrument became routine, that left time to pursue several new prospects. One of them was the development of a confocal iSCAT channel, at least at a proof-of-concept level, that would function as an alternative to the current wide-field channel. As previously discussed in Ch. 6.3.4, the omnipresent background in wide-field iSCAT images is somewhat problematic, and developing a complementary confocal channel could be a way to address the problem.

9.1 Moving to Confocal iSCAT: A Clearer Picture

A confocal iSCAT channel would use pinholes - one after the laser and one before the detector - and a scanning galvanometric mirror system, rather than Acousto-Optic Deflectors (AODs).
Such a system would represent a significant alteration to the instrument (as well as an outlay of funds), so a reasonable facsimile was devised. This pseudo-confocal arrangement uses a single pinhole and a single-channel detector. The development of the pseudo-confocal iSCAT channel is outlined in this section.

9.1.1 Challenges Facing Wide-Field iSCAT

The iSCAT background problem has been addressed in Ch. 6.3.4; however, this is not the only difficulty facing wide-field iSCAT.

Figure 9.1 shows what a typical out-of-focus iSCAT image looks like. This image is instructive in a number of ways, even if it contains no usable sample data. First, the camera's sensor is underfilled; this is not overly concerning, though, as it is only slightly underfilled, and all the relevant information is captured. However, the irregularity of the illumination area is relevant, as it determines what parts of a sample can be observed. The irregularity, both in shape and in intensity distribution, means that certain parts of a sample may be better illuminated than others. For instance, the lower half of the illuminated area appears slightly brighter than the upper half. Additionally, the pervasive intensity pattern caused by the AOD units is clearly visible, and bears a striking resemblance to the pattern in subfigure C - the sine wave - of Fig. B.1 (see Ch. 5.1.2 for a thorough discussion of this subject).

Figure 9.1: Irregular illumination in an unfocused iSCAT sample, illustrating the large background contribution present in all wide-field images. Note the similarity of the pattern with that in subfigure C of Figure B.1. The background light itself consists of out-of-focus reflections from the glass cover slip.

A further complicating factor is positional uncertainty in the illumination beam. Without the ability to know where on the image plane the beam is at any given time, it is impossible to correct for any variations in illumination power over the sample caused by the AOD crystals, or any other unforeseen factors.

As mentioned previously, all these factors cause the irregular background that pervades wide-field iSCAT images; imagine Fig. 9.1 overlaid on every sample image. Being able to remove that pattern is a highly desirable goal. Because of the sheer complexity and variability of the background pattern, though, it would be impossible (or, at least, very unwise) to simply designate an arbitrary background file and perform a pixel-by-pixel subtraction to remove it from a sample image.

The advantage of introducing a confocal iSCAT channel, using a single-channel detection method (see Chs. 3.3.1 and 4.1), would be the ability to have a positionally certain, point-by-point reference channel that could be used concurrently with an iSCAT signal channel. The reference/signal coupling allows for background suppression and for correction of laser power fluctuations; more generally phrased, it allows the iSCAT signal to be normalized, both temporally and positionally. Being able to normalize the channel in such a way would increase the instrument's ability to resolve smaller features, due to the reduction in the system's Point Spread Function (PSF).

9.2 Designing a Confocal iSCAT Channel

The basic design principles for a single-channel iSCAT detection system having already been developed, there was not much difficulty in their translation to the new configuration.
The detectors would have to be placed before the AOD system;some of the scattered light returning from the sample does pass through the AODunits backwards, thus, adding photodiode detectors there was feasible.In addition to detectors, a way to read position information into the computerwas needed. The function generators’ waveform outputs are translated into beamposition by the Voltage-Controlled Oscillator (VCO) units and the AOD crystals; ifthose waveforms could be mathematically related to beam position, simply acquir-ing those waveforms with a Data Acquisition (DAQ) card would do the trick. Of2See Chs. 3.3.1 and 4.1.82course, since the waveform frequencies are in the kHz range, a very fast DAQ cardwould be necessary.9.2.1 Choosing the Right DetectorThe first step in designing the configuration was to choose detectors that wouldactually work with the system. Because of the large number of optics that the beammust pass through, by the time it reaches the photodetector it is exceptionally weak.Thus, highly sensitive detectors were necessary.The Nirvana 2007 autobalanced photoreceiver (New Focus Inc., Santa Clara,CA, USA), from the old single-channel configuration, was one possibility for adetector. We also purchased two fixed-gain silicon transimpedance-amplified pho-todiodes from Thorlabs, Inc. (model name PDA10A), one for the signal channeland one for the reference channel. Both, however, proved unsuitable.The Thorlabs photodiodes could not detect a signal at all, as their bias voltageoverwhelmed any photocurrent generated on the detector itself. The autobalancedphotoreceiver also had difficulty generating an output signal, since the referencebeam was far too powerful for the autobalancing circuitry. Recall from Ch. 3.3.1that the signal beam should be roughly half as powerful as the reference beam in or-der to achieve maximum common-mode rejection; this was practically impossiblegiven the optical train, even with the addition of several additional filters. Althoughsome small autobalanced signal could be detected from the photoreceiver, it wasso minimal that it was essentially useless.Eventually, a pair of much more sensitive Silicon Photomultiplier (SiPM) de-tectors were purchased (model name MicroFC-SMA-10050, SensL TechnologiesLtd., Cork, Ireland). SiPM detectors are orders of magnitude more sensitive thantransimpedance-amplified photodiodes; thus, they function quite well in the low-intensity confocal iSCAT design. Testing the detectors proved their ability to detectthe very weak iSCAT signal beam.9.2.2 Confocal Channel ImplementationSetting up the confocal iSCAT channel required some creative thinking. The firststep was to construct a calibration model to relate beam position on the image83plane with function generator outputs. This was done by switching the functiongenerators to output fixed voltages, and recording the resulting spot’s location onthe wide-field CMOS detector.The initial design, with only one DAQ card (having two input channels), hadbeen to use one input channel to read from the signal detector, and the other fromthe reference detector. 
It very quickly became clear that there was a critical flawwith this design; even though we had the calibration model, it was impossible toactually use, since there was no way to know the phase of the function generators’waveforms.The initial solution to this problem was to switch the function generators todeliver waveform bursts upon receiving a Transistor-Transistor Logic (TTL) signal.They could deliver a maximum of 30,000 waveforms per burst, which at their fre-quencies near 40 kHz, equated to around 760 ms per burst. The assumption wasthat each generated burst would have identical phase, so that we would then be ableto determine the waveform’s phase based on time information alone. However, thisdid not turn out to be the case; when the DAQ card was reconnected to read out fromthe function generators, it was clear that the phase was not constant for the start ofeach burst.The only way to account for this phase problem was thus to purchase a secondDAQ card so that the waveforms from the function generators could be recordedconcurrently with the signals from the photodetectors. Only then would the cal-ibration model be useful. Having two cards would also obviate the need to usewaveform bursts.Setting up a second DAQ card required precise synchronization between thetwo. At first, we set up software timing through the LabVIEW interface, but thatproved unreliable; there were delays between when the cards started acquisition.Then, we wired one card to trigger the other using hardware timing and a TTLsignal, after receiving a digital bit change from the software; this arrangementworked well, as the DAQ cards’ hardware clocks are better suited to such tasks.849.2.3 Reconstructing the Confocal ImageSubstantial mathematical manipulation is necessary to acquire an image from thissetup. The DAQ cards simply record large arrays of time-coded values from thefunction generators and photodetectors. This data must then be reconstructed intoan image.A critical problem is that the data is discontinuous; simply reshaping it wouldyield an array of discrete points. One method to make the reconstructed imageappear more realistic is to apply a constrained Gaussian blur to each data point, inorder to simulate the PSF of the system. An alternative to a Gaussian blur would beto use a bilateral filter, which would preserve the edges of a feature while removingnoise. There are also many stochastic methods reported in the literature for similarimage reconstruction. [10, 25, 71, 77]9.3 Looking ForwardThe implementation of the confocal iSCAT channel is far from complete; as withall instrumental designs, it will undergo many refinements and adjustments beforeit can be called finished. The design and construction processes involved drawfrom many branches of knowledge; laser physics, chemistry, optics, and softwaredesign, to name a few. As the instrument described herein transitions from beinga construction project to being a data collector, one can begin to consider where itmight be useful.The advantages of wide-field Interferometric Scattering Microscopy (iSCAT)and confocal Raman, namely label-free and non-destructive sample preparation,and capability for video-rate imaging alongside precise chemical characterization,give the instrument described herein a unique ability to investigate and character-ize a wide variety of samples. Pulp and paper samples, as described in Chapter3.1, are prime examples. 
Non-destructively imaging live cells in solution, rapidlycharacterizing surface materials such as graphene, and investigating the uptake ofgold nanoparticles by cancerous cells versus healthy cells are more examples ofthe potential this instrument holds.It remains an open question, however, if instruments, such as the one describedin this thesis, could ever be considered to be “complete”.85Bibliography[1] C. Connolly, “A review of medical microscopy techniques,” Sensor Review,vol. 25, pp. 252–258, Dec. 2005. → pages 2[2] D. J. Stephens and V. J. Allan, “Light microscopy techniques for live cellimaging,” Science, vol. 300, pp. 82–86, Apr. 2003. → pages 2[3] E. Thomsen and M. Thomsen, “Darkfield microscopy of livingneurosecretory cells,” Cell. Mol. Life Sci., vol. 10, pp. 206–207, May 1954.→ pages 2[4] H. E. Rosenberger, “Differential interference contrast microscopy,”Interpretive techniques for microstructural analysis, pp. 79–104, 1977. →pages 2, 3[5] F. Zernike, “Phase contrast, a new method for the microscopic observationof transparent objects,” Physica, vol. 9, pp. 686–698, July 1942. → pages 2[6] W. Lang, Nomarski Differential Interference Contrast Microscopy,pp. 1353–1354. Springer Science + Business Media, 1982. → pages 3[7] T. G. Rochow and P. A. Tucker, “Interference microscopy,” Introduction toMicroscopy by Means of Light, pp. 221–231, 1994. → pages 3[8] M. G. L. Gustafsson, “Nonlinear structured-illumination microscopy:Wide-field fluorescence imaging with theoretically unlimited resolution,”Proceedings of the National Academy of Sciences, vol. 102,pp. 13081–13086, Sept. 2005. → pages 4, 43[9] M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, “I5M: 3D widefield lightmicroscopy with better than 100nm axial resolution,” J. Microsc., vol. 195,pp. 10–16, July 1999. → pages 4, 4286[10] B. Huang, H. Babcock, and X. Zhuang, “Breaking the diffraction barrier:Super-Resolution imaging of cells,” Cell, vol. 143, pp. 1047–1058, Dec.2010. → pages 4, 6, 85[11] E. Klimov, W. Li, X. Yang, G. G. Hoffmann, and J. Loos, “Scanningnear-field and confocal Raman microscopic investigation of p3Ht-PCBMsystems for solar cell applications,”Macromolecules, vol. 39,pp. 4493–4496, June 2006. → pages 4, 13[12] L. Schermelleh and R. Heintzmann, “A guide to super-resolutionfluorescence microscopy,” The Journal of cell , vol. 190, pp. 165–175, July2010. → pages 4, 6[13] R. Bo¨hme, M. Richter, D. Cialla, P. Rsch, V. Deckert, and J. Popp, “Towardsa specific characterisation of components on a cell surface-combinedTERS-investigations of lipids and human cells,” J. Raman Spectrosc.,vol. 40, pp. 1452–1457, Oct. 2009. → pages 4, 5[14] K. Domke and B. Pettinger, “Studying surface chemistry beyond thediffraction limit: 10 years of TERS,” ChemPhysChem, vol. 11,pp. 1365–1373, May 2010. → pages 4, 5[15] C. Blum, T. Schmid, L. Opilik, S. Weidmann, S. R. Fagerer, and R. Zenobi,“Understanding tip-enhanced Raman spectra of biological molecules: acombined Raman, SERS and TERS study,” J. Raman Spectrosc., vol. 43,pp. 1895–1904, June 2012. → pages 5[16] J. Chen, M. Badioli, and A. P, “Optical nano-imaging of gate-tunablegraphene plasmons,” Nature, 2012. → pages 5[17] S. Lal, S. Link, and N. Halas, “Nano-optics from sensing to waveguiding,”Nature photonics, 2007. → pages 5[18] D. A. Skoog, F. J. Holler, and S. R. Crouch, Principles of InstrumentalAnalysis. Brooks/Cole, 6th ed., 2007. → pages xi, 5, 15[19] Z. Fei, A. Rodin, G. Andreev, W. Bao, M. AS, M. Wagner, L. Zhang,Z. Zhao, M. Thiemens, and G. 
Dominguez, “Gate-tuning of grapheneplasmons revealed by infrared nano-imaging,” Nature, vol. 487, no. 7405,pp. 82–85, 2012. → pages 5[20] M. Qazilbash, M. Brehm, G. Andreev, and A. Frenzel, “Infraredspectroscopy and nano-imaging of the insulator-to-metal transition invanadium dioxide,” Physical Review B, 2009. → pages 587[21] J. Samson, G. Wollny, and E. Bru¨ndermann, “Setup of a scanning near fieldinfrared microscope (SNIM): imaging of sub-surface nano-structures ingallium-doped silicon,” Physical Chemistry , 2006. → pages 5[22] S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit bystimulated emission: stimulated-emission-depletion fluorescencemicroscopy,” Opt. Lett., vol. 19, p. 780, June 1994. → pages 6[23] S. W. Hell, “Far-field optical nanoscopy,” Science, vol. 316, pp. 1153–1158,May 2007. → pages 6[24] M. A. Schwentker, H. Bock, M. Hofmann, S. Jakobs, J. Bewersdorf,C. Eggeling, and S. W. Hell, “Wide-field subdiffraction RESOLFTmicroscopy using fluorescent protein photoswitching,”Microsc. Res. Tech.,vol. 70, no. 3, pp. 269–280, 2007. → pages 6, 44[25] J. Tønnesen and U. V. Na¨gerl, “Superresolution imaging for neuroscience,”Exp. Neurol., vol. 242, pp. 33–40, Apr. 2013. → pages 6, 85[26] C. G. Galbraith and J. A. Galbraith, “Super-resolution microscopy at aglance,” J. Cell Sci., vol. 124, pp. 1607–1611, May 2011. → pages 6[27] J. Lippincott-Schwartz and S. Manley, “Putting super-resolutionfluorescence microscopy to work,” Nat. Methods, vol. 6, pp. 21–23, Jan.2009. → pages 6[28] J. Ortega-Arroyo and P. Kukura, “Interferometric scattering microscopy(iSCAT): new frontiers in ultrafast and ultrasensitive optical microscopy.,”Physical Chemistry Chemical Physics : PCCP, vol. 14, no. 45,pp. 15625–15636, 2012. → pages 8, 9, 10, 11, 21, 22, 44, 59[29] J. Ortega-Arroyo, Investigation of Nanoscopic Dynamics and Potentials byInterferometric Scattering Microscopy. PhD thesis, St. Hugh’s College,University of Oxford, 2015. → pages 9, 10, 11[30] E. Atılgan and B. Ovryn, “Reflectivity and topography of cells grown onglass-coverslips measured with phase-shifted laser feedback interferencemicroscopy,” Biomed. Opt. Express, vol. 2, pp. 2417–37, July 2011. →pages 10[31] J. H. Seinfeld and S. N. Pandis, Atmospheric Chemistry and Physics: FromAir Pollution to Climate Change. John Wiley & Sons, Inc., 2 ed., 2006. →pages 1188[32] R. H. Webb, “Confocal optical microscopy,” Rep. Prog. Phys., vol. 59,pp. 427–471, Mar. 1996. → pages 12[33] N. J. Everall, “Confocal Raman microscopy: Performance, pitfalls, and bestpractice,” Appl. Spectrosc., vol. 63, pp. 245–262, Sept. 2009. → pages 13, 26[34] N. J. Everall, “Confocal Raman microscopy : common errors and artefacts,”The Analyst, vol. 135, no. 10, pp. 2512–2522, 2010. → pages 13[35] E. M. Ali, H. G. Edwards, and I. J. Scowen, “In-situ detection of singleparticles of explosive on clothing with confocal Raman microscopy,”Talanta, vol. 78, pp. 1201–1203, May 2009. → pages 13[36] A. Belu, C. Mahoney, and K. Wormuth, “Chemical imaging of drug elutingcoatings: Combining surface analysis and confocal Raman microscopy,” J.Controlled Release, vol. 126, pp. 111–121, Mar. 2008. → pages 13[37] T. E. Bridges, M. P. Houlne, and J. M. Harris, “Spatially resolved analysis ofsmall particles by confocal Raman microscopy: depth profiling and opticaltrapping,” Anal. Chem., vol. 76, pp. 576–584, Feb. 2004. → pages 13[38] J. Choi, J. Choo, H. Chung, D.-G. Gweon, J. Park, H. J. Kim, S. Park, andC.-H. 
Oh, “Direct observation of spectral differences between normal andbasal cell carcinoma (BCC) tissues using confocal Raman microscopy,”Biopolymers, vol. 77, no. 5, pp. 264–272, 2005. → pages 13[39] J. V. Glenn, J. R. Beattie, L. Barrett, N. Frizzell, S. R. Thorpe, M. E.Boulton, J. J. McGarvey, and A. W. Stitt, “Confocal Raman microscopy canquantify advanced glycation end product (AGE) modifications in bruch'smembrane leading to accurate, nondestructive prediction of ocular aging,”The FASEB Journal, vol. 21, pp. 3542–3552, Nov. 2007. → pages 13[40] M. P. Houlne, C. M. Sjostrom, R. H. Uibel, J. A. Kleimeyer, and J. M.Harris, “Confocal Raman microscopy for monitoring chemical reactions onsingle optically trapped, solid-phase support particles,” Anal. Chem., vol. 74,pp. 4311–4319, Sept. 2002. → pages 13[41] W. Schrof, E. Beck, R. Kniger, W. Reich, and R. Schwalm, “Depth profilingof UV cured coatings containing photostabilizers by confocal Ramanmicroscopy,” Prog. Org. Coat., vol. 35, pp. 197–204, Aug. 1999. → pages1389[42] P. Caspers, G. Lucassen, and G. Puppels, “Combined in vivo confocalRaman spectroscopy and confocal microscopy of human skin,” Biophys. J.,vol. 85, pp. 572–580, July 2003. → pages 13[43] L. Chrit, C. Hadjur, S. Morel, G. Sockalingum, G. Lebourdon, F. Leroy, andM. Manfait, “In vivo chemical investigation of human skin using a confocalRaman fiber optic microprobe,” J. Biomed. Opt., vol. 10, no. 4, p. 044007,2005. → pages 13[44] K. Klein, A. M. Gigler, T. Aschenbrenner, R. Monetti, W. Bunk, F. Jamitzky,G. Morfill, R. W. Stark, and J. Schlegel, “Label-free live-cell imaging withconfocal Raman microscopy,” Biophys. J., vol. 102, pp. 360–368, Jan. 2012.→ pages 13[45] N. Tavassoli, Z. Chen, A. Bain, L. Melo, D. Chen, and E. R. Grant,“Template-oriented genetic algorithm feature selection of analyte waveletsin the Raman spectrum of a complex mixture,” Anal. Chem., vol. 86,pp. 10591–10599, Nov. 2014. PMID: 25260090. → pages xv, 18, 79[46] N. Wistara and R. Young, “Properties and treatments of pulps from recycledpaper. part I. physical and chemical properties of pulps,” Cellulose, vol. 6,pp. 291–324, 1999. → pages 18[47] L. Safranyik and B. Wilson, The mountain pine beetle: a synthesis ofbiology, management and impacts on lodgepole pine. Pacific ForestryCentre, Canadian Forest Service, Natural Resources Canada, 2006. → pages19[48] TEMAP, “Fibre Morphology.” Online, 2007. TEMAP is a division ofCanfor Pulp Ltd. (Vancouver, BC, Canada). → pages 19[49] P. Watson and M. Bradley, “Canadian pulp fibre morphology: superiorityand considerations for end use potential,” The Forestry Chronicle, vol. 85,pp. 401–408, June 2009. → pages xiii, 19[50] TEMAP, “Premium reinforcement pulp: Glossary of terms.” Online, n.d. →pages 19[51] N. Tavassoli, W. Tsai, P. Bicho, and E. R. Grant, “Multivariate classificationof pulp NIR spectra for end-product properties using discrete wavelettransform with orthogonal signal correction,” Anal. Methods, vol. 6,pp. 8906–8914, July 2014. → pages 20, 7690[52] U. P. Agarwal, R. R. Reiner, and S. A. Ralph, “Estimation of cellulosecrystallinity of lignocelluloses using Near-IR FT-Raman spectroscopy andcomparison of the Raman and Segal-WAXS methods,” J. Agric. Food.Chem., vol. 61, pp. 103–113, Jan. 2013. → pages 20[53] G. Downes, R. Meder, C. Hicks, and N. Ebdon, “Developing and evaluatinga multisite and multispecies NIR calibration for the prediction of kraft pulpyield in eucalypts,” Southern Forests: a Journal of Forest Science, vol. 71,pp. 155–164, June 2009. 
→ pages 20[54] N. Dura´n and R. Angelo, “Infrared microspectroscopy in the pulp andPaper-Making industry,” Appl. Spectrosc. Rev., vol. 33, pp. 219–236, Aug.1998. → pages 20[55] P. Fardim, M. M. C. Ferreira, and N. Dura´n, “Determination of mechanicaland optical properties of eucalyptus kraft pulp by NIR spectrometry andmultivariate calibration,” J. Wood Chem. Technol., vol. 25, pp. 267–279, Oct.2005. → pages 20[56] A. J. Hobro, J. Kuligowski, M. Dll, and B. Lendl, “Differentiation of walnutwood species and steam treatment using ATR-FTIR and partial least squaresdiscriminant analysis (PLS-DA),” Anal. Bioanal.Chem., vol. 398,pp. 2713–2722, Sept. 2010. → pages 20[57] S. S. Kelley, T. G. Rials, R. Snell, L. H. Groom, and A. Sluiter, “Use of nearinfrared spectroscopy to measure the chemical and mechanical properties ofsolid wood,”Wood Sci. Technol., vol. 38, May 2004. → pages 20[58] J. J. Workman, “Infrared and Raman spectroscopy in paper and pulpanalysis,” Appl. Spectrosc. Rev., vol. 36, pp. 139–168, June 2001. → pages20[59] D. B. Chase, “Fourier transform Raman spectroscopy.,” J. Am. Chem. Soc.,vol. 108, pp. 7485–7488, Nov. 1986. → pages 20, 21[60] P. Matousek, M. Towrie, and A. W. Parker, “Fluorescence backgroundsuppression in Raman spectroscopy using combined Kerr gated and shiftedexcitation Raman difference techniques,” Journal of Raman Spectroscopy,vol. 33, no. 4, pp. 238–242, 2002. → pages 20[61] A. C. Albrecht, “On the dependence of vibrational Raman intensity on thewavelength of incident light,” The Journal of Chemical Physics, vol. 55,no. 9, p. 4438, 1971. → pages 2191[62] J. E. Pemberton and R. L. Sobocinski, “Raman spectroscopy withhelium-neon laser excitation and charge-coupled device detection,” J. Am.Chem. Soc., vol. 111, pp. 432–434, Jan. 1989. → pages 21[63] P. A. Laplante, Comprehensive Dictionary of Electrical Engineering. CRCPress, 2nd ed., 2005. → pages 21[64] New Focus, Inc., 2630 Walsh Av., Santa Clara, CA, USA 95051, NirvanaAuto-Balanced Photoreceivers - Model 2007 & 2017 User’s Manual, n.d. →pages 22, 23, 29[65] T. M. Niebauer, J. E. Faller, H. M. Godwin, J. L. Hall, and R. L. Barger,“Frequency stability measurements on polarization-stabilized he–ne lasers,”Appl. Opt., vol. 27, pp. 1285–1289, Apr. 1988. → pages 22[66] J. Dubessy, M.-C. Caumon, F. Rull, and S. Sharma, Instrumentation inRaman spectroscopy, vol. 12 of EMU Notes in Mineralogy, ch. 3,pp. 83–172. Mineralogical Society, 2012. → pages 26[67] E. Vignati, M. Facchini, M. Rinaldi, C. Scannell, D. Ceburnis, J. Sciare,M. Kanakidou, S. Myriokefalitakis, F. Dentener, and C. O'Dowd, “Globalscale emission and distribution of sea-spray aerosol: Sea-salt and organicenrichment,” Atmos. Environ., vol. 44, pp. 670–677, Feb. 2010. → pages 37[68] C. L. Haynes, A. D. McFarland, and R. P. V. Duyne, “Surface-EnhancedRaman spectroscopy,” Anal. Chem., vol. 77, pp. 338–346, Sept. 2005. →pages 38, 57[69] B. Pettinger, G. Picardi, R. Schuster, and G. Ertl, “Surface-enhanced andSTM-tip-enhanced Raman spectroscopy at metal surfaces,” SingleMolecules, vol. 3, pp. 285–294, Nov. 2002. → pages 38, 56, 57[70] D. Karadaglic´ and T. Wilson, “Image formation in structured illuminationwide-field fluorescence microscopy,”Micron, vol. 39, pp. 808–818, Oct.2008. → pages 42, 43[71] E. B. van Munster, L. J. van Vliet, and J. A. Aten, “Reconstruction of opticalpathlength distributions from images obtained by a wide field differentialinterference contrast microscope,” J. Microsc., vol. 188, pp. 149–157, Nov.1997. → pages 42, 85[72] S. Schlcker, M. D. 
Schaeberle, S. W. Huffman, and I. W. Levin, “Ramanmicrospectroscopy: a comparison of point, line, and wide-field imaging92methodologies,” Anal. Chem., vol. 75, pp. 4312–4318, Aug. 2003. → pages42[73] J. R. Swedlow and M. Platani, “Live cell imaging using wide-fieldmicroscopy and deconvolution,” Cell Struct. Funct., vol. 27, no. 5,pp. 335–341, 2002. → pages 42[74] I. Toytman, K. Cohn, T. Smith, D. Simanovskii, and D. Palanker, “Wide-fieldcoherent anti-Stokes Raman scattering microscopy with non-phase-matchingillumination,” Opt. Lett., vol. 32, p. 1941, June 2007. → pages 42[75] R. H. Zander and P. M. Eckel, “Bryophyte flora of North America,” in Floraof North America North of Mexico (F. of North AmericaEditorial Committee, ed.), vol. 27, Flora of North America Association,2007. → pages 52[76] Thorlabs Inc., “QTH10 Emission Spectrum.” Online, n.d. → pages 61[77] S. Ayas, G. Cinar, A. D. Ozkan, Z. Soran, O. Ekiz, D. Kocaay, A. Tomak,P. Toren, Y. Kaya, I. Tunc, H. Zareie, T. Tekinay, A. B. Tekinay, M. O.Guler, and A. Dana, “Label-Free Nanometer-Resolution imaging ofbiological architectures through Surface Enhanced Raman Scattering,” Sci.Rep., vol. 3, p. 2624, Sept. 2013. → pages 57, 85[78] F. L. McCrakin, E. Passaglia, R. R. Stromberg, and H. L. Steinberg,“Measurement of the thickness and refractive index of very thin films andthe optical properties of surfaces by ellipsometry,” Journal of Research ofthe National Bureau of Standards A, vol. 67A, pp. 363–377, July 1963. →pages 59[79] D. Graf, F. Molitor, K. Ensslin, C. Stampfer, A. Jungen, C. Hierold, andL. Wirtz, “Spatially resolved Raman spectroscopy of single- and few-layergraphene,” Nano Lett., vol. 7, pp. 238–242, Feb. 2007. → pages 72[80] L. Malard, M. Pimenta, G. Dresselhaus, and M. Dresselhaus, “Ramanspectroscopy in graphene,” Phys. Rep., vol. 473, pp. 51–87, Apr. 2009. →pages 72[81] D. R. K. Brownrigg, “The weighted median filter,” Communications of theACM, vol. 27, pp. 807–818, Aug. 1984. → pages 75[82] V. J. Barclay, R. F. Bonner, and I. P. Hamilton, “Application of wavelettransforms to experimental spectra: smoothing, denoising, and data setcompression,” Anal. Chem., vol. 69, pp. 78–90, Jan. 1997. → pages 7693[83] D. Chen, Z. Chen, and E. Grant, “Adaptive wavelet transform suppressesbackground and noise for quantitative analysis by Raman spectrometry,”Anal. Bioanal.Chem., vol. 400, pp. 625–634, Feb. 2011. → pages 76, 108[84] Y. L. Loethen, D. Zhang, R. N. Favors, S. B. G. Basiaga, and D. Ben-Amotz,“Second-Derivative variance minimization method for automated spectralsubtraction,” Appl. Spectrosc., vol. 58, pp. 272–278, Mar. 2004. → pagesxiv, 77, 109[85] I. M. Power, S. A. Wilson, A. L. Harrison, G. M. Dipple, J. McCutcheon,G. Southam, and P. A. Kenward, “A depositional model forhydromagnesite–magnesite playas near Atlin, British Columbia, Canada,”Sedimentology, vol. 61, pp. 1701–1733, June 2014. → pages 78[86] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,”Chemom. Intell. Lab. Syst., vol. 2, pp. 37–52, Aug. 1987. → pages 77[87] J. H. Kalivas, “Multivariate calibration, an overview,” Anal. Lett., vol. 38,pp. 2259–2279, Nov. 2005. → pages xiv, 77[88] Y.-H. Yun, D.-S. Cao, M.-L. Tan, J. Yan, D.-B. Ren, Q.-S. Xu, L. Yu, andY.-Z. Liang, “A simple idea on applying large regression coefficient toimprove the genetic algorithm-PLS for variable selection in multivariatecalibration,” Chemom. Intell. Lab. Syst., vol. 130, pp. 76–83, Jan. 2013. →pages 77[89] D. Chen, Z. Chen, and E. R. 
Grant, “Adaptive multiscale regression forreliable Raman quantitative analysis,” The Analyst, vol. 137, no. 1,pp. 237–244, 2012. → pages 79[90] O. Svensson, T. Kourti, and J. F. MacGregor, “An investigation oforthogonal signal correction algorithms and their characteristics,” J.Chemom., vol. 16, no. 4, pp. 176–188, 2002. → pages xiii, 79[91] A. Palm, “Raman spectrum of polystyrene.,” The Journal of PhysicalChemistry, vol. 55, pp. 1320–1324, Aug. 1951. → pages 97[92] P. Gillet, C. Biellmann, B. Reynard, and P. McMillan, “Raman spectroscopicstudies of carbonates part I: High-pressure and high-temperature behaviourof calcite, magnesite, dolomite and aragonite,” Phys. Chem. Miner., vol. 20,May 1993. → pages 9894Appendix ARaman Map Reader Program:RMR.VIThis appendix contains images of the LabVIEW Virtual Instrument (VI) used toread iSCAT-Raman maps generated by the old instrument, as well as the MATLABcode of dependencies. LabVIEW’s dataflow programming language - G - cannotbe translated to code, so images of the block diagram and its various structures arethe only way it can be recorded.The use of this VI are outlined in Chapters 3.5.2 and 4.2. The first dependencyDWT Single.VI contains code to apply a simple Discrete Wavelet Transform (DWT)to the data. The second dependency SDVM Matlab.VI applies Second-DerivativeVariance Minimization (SDVM) to Raman spectra to remove background contribu-tions.95Figure A.1: RMR.VI in use, showing the Raman distribution of a characteristic polystyrene peak near 1000 cm−1. [91]The polystyrene peaks dominate the acquired spectrum and reconstructed map (top left). The Raman exposuretime was 25 seconds. See also Figs. 4.4 and 8.3.97Figure A.2: RMR.VI in use, showing the Raman distribution of a characteristic magnesite peak near 1095 cm−1. [92]Any magnesite peaks are overwhelmed by polystyrene and glass fluorescence, and cannot be observed in theacquired spectrum without processing. The Raman exposure time was 25 seconds. See also Figs. 4.4 and 8.3.98Raman Map Reader EXE.vi Block DiagramFigure A.3: Load File (Sequence 1 of 2) - Detect and Load Files into Memory, Apply Median Filter100Figure A.4: Load File (Sequence 2 of 2) - Read Cursor Position and Update Display, Apply Processing (SVDM andDWT; See Below for Block Diagrams of Processing VIs)101Figure A.5: Raman Map Cursor Move102Figure A.6: Raman Map Cursor Release103Figure A.7: Raman Spectrum Cursor Move104Figure A.8: Use Custom Background File?105Figure A.9: Stop Program106Figure A.10: Error Handling - File(s) Not Found107A.2 DWT Single.VINote: the dwt rebu function was developed by Daniel Da Chen, [83] a formerstudent in the research group, and is beyond the scope of this thesis.%%%%%%%%%%%%%%%%%% MATLAB code: DWT Single.VI%% Discrete Wavelet Transform%% LabVIEW inputs: data% LabVIEW outputs: wavedata%p=path;path(p,'C:\...\VIs\') %Actual path to dwt_rebu.m omitted[wavedata, wcoefs]=dwt_rebu(data,'sym5',5,2); %Execute DWT using sym5 wavelet108A.3 SDVMMatlab.VIThis code is based on a previously published SDVM methodology. 
A.3 SDVM Matlab.VI

This code is based on a previously published SDVM methodology [84].

%%%%%%%%%%%%%%%%%
% MATLAB code: SDVM Matlab.VI
% by Ashton Christy, 18 Nov 2013
%
% Second-Derivative Variance Minimization
%
% LabVIEW inputs: Signal -> I ; Background -> A0
% LabVIEW outputs: Bf -> Data
%
% I = (c00-c)A0 + B
% signal = scale (best guess - modifier) * background + data
% => B = I - (c00-c)A0
%
c0=1;      %Initial guess (minimum)
c00=10000; %Initial guess (maximum)
c0t=c0;    %Index
n=1;       %Index

% Input I, A0 from LabVIEW (signal, background)
sdI=std(I);   %Calculate signal stdev
sdA0=std(A0); %Calculate background stdev

while abs(c00-c0t)>0.001 %Guess max/min background scale until within tolerance
    c0t=c00;
    Btest=I-(c0)*A0; %Guess 1, minimum scale, c=0
    c01=(-sdI/(-sdA0/sqrt(1+(std(Btest)/sdA0)^2))); %Change of variance WRT guess
    Btest2=I-(c0-2*c01)*A0; %Guess 2, maximum scale, c=2*c01
    c02=2*c01-(sdI/(-sdA0/sqrt(1+(std(Btest)/sdA0)^2))); %Change of variance WRT guess
    c00=(c01+c02)/2; %Average max/min to get best guess
end

c=[0.001:0.001:c02];          %Initialize c array (scale modifier)
B=zeros(length(c),length(I)); %Initialize B array
varB=zeros(length(c),1);      %Initialize variance array
c=c';

while n<=length(c) %For each value of c...
    B(n,:)=I-(c00*c(n))*A0; %...compute B
    varB(n)=var(B(n,:));    %...compute variance
    n=n+1;
end

[minV,minVi] = min(varB); %Find optimal c value based on minimum variance
Bf = I-(c00-c(minVi))*A0; %Calculate B output with best c00 and c values

Appendix B
MATLAB Code for Waveform Simulation

This appendix contains the MATLAB code for the waveform simulations used to model AODs and wide-field iSCAT imaging, as discussed in Ch. 5.1.2. Fig. B.1 shows the results of the simulations, as well as experimental observations validating those results.

Experimental results were collected using the SensL photodiode and reconstructed in MATLAB using data retrieved from the function generators. See Ch. 9.1 for a description of this process. Resulting images have been cropped for clarity.

Figure B.1: Results of MATLAB simulations (top; reproduction of Fig. 5.3) and corresponding experimental observations (bottom). A: Sawtooth wave. B: Sine wave. C: Triangle wave. D: Square wave.

%%%%%%%%%%%%%%%%%
% MATLAB simulation for AOD drivers
% by Ashton Christy, 17 Jul 2014
%
% Function generator outputs -> VCO outputs -> deflection angles -> illuminated pixel
% Vx(t),Vy(t) -> wx(t),wy(t) -> theta,phi -> x,y
%
clear; clc; close all;

N=13000;   %Number of simulation points
m=1:N;
x=zeros(N,1);
y=zeros(N,1);
color=1:N; %Set up colors for final output
colormap('hsv');

% Frequency, phase, for X
fx=40;
px=0;
% Frequency, phase, for Y
fy=39.0;
py=pi/2;

Vx=RCwavetri(fx,px,N); %Calculate X function; see RCwavetri below
Vy=RCwavetri(fy,py,N); %Calculate Y function

for n=1:N
    dx(n)=Vx(n); %Update change in x,y
    dy(n)=Vy(n);
end
for n=2:N
    x(n)=x(n)+dx(n-1); %Update x,y
    y(n)=y(n)+dy(n-1);
end

figure(1); scatter(x,y,250,color,'.')

%%%%%%%%%%%%%%%%%
% MATLAB function for AOD driver simulations
% by Ashton Christy, 17 Jul 2014
%
function [y_sim]=RCwavetri(f,p,N)
num = 2;
den = [1 2];
H_s = tf(num, den);
w_o=(f*0.1)+0.9; %Freq
t = linspace(0, 13000, N);
x_sim = sawtooth(w_o*t+p); %Replace with desired waveform (sin, square, etc)
y_sim = lsim(H_s,x_sim,t);
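As the comment in RCwavetri indicates, the other drive waveforms in Fig. B.1 correspond to replacing the sawtooth source. The sketch below shows one way that substitution could be parameterized; the wrapper RCwave and its function-handle argument are hypothetical conveniences rather than the code actually used, and sawtooth and square require the Signal Processing Toolbox (a triangle wave is sawtooth with a symmetry parameter of 0.5).

%%%%%%%%%%%%%%%%%
% Hypothetical function-handle variant of RCwavetri (an assumption, not the
% code used for the thesis simulations): the source waveform is passed in,
% so the four cases of Fig. B.1 can be produced from a single routine.
%
function y_sim = RCwave(f, p, N, wavefun)
H_s = tf(2, [1 2]);          %Same first-order low-pass driver model as RCwavetri
w_o = (f*0.1)+0.9;           %Same frequency scaling
t = linspace(0, 13000, N);
x_sim = wavefun(w_o*t + p);  %Evaluate the supplied waveform
y_sim = lsim(H_s, x_sim, t); %Filter through the driver response

% Example calls corresponding to Fig. B.1:
% A: Vx = RCwave(40, 0, N, @sawtooth);             %Sawtooth wave
% B: Vx = RCwave(40, 0, N, @sin);                  %Sine wave
% C: Vx = RCwave(40, 0, N, @(u) sawtooth(u, 0.5)); %Triangle wave
% D: Vx = RCwave(40, 0, N, @square);               %Square wave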
