A Study of Spatial Encoding and Decoding in MRI

by

Thomas Andrew Depew

B.A.Sc., Queen's University, 2004
M.A.Sc., The University of British Columbia, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Physics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April, 2015

© Thomas Andrew Depew, 2015

Abstract

Spatial encoding is a key feature (perhaps the key feature) of Magnetic Resonance Imaging (MRI). Since the discovery that magnetic field gradients could be used to localize signal, researchers have sought methods to acquire higher quality images with greater efficiency. This dissertation investigates three approaches for encoding and decoding the information of MRI to provide useful images for clinical diagnosis.

Partial Parallel Imaging (PPI) introduced a framework that enabled faster data acquisition, allowing a reduction in scan time or improved resolution. We present a regionally optimized implementation of the popular PPI technique GeneRalised Auto-calibrating Partial Parallel Acquisition (GRAPPA). The regional implementation provides images with lower data recovery error and reduced residual aliasing artifact. We assess a number of image quality metrics that would allow automated selection of the optimal reconstructed image.

A number of physical quantities can be mapped via comparison of two images with different sensitivities. When the contrast between these images is smoothly varying, the information may be captured using only a fraction of the k-space data required for full image reconstruction. We present a technique to provide robust recovery of relative information maps between images from minimal k-space data. The effectiveness of the technique is demonstrated through application to phase contrast MRI data.

Many MRI applications are limited by acquisition in 2D multislice mode. In this regime, the slice direction typically suffers lower resolution than the in-plane directions.
We present three strategies to improve through-plane resolution. The relative merits of each technique are investigated, and the performance is quantified with standard measures. The implications of the potential artifacts resulting from each technique are discussed.

Preface

All research presented herein was completed by Thomas A. Depew and Professor Qing-San Xiang. Some supplementary data were provided from prior research:

In Chapter 3, the 4-coil abdominal scan data sets ABDax1 and ABDax2 were acquired on a 1.5T GE scanner at St. Paul's Hospital, Vancouver, BC, Canada, as part of the research work performed by Q-S. Xiang published in [1].

In Chapter 4, the circular water phantom data were obtained on a 4.7T Bruker system at the National Institutes of Health, Bethesda, MD, USA, as part of the research work performed by Q-S. Xiang and F.Q. Ye published in [2].

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
1 Outline
2 Fundamentals of MRI
  2.1 A Brief History
  2.2 Nuclear Magnetic Resonance
    2.2.1 The Zeeman Interaction
    2.2.2 Bulk Magnetization & Signal Detection
    2.2.3 Bloch Equations and Relaxation
  2.3 Gradients and Imaging
    2.3.1 Selective Excitation
    2.3.2 Frequency and Phase Encoding
    2.3.3 k-Space
3 Optimization of PPI
  3.1 Introduction to Partial Parallel Imaging
    3.1.1 The Receiver Coil Array
    3.1.2 SMASH
    3.1.3 SENSE
    3.1.4 GRAPPA
  3.2 rGRAPPA
    3.2.1 Theory
    3.2.2 Method
    3.2.3 Results
    3.2.4 Discussion
  3.3 Optimization Metrics
    3.3.1 Differential Energy
    3.3.2 Results
    3.3.3 Discussion
4 Extracting LSF Information from k-space
  4.1 Introduction
  4.2 Theory
    4.2.1 Mathematical Approach
    4.2.2 Application to Phase Contrast MRI
  4.3 Method
  4.4 Results
  4.5 Discussion
5 Through-Plane Resolution Enhancement
  5.1 Introduction
  5.2 Magnitude Based Enhancement
    5.2.1 SURESLIDE
    5.2.2 SEER
  5.3 Phase Based Enhancement
    5.3.1 SLENDER
  5.4 Qualification of Enhanced Images
  5.5 Method
  5.6 Results
    5.6.1 Simulation
    5.6.2 Real Data
    5.6.3 Artifacts from Enhancement Techniques
  5.7 Discussion
6 Conclusion
Bibliography
Appendix A The Moore-Penrose Pseudoinverse

List of Tables

3.1 Image Quality Metrics
5.1 Summary of results for box image N = 2. The edge width measurements are given in terms of pixels in the high resolution image I0.
5.2 Summary of results for GE resolution phantom N = 2. The edge width measurements are given in terms of pixels in the high resolution image I0.

List of Figures

2.1 The Zeeman effect
2.2 The net magnetization
2.3 Precession of magnetization in a magnetic field
2.4 The rotating frame
2.5 The Free Induction Decay
2.6 T2* relaxation caused by spin dephasing
2.7 Hahn spin echo
2.8 Gradient fields
2.9 Achieving spatial localization with gradients
2.10 Selective excitation
2.11 Frequency and Phase encoding
2.12 A simple pulse sequence
2.13 Image space and k-space
2.14 k-space trajectory
2.15 Various k-space trajectories
3.1 PPI Cartesian sub-sampling
3.2 The coil sensitivity profile
3.3 Signal recovery in SMASH imaging
3.4 Aliased pixels in the SENSE reconstruction
3.5 The GRAPPA reconstruction
3.6 Sensitivity profile variation in a 2-coil MR experiment
3.7 rGRAPPA vs. GRAPPA calibration and synthesis
3.8 Simulation Datasets a) Shepp image and b) Brain image
3.9 ABDax1 coil images
3.10 Effect of undersampling on abdominal scan ABDax1
3.11 Effect of region width on rGRAPPA weights and RE
3.12 Abdominal scan ABDax1 reconstruction, R = 2, NACS = 16
3.13 Brain coil images
3.14 RE as a function of region width d (Brain simulation)
3.15 Brain image simulation reconstruction, R = 2, NACS = 16
3.16 d = 1 Abdominal scan ABDax1 reconstruction
3.17 Abdominal scan ABDax1 reconstruction, R = 3, NACS = 24
3.18 Abdominal scan ABDax1 reconstruction, R = 4, NACS = 24
3.19 ABDax1 RE curve displaying 2 local minima
3.20 ABDax1 reconstruction (R = 4, NACS = 24) with local minima
3.21 RE and weighting factors for different image objects
3.22 a) A rSoS image I(x, y) and b) corresponding gradient map G(x, y)
3.23 Multiple applications of the GRAPPA operator
3.24 Possible k-space coverage patterns for DE data consistency matching
3.25 Image Metrics vs RE
3.26 Metric Performance. a) Real Data. b) Simulation
3.27 RE and DE metrics as a function of d
3.28 The impact of normalization on metric values
4.1 Example of phase contrast between two acquisitions
4.2 Kernel found using truncated k-space
4.3 Phase mapping from small fraction of k-space
4.4 Shepp-Logan phantom with simulated (parabolic) phase difference map
4.5 Refocusing Ratio (RR) of estimated phase maps (Shepp-Logan)
4.6 Water phantom PLACE data with linear phase difference map
4.7 Refocusing ratio (RR) of estimated phase maps (Water Phantom)
4.8 Oversized kernel effects
5.1 Partial volume effect
5.2 Acquisition of staggered low resolution images for resolution enhancement
5.3 The SURESLIDE approach
5.4 The SEER approach
5.5 The SLENDER approach
5.6 RDF roots
5.7 Full resolution reference images; Simulation
5.8 Full resolution reference image; Real Data
5.9 Positions of SNR and Edge Width measurements
5.10 Brain image simulation N = 2 enhancement results
5.11 Brain image simulation N = 2 enhancement results (zoomed)
5.12 Brain image simulation N = 4 enhancement results
5.13 Brain image simulation N = 4 enhancement results (zoomed)
5.14 Simulated box image N = 2 enhancement results
5.15 Fittings to determine edge widths
5.16 Lesion sphere phantom simulation results
5.17 GE phantom N = 2 enhancement results
5.18 GE phantom N = 2 enhancement results (zoomed)
5.19 In-plane ramp width
5.20 GE phantom N = 2 enhancement in-plane
5.21 SEER enhancement artifact
5.22 The entries of the matrix H†
5.23 Improving H† conditioning with smoothing
5.24 Improved SURESLIDE reconstruction with HS†
5.25 Choosing the optimal smoothing width for SURESLIDE enhancement
5.26 SLENDER α tuning
5.27 “Over-tuning” of α
5.28 Effect of slice profile on α and β

List of Abbreviations

AC  Autocorrelation
DE  Differential Energy
EG  Gradient Energy
H  Image Entropy
HG  Gradient Entropy
RE  Relative Error
TV  Total Variation
ACS  Auto-Calibration Signal
CS  Compressed Sensing
EPI  Echo-Planar Imaging
f-MRI  Functional MRI
FE  Frequency Encoding
FID  Free Induction Decay
FOV  Field of View
FSPGR  Fast SPoiled GRadient
FT  Fourier Transform
GRAPPA  GeneRalised Auto-calibrating Partial Parallel Acquisition
gSMASH  Generalised SMASH
hpGRAPPA  High Pass GRAPPA
MR  Magnetic Resonance
MRA  Magnetic Resonance Angiography
MRI  Magnetic Resonance Imaging
NMR  Nuclear Magnetic Resonance
ORACLE  Off-Resonance Artifact with ConvoLution in k-spacE
PC  Phase Contrast
PE  Phase Encoding
PLACE  Phase Labeling for Additional Coordinate Encoding
PPI  Partial Parallel Imaging
RDF  Rotational Differential Field
RF  radio-frequency
rGRAPPA  Regional GRAPPA
rSoS  Root Sum-of-Squares
SEER  Simple Edge Enhancing Refinement
SENSE  SENSitivity Encoding
SLENDER  SLice ENhancement DividER
SMASH  SiMultaneous Acquisition of Spatial Harmonics
SNR  Signal-to-Noise Ratio
SR  Super-Resolution
SURESLIDE  SUper-REsolution by SLIce DEconvolution
TCP  Truncated Conjugate Product

Acknowledgements

First and foremost, I owe a great debt to my supervisor, Professor Qing-San Xiang. His extensive knowledge and his infinite patience have provided me with a healthy environment in which to explore interesting physical problems in the field of MRI. He has provided academic and professional guidance that has allowed me unique opportunities to engage with the MRI community on an international stage.
His passion for seeking solutions that are both practical and elegant is a characteristic to be emulated, and one that I hold in high regard.

I would like to extend my appreciation also to my research committee; Professors Alex MacKay and Stefan Reinsberg both surrendered their time to contribute to my studies. Their insight and experience provided valuable feedback and enhanced the quality of this work. I am thankful to Professors Piotr Kozlowski, Roger Tam and Deborah Giaschi, who all agreed to read my dissertation and contributed interesting and inspiring comments and questions. Professor R. Todd Constable provided an excellent summary of discussion points and a comprehensive critique of the work, which was very helpful during the revision process.

I would like to thank the group at the MacKay Lab for putting up with me and allowing me temporary squatting privileges. The friendly group environment and meetings fostered excitement and ideas.

I am very appreciative of the funding and support provided by BC Children's Hospital and the University of British Columbia, without which this research would not have been possible.

I would like to thank my family; my parents William and Catherine, who have supported and guided me throughout my life and academic career, and my brother David and sister Sarah for the shared experiences and growth. My friends (you know who you are) have provided the much needed perspectives and distractions throughout this era of my life. Finally, I would like to thank my partner, Zabrina, for the strength and support with which she imparted me through good times and... well, the times that weren't quite as good (not to mention her stupendous logistical travel assistance and baked goods). You are a constant reminder that no matter where I am and what I am doing, I will always have a home.

Chapter 1

Outline

Magnetic Resonance Imaging (MRI) provides a non-invasive tool for imaging the human body.
Since the techniques for creating images of the human body by exploiting the Nuclear Magnetic Resonance (NMR) phenomenon were first proposed in the 1970s, MRI has experienced widespread use and significant advancement. The nature of MRI data acquisition and the physical interaction of magnetic fields with biological tissue invites a variety of possible techniques to improve image resolution, reduce scan time and provide additional physiological information. In this dissertation, we study methods for producing signal localization (imaging) and techniques to enhance the information retrieved from the images.

Chapter 2 provides an introduction to basic NMR and MRI principles that provide the foundation for the subsequent chapters. Following this background, the dissertation is divided into three main sections:

Chapter 3 concerns the field of Partial Parallel Imaging (PPI), in which spatial information from receiver coil sensitivities aids in signal localization. The concepts of PPI are introduced and the foundational techniques are reviewed. We suggest an optimized version of the data-driven PPI technique GeneRalised Auto-calibrating Partial Parallel Acquisition (GRAPPA). The physical underpinnings inspiring the modification of the standard implementation are discussed, and the optimized method, Regional GRAPPA (rGRAPPA), is shown to demonstrate improved reduction of residual aliasing artifact compared to the standard approach. We also introduce a number of image quality metrics that can be used to assess the efficacy of the data recovery in the absence of a fully sampled reference image. The maximum gradient energy, minimum image entropy and minimum autocorrelation sum were found to provide optimal results over a wide range of acquisition parameters. The metrics suited for the GRAPPA framework may be applicable to other data-driven PPI techniques as well.

Chapter 4 presents a novel technique for estimating low frequency contrast information between scans.
The mathematical framework behind the convolution fitting technique is described. The ability of the convolution kernel to capture relative k-space information is tested in the context of Phase Contrast (PC) MRI. The kernel fitting procedure was found to robustly estimate relative phase maps between images even when sampling only a fraction of the full k-space and in regions of low Signal-to-Noise Ratio (SNR).

Chapter 5 presents strategies for improving the through-plane resolution of 2D multislice Magnetic Resonance (MR) data. We introduce three unique approaches for achieving through-plane resolution enhancement with either magnitude or complex data. Each technique is applied in simulated and real MRI experiments. The effectiveness of resolution enhancement is measured using an edge width calculation, and the SNR properties of each technique are compared. Artifacts specific to each technique are identified, and the strengths and weaknesses of each approach are discussed.

Finally, in Chapter 6 we provide concluding remarks and make recommendations for future work.

Chapter 2

Fundamentals of Magnetic Resonance Imaging

2.1 A Brief History

The development of MRI was dependent on the discovery of the NMR phenomenon: the interaction that occurs when certain atomic nuclei, immersed in a static magnetic field, are subjected to an oscillating magnetic field. NMR was first discovered and described by Isidor Rabi in 1938 [3]. Rabi and his group developed the technique to observe NMR and used it to determine the magnetic moment of sodium nuclei. The discovery that the response of nuclei was dependent on the local magnetic field allowed further characterization of the chemical environment of nuclei.
In 1946, Felix Bloch and Edward Mills Purcell refined the technique to investigate the structure of molecules in liquids and solids [4, 5, 6].

The first step towards creating an image using gradients was accomplished by Herman Carr in his 1952 thesis [7], although the images were only 1-dimensional. Finally, in 1974 Paul Lauterbur developed the technique to create true 2D images using magnetic field gradients and created one of the first in vivo demonstrations of MRI. Around the same time, Sir Peter Mansfield had been using gradients to acquire spatial information in solid state NMR [8] and had developed the mathematical framework that ultimately became the basis for k-space (Mansfield and Lauterbur jointly received the Nobel Prize in Physiology or Medicine in 2003 for their contributions). These advancements culminated in the realization of applying NMR to create clinically relevant images.

2.2 Nuclear Magnetic Resonance

In order to experience the effects of NMR, a nucleus must possess a non-zero nuclear magnetic moment. This occurs when the nucleus is composed of an odd number of protons or neutrons. The nucleons possess an intrinsic quality called spin that is the property by which they interact with magnetic fields. For the purposes of this dissertation, we consider the spin to be a fundamental property of the particle; although we could decompose the nucleons further, the underlying physics is beyond the scope of this thesis.

The combined spin of all particles in the nucleus contributes to the magnetic moment of the nucleus. A nucleus with total spin angular momentum I will have a magnetic moment

µ = γℏI.  (2.1)

Here both µ and I are vectors; the scalar γ is the unique ratio of the magnetic moment to the spin angular momentum for a given particle and is known as the gyromagnetic ratio, and ℏ is the Planck constant divided by 2π.

2.2.1 The Zeeman Interaction

In the absence of any external magnetic fields, the orientation of the magnetic moment µ is random.
When immersed in a static magnetic field, it experiences a torque that tends to align the moment in the direction of the applied field (B). The energy of this system is

E = −µ · B.  (2.2)

In the NMR system, the static field is conventionally aligned in the z direction (B0 = B0 ẑ). Substituting Equation 2.1 results in a set of allowed energy levels

E_m = −γℏB0 m_z,  (2.3)

given the possible values of the spin quantum number m_z = I, I − 1, . . . , −I. In the context of MRI, we are normally concerned with the H atom (the proton), wherein I = 1/2. Thus there are two allowed energy levels corresponding to the values m− = −1/2 and m+ = +1/2. This splitting of the energy levels for the nucleus in a static magnetic field is known as the Zeeman effect. The Zeeman splitting for the I = 1/2 system is shown in Figure 2.1.

Figure 2.1: The Zeeman effect; in a magnetic field B0 the energy levels of the I = 1/2 system are split into two levels separated by ∆E = γℏB0.

2.2.2 Bulk Magnetization & Signal Detection

Although the energy splitting induced by the static field that is the basis of our signal is a quantum phenomenon, the NMR and MRI experiments deal with the net magnetization resulting from large numbers of nuclei (on the order of Avogadro's number, 6.022 × 10^23). At this point, we transition from quantum mechanics to a classical picture.

The net magnetization is the sum total of all magnetic moments in a sample

M = (1/V) Σ_{n=1}^{N} µ_n,  (2.4)

where V is the sample volume.

In the absence of the static field, the nuclei are randomly distributed (Figure 2.2a) and the net magnetization is zero.

Figure 2.2: The net magnetization is the vector sum of all spin magnetic moments. a) With no field present, the spins are oriented randomly and uniformly.
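The Zeeman energy gap ∆E = γℏB0 of Equation 2.3 (and Figure 2.1) is easy to evaluate numerically. The following is a minimal sketch, not part of the original text; the constants are standard reference values for the proton, quoted here for illustration only.

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
GAMMA_H = 2.675221874e8   # proton gyromagnetic ratio, rad/(s*T)

def zeeman_splitting(b0_tesla: float) -> float:
    """Energy gap between the mz = +1/2 and mz = -1/2 levels (Eq. 2.3), in joules."""
    return GAMMA_H * HBAR * b0_tesla

def larmor_frequency_hz(b0_tesla: float) -> float:
    """The frequency f0 = gamma*B0 / (2*pi) whose photon energy h*f0 matches the gap."""
    return GAMMA_H * b0_tesla / (2 * math.pi)

if __name__ == "__main__":
    b0 = 1.5  # tesla, a common clinical field strength
    print(f"Delta E at {b0} T: {zeeman_splitting(b0):.3e} J")
    print(f"matching RF frequency: {larmor_frequency_hz(b0)/1e6:.2f} MHz")
```

At 1.5 T this gives roughly 64 MHz, which is why clinical proton MRI operates in the radio-frequency band.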
b) When an external field is applied, the spins will tend to align in the direction of the applied field.

With the strong static field applied, one might suppose that the nuclei must align either parallel or anti-parallel to the field (Figure 2.1). However, molecular motion will still provide enough local fluctuations to cause the spins to orient in a nearly uniform spherical distribution. There is an overall preference among nuclei to align parallel to the direction of B0, as this state (m_z = +1/2) is energetically preferred. The result is a net magnetization aligned with the static field.

At thermal equilibrium, the ratio of the high and low energy populations of nuclei will be determined by the ratio of this energy gap to the average thermal energy:

N−/N+ = e^{−γℏB0/(k_B T)}.  (2.5)

The amplitude of the detectable signal is proportional to the net magnetization, which is dependent on the population difference (N+ − N−). Thus it is desirable to have a large net magnetization, requiring a larger B0 field for a given temperature. However, in imaging applications larger static fields present other challenges, such as contrast inhomogeneity.

The torque experienced by the nuclear spins induces precessional motion of the magnetic moments, summarized by the following equation of motion:

dM/dt = γ M × B0.  (2.6)

This motion is analogous to the precession of a spinning top in a gravitational field. Solving Equation 2.6 shows that the magnetization will precess about the field B0 at an angular frequency

ω0 = γB0.  (2.7)

ω0 is known as the Larmor frequency. The precessional motion of M about the field is depicted in Figure 2.3.

Figure 2.3: The net magnetization M will precess with angular frequency ω0 when placed in a magnetic field B0.

Transitions to the higher energy state are achieved by radio-frequency (RF) irradiation (typically a pulse) at the resonant frequency, ω0. This matches the energy required for the low to high transition (∆E = γℏB0).
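The Boltzmann ratio of Equation 2.5 can be turned into a fractional spin excess, which shows how small the equilibrium polarization actually is. This is an illustrative sketch only; the constants are standard reference values, and the `polarization` helper (the hyperbolic-tangent form of the two-level population difference) is an addition, not notation from the text.

```python
import math

KB = 1.380649e-23         # Boltzmann constant, J/K
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
GAMMA_H = 2.675221874e8   # proton gyromagnetic ratio, rad/(s*T)

def population_ratio(b0: float, temp_k: float) -> float:
    """Boltzmann ratio of upper- to lower-level populations (Eq. 2.5)."""
    return math.exp(-GAMMA_H * HBAR * b0 / (KB * temp_k))

def polarization(b0: float, temp_k: float) -> float:
    """Fractional spin excess (N+ - N-)/(N+ + N-) = tanh(DeltaE / (2 kB T))."""
    return math.tanh(GAMMA_H * HBAR * b0 / (2 * KB * temp_k))

if __name__ == "__main__":
    # At body temperature in a 3 T magnet, only ~10 spins per million
    # contribute to the net magnetization.
    print(f"polarization at 3 T, 310 K: {polarization(3.0, 310.0):.2e}")
```

This tiny excess (on the order of parts per million) is why MRI needs both strong magnets and the enormous number of nuclei per voxel mentioned above.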
The RF pulse is applied by a current-carrying coil surrounding the sample and oriented such that the resulting electromagnetic field will be perpendicular to B0. In the laboratory frame, the applied RF pulse can be described by a complex rotating magnetic field in the x-y plane

B1 = B1 e^{−iω0 t},  (2.8)

where the real and imaginary parts are typically assigned to the x and y directions, respectively.

B1 results in a very complicated motion in the lab frame O(x, y, z), and a transformation to the rotating frame O′(x′, y′, z) is often introduced to simplify the picture. With frame O′ rotating at an angular frequency ωrf about z, we introduce a term that will nullify the static field when ωrf = ω0. When on resonance, the effective field in the rotating frame is simply B1 (Figure 2.4).

Figure 2.4: In the rotating frame, an on-resonance pulse produces an effective field in the x′ direction. M will rotate into the xy-plane to align with the applied field B1.

The ultimate effect of the applied RF field is a rotation of the net magnetization M into the xy-plane.

After the magnetization has been tipped into the transverse plane, the RF pulse is turned off. The magnetization precesses about the z axis and will induce an alternating current in a receiver coil surrounding the magnetization and linking the field (Faraday's Law).
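In the rotating frame, M precesses about B1 at the rate γB1, so the rotation ("flip") angle of a rectangular on-resonance pulse is θ = γB1τ. The text does not give this relation explicitly, so the sketch below is an assumption-labelled illustration of that standard result; the B1 amplitude chosen is merely a plausible example value.

```python
import math

GAMMA_H = 2.675221874e8  # proton gyromagnetic ratio, rad/(s*T)

def flip_angle_deg(b1_tesla: float, duration_s: float) -> float:
    """On resonance, M rotates about B1 at rate gamma*B1, so a rectangular
    pulse of length tau produces a flip angle theta = gamma*B1*tau."""
    return math.degrees(GAMMA_H * b1_tesla * duration_s)

def duration_for_flip(b1_tesla: float, theta_deg: float) -> float:
    """Pulse length needed for a desired flip angle at a given B1 amplitude."""
    return math.radians(theta_deg) / (GAMMA_H * b1_tesla)

if __name__ == "__main__":
    b1 = 10e-6  # 10 microtesla, an illustrative B1 amplitude
    tau = duration_for_flip(b1, 90.0)
    print(f"90-degree pulse at B1 = 10 uT lasts {tau*1e6:.0f} us")
```

A 90° pulse tips M fully into the transverse plane, which is the starting point for the FID described next.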
The signal that is measured is the transverse magnetization as the system returns to the equilibrium state; this is known as the Free Induction Decay (FID) (Figure 2.5).

Figure 2.5: The Free Induction Decay (FID); the signal produced by the transverse magnetization and measured by a receiver coil.

The behaviour of the FID depends on the properties of the matter and the preparation of the magnetization, which will be discussed in the next section.

2.2.3 Bloch Equations and Relaxation

Following excitation of the system with B1, Equation 2.6 indicates that the magnetization will precess indefinitely about B0. This is not realistic, as energy loss will cause the system to return to the equilibrium state (Equation 2.5). The Bloch Equations:

dMx(t)/dt = γ [M(t) × B0(t)]x − Mx(t)/T2
dMy(t)/dt = γ [M(t) × B0(t)]y − My(t)/T2    (2.9)
dMz(t)/dt = γ [M(t) × B0(t)]z − [Mz(t) − M0]/T1

are a set of phenomenological equations of motion that characterize the time evolution of the magnetization as it “relaxes” back to equilibrium [4]. Bloch introduced the time constants T1 and T2 based on the observed relaxation of the longitudinal (Mz) and transverse (Mxy) magnetization, respectively. Solving Equations 2.9:

Mx(t) = Mx(0) cos(ω0 t) e^{−t/T2}
My(t) = My(0) sin(ω0 t) e^{−t/T2}    (2.10)
Mz(t) = M0 + [Mz(0) − M0] e^{−t/T1}

M0 is the total magnitude of the magnetization available at equilibrium (due to the difference N+ − N− in the field B0). In general, T2 ≤ T1.

T1 characterizes the time it takes for the longitudinal component of the magnetization (Mz) to return to equilibrium. T1 relaxation is the result of nuclei exchanging energy by interacting with their surroundings. For this reason it is also referred to as the spin-lattice relaxation time. On a microscopic level, T1 arises from the redistribution of populations of nuclear spin states due to a variety of interactions (dipolar coupling, chemical shielding, etc.).
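The relaxation parts of Equations 2.9 and 2.10 can be checked numerically. The sketch below, an illustration rather than anything from the text, evaluates the closed-form envelopes (precession factored out in the rotating frame) and confirms them with a simple forward-Euler integration; the T1/T2 values are illustrative only.

```python
import math

def relax(mz0, mxy0, m0, t, t1, t2):
    """Closed-form Bloch relaxation envelopes (Eq. 2.10, rotating frame):
    transverse magnitude decays with T2, longitudinal recovers with T1."""
    mxy = mxy0 * math.exp(-t / t2)
    mz = m0 + (mz0 - m0) * math.exp(-t / t1)
    return mz, mxy

def relax_euler(mz0, mxy0, m0, t, t1, t2, steps=10000):
    """Forward-Euler integration of dMxy/dt = -Mxy/T2 and
    dMz/dt = -(Mz - M0)/T1, as a numerical check on the closed form."""
    dt = t / steps
    mz, mxy = mz0, mxy0
    for _ in range(steps):
        mxy += -mxy / t2 * dt
        mz += -(mz - m0) / t1 * dt
    return mz, mxy

if __name__ == "__main__":
    # After a 90-degree pulse: Mz(0) = 0 and Mxy(0) = M0 = 1.
    mz, mxy = relax(0.0, 1.0, 1.0, t=0.1, t1=1.0, t2=0.08)
    print(f"Mz(100 ms) = {mz:.3f}, Mxy(100 ms) = {mxy:.3f}")
```

Note how, with T2 much shorter than T1, the transverse signal is nearly gone before the longitudinal magnetization has meaningfully recovered, consistent with T2 ≤ T1.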
These interactions are orientationally dependent and influence the local fields experienced by spins. Moreover, thermal motion causes fluctuations in the local fields. The consequence for MRI is that different tissues will have different T1 values; T1s of fluids are generally longer (1500-2000 ms), water-based tissues are intermediate (400-1200 ms) and fat-based tissues are shorter (100-150 ms). One can determine a theoretical value relating T1 to any number of desired interactions, but that is beyond the scope of this thesis.

T2 characterizes the decay of the transverse component of magnetization (Mxy) as it rotates about B0. In an actual FID, this decay of Mxy is confounded by B0 magnetic field inhomogeneity. Losses due to both the random relaxation processes contributing to T2, as well as dephasing of spins due to B0 field inhomogeneity, are characterized by an effective relaxation time T2*:

1/T2* = 1/T2 + γ∆B0.  (2.11)

In the inhomogeneity term there are no quantum mechanical exchange interactions; rather, the loss of transverse magnetization is due solely to the decoherence of nuclear spins precessing at different rates. The decay of signal due to this decoherence is illustrated in Figure 2.6.

Figure 2.6: T2* relaxation caused by spin dephasing: differences in the local magnetic fields experienced by spins cause dephasing. Spins experiencing a larger magnetic field will have ω > ω0 and rotate faster, leading other spins (green arrow). Spins in a smaller field will have ω < ω0 and lag behind (red arrow). Over time there is significant decoherence of the rotating spins, causing the transverse magnetization to decay.

The value T2 (the true quantity of interest in terms of characterizing the tissue) represents time-dependent, irreversible effects such as microscopic motion of spins in the sample, and may be isolated from the main field inhomogeneity.
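Equation 2.11 makes the effect of inhomogeneity concrete: even a sub-ppm field variation shortens the observable decay time dramatically. The sketch below is illustrative only; the T2 value and the 0.1 ppm inhomogeneity at 1.5 T are assumed example numbers, not values from the text.

```python
GAMMA_H = 2.675221874e8  # proton gyromagnetic ratio, rad/(s*T)

def t2_star(t2: float, delta_b0: float) -> float:
    """Effective decay time from Eq. 2.11: 1/T2* = 1/T2 + gamma*DeltaB0.
    t2 in seconds, delta_b0 in tesla."""
    return 1.0 / (1.0 / t2 + GAMMA_H * delta_b0)

if __name__ == "__main__":
    t2 = 0.080                 # 80 ms, an illustrative tissue T2
    delta_b0 = 0.1e-6 * 1.5    # 0.1 ppm inhomogeneity at 1.5 T, in tesla
    print(f"T2* = {t2_star(t2, delta_b0)*1e3:.1f} ms (vs T2 = {t2*1e3:.0f} ms)")
```

Here an 80 ms T2 collapses to roughly 19 ms of observable FID decay, which is why recovering the true T2 requires the spin echo discussed next.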
Hahn demonstrated that signal lost to field inhomogeneity could be recovered with his classic spin echo experiment [9]. Figure 2.7 shows the essence of the spin echo experiment with spin phase diagrams.

Figure 2.7: T2 and T2*. The evolution of magnetization in the rotating frame during a typical Hahn echo pulse sequence. a) The magnetization is tipped into the transverse plane with a 90°x pulse. b) The magnetization is allowed to precess and dephasing occurs by the T2 relaxation mechanisms. c) After time τ, a 180°x pulse is applied, flipping the magnetization about the y-axis. While the relative positions of the spins are reversed, their rates of precession remain the same, bringing them back in phase with each other. d) At time 2τ the signal will be refocused. e) The resulting signal evolution.

A set amount of time τ after application of the 90° pulse, a 180° pulse is applied to the sample, and it is allowed to evolve for another period τ. Since the inhomogeneity ΔB0 is essentially constant, the dephasing that occurs during the first period τ is reversed by the second after reorientation of the spins. The result is refocusing of the signal dephased due to ΔB0. Signal lost to true T2 relaxation (i.e. thermal processes), on the other hand, is not recoverable. Figure 2.7e depicts the resulting signal evolution. A measure of T2 can be extracted by fitting an exponential to the signal peaks at times t = 2nτ, where n is a non-negative integer.

2.3 Gradients and Imaging

For many NMR applications, variation in the magnetic field experienced by the spins is an undesired effect that must be mitigated or removed. This very effect is exploited in MRI, achieving spatial localization of signal with the use of gradient magnetic fields. The MRI gradient field is designed to provide a linear change in the local field as a function of position r.
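The exponential fit to the echo peaks is straightforward in practice. The sketch below is a minimal example, assuming numpy and synthetic noise-free echo amplitudes with an illustrative T2 = 100 ms (an assumption, not a value from the text); it recovers T2 from the peaks at t = 2nτ via a log-linear fit:

```python
import numpy as np

# Synthetic echo-peak amplitudes S(t) = S0 * exp(-t / T2) at t = 2*n*tau.
T2_true, S0, tau = 100.0, 1.0, 10.0      # ms; illustrative values
n = np.arange(1, 9)
t = 2 * n * tau
peaks = S0 * np.exp(-t / T2_true)

# Exponential fit via linear regression on the log:
# log S(t) = log S0 - t / T2, so the slope gives -1/T2.
slope, intercept = np.polyfit(t, np.log(peaks), 1)
T2_fit = -1.0 / slope
```

With real data the peaks are noisy, and a weighted or nonlinear least-squares fit is usually preferred over the plain log-linear fit shown here.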
The gradient field BG is always parallel to the static field B0; only its strength varies as a function of r (Figure 2.8).

Figure 2.8: The x- and y-gradient fields provide a field that is in the direction of the static field B0 but has a linearly increasing strength along the respective direction.

The effect of BG(r) is to distribute the Larmor frequency of nuclei as a function of their position r:

\omega(\mathbf{r}) = \omega_0 + \gamma\,\mathbf{r}\cdot\mathbf{B}_G.    (2.12)

This effect is shown in Figure 2.9 for a gradient along x. A collection of spins distributed along the x-axis will all possess the same frequency if the only field present is B0; the resulting FID will contribute to a single peak at the frequency ω0 (Figure 2.9a). With a gradient field BG(x) = xGx applied (Figure 2.9b), the spins experience a different field, and therefore will exhibit a distinct frequency depending on their x-position, ω(x) = γ(B0 + xGx).

Figure 2.9: Achieving spatial localization with gradients. a) Without a gradient, all spins resonate at ω0, producing a single spectral peak S(ω). b) With a gradient field B(x) = B0 + xGx, the resonance frequency varies linearly with position.

The size of the peak will depend on the number of nuclei at a given position x. The resulting frequency spectrum can be decomposed to determine the location of origin for each nucleus contributing to the signal.

By utilizing gradients in every direction (Gx, Gy, Gz) one may encode a unique location in 3D space into the signal frequency. A typical MRI experiment follows 3 encoding steps: slice selection (z-location), phase encoding (y-location) and frequency encoding (x-location).

2.3.1 Selective Excitation

MRI data is frequently acquired as a set of slices using a technique called selective excitation or slice selection. In this case, magnetization in a thin region Δz is tipped into the xy plane to produce signal. With a z-gradient field applied, the spins will possess a range of precession frequencies ω(z) = γ(B0 + Gz z).
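The spectral decomposition described above can be simulated in a few lines. The sketch below is a toy example, assuming numpy and demodulated, dimensionless units chosen so that ω(x) maps one-to-one onto frequency bins (all parameter values are assumptions for illustration): it builds an FID from spins at two x-positions and recovers both positions and spin counts from the spectrum:

```python
import numpy as np

# Two groups of spins at positions x = 10 and x = 40 (arbitrary units);
# after demodulation by w0, each contributes frequency w(x) = gamma*Gx*x.
gamma_Gx = 2 * np.pi / 128.0   # chosen so frequency bins map 1:1 to position
N = 128
t = np.arange(N)
fid = 3.0 * np.exp(1j * gamma_Gx * 10 * t) + 1.0 * np.exp(1j * gamma_Gx * 40 * t)

# The spectrum of the FID localizes the signal: peak positions give x,
# peak heights give the relative number of spins at each x.
spectrum = np.abs(np.fft.fft(fid)) / N
positions = np.argsort(spectrum)[-2:]    # bins of the two largest peaks
```

The two spectral peaks land exactly at bins 10 and 40 with heights 3 and 1, mirroring the positions and populations that generated the FID.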
The magnetization at a desired location can be excited by a B1 pulse with the appropriate frequency ωrf (Figure 2.10).

Figure 2.10: Selective excitation. With the gradient Gz applied, only spins at the location where ω(z) = ωrf are excited by the B1(ωrf) pulse.

The thickness of the slice Δz is determined by the bandwidth of the B1 pulse. Ideally B1 is a sinc pulse, which results in a rectangular excitation response; in this case, all affected spins rotate by the same angle (90°, for example). In practice, however, the excitation is rarely perfect due to apodization of the pulse and spatial non-uniformity of B1. These effects are of some importance in Chapter 5.

2.3.2 Frequency and Phase Encoding

After excitation, the slice selection pulse is turned off and the locations of the excited spins are encoded by applying gradients in the remaining directions (x, y). In a general acquisition, location in one direction (normally x) is encoded during the acquisition of the FID. This step is known as Frequency Encoding (FE), since the information dictating the location of the signal is encoded in the spins' frequency. The y location is encoded in the spins' relative phase in a separate step, hence Phase Encoding (PE). Figure 2.11 demonstrates the technique for encoding in the x (FE) and y (PE) directions.

Figure 2.11: Position along the x-axis is encoded in frequency, while position along the y-axis is encoded in the phase.

The FID will contain frequencies based on the applied x-gradient, where

\omega(x) = \gamma(B_0 + xG_x).    (2.13)

The main difference between FE and PE is that the FE step occurs during the acquisition of signal, while PE occurs in a preparatory step before the FID is collected. An example of a simple pulse sequence showing the timing of slice selection, frequency encoding and phase encoding is shown in Figure 2.12.

Figure 2.12: A simple 2D pulse sequence; the slice selection (Gz) is done during RF excitation. Phase encoding (Gy) follows to allow y localization.
Frequency encoding (Gx) is done during acquisition of the FID.

Considering the standard Cartesian 2D acquisition after slice selection, the gradients applied in the x and y directions encode the position of the local magnetization. During the FID acquisition, the FE gradient Gx imparts a phase of γxGx t at position x and time t. The PE step is normally encoded earlier in the pulse sequence, contributing a phase factor γyGy T that depends on the duration T of the Gy gradient.

2.3.3 k-Space

The concept of k-space, introduced by Ljunggren and Twieg in 1983 [10, 11], is extremely useful in understanding acquisition of MRI data. The information of MRI is acquired by traversing k-space and subsequently transformed to create the image. The pulse sequence prescribes a specific combination of gradients and RF pulses designed to provide the desired spatial encoding. The magnetization induces a time-dependent signal s(t) that is picked up by the detector. s(t) depends on the timing and duration of the pulses and gradients, described by \int_0^t \mathbf{G}(\tau)\cdot\mathbf{r}\,d\tau, and on the spatial distribution of magnetization, M(r). In general,

s(t) = \int M(\mathbf{r})\, e^{-i\left[\omega_0 t + \gamma\int_0^t \mathbf{G}(\tau)\cdot\mathbf{r}\,d\tau\right]}\, d^3r.    (2.14)

To simplify Equation 2.14 we introduce a spatial encoding parameter k:

\mathbf{k}(t) = \gamma\int_0^t \mathbf{G}(\tau)\,d\tau,    (2.15)

and we digitize and demodulate the signal by the resonance frequency ω0 (essentially shifting the frequency spectrum by −ω0). Equation 2.14 can thus be written in terms of the parameter k:

S(\mathbf{k}) = \int M(\mathbf{r})\, e^{-i\,\mathbf{k}\cdot\mathbf{r}}\, d^3r.    (2.16)

S(k) and M(r) constitute a Fourier Transform (FT) pair; S is referred to as k-space and contains the coefficients of the spatial harmonics that compose the image (Figure 2.13).

Figure 2.13: Image space and k-space; a) magnitude image of a brain and b) the corresponding k-space representation.
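The FT relationship between image space and k-space is easy to demonstrate numerically. The sketch below, assuming numpy and a synthetic 2D "magnetization" image, forms the discrete analogue of S(k) with a 2D FFT and recovers the image exactly with the inverse transform:

```python
import numpy as np

# Synthetic "magnetization" image: a bright rectangle on a dark background.
M = np.zeros((64, 64))
M[24:40, 20:44] = 1.0

# Discrete analogue of Equation 2.16: k-space is the 2D Fourier transform
# of the image (fftshift puts the k = 0 sample at the array centre);
# the image is recovered by the inverse 2D FFT.
S = np.fft.fftshift(np.fft.fft2(M))
M_rec = np.fft.ifft2(np.fft.ifftshift(S))

recovery_error = np.max(np.abs(M_rec - M))
```

The central (k = 0) sample of S equals the total image intensity, which is why the centre of k-space dominates image contrast while the periphery encodes fine detail.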
k(t) can be considered the trajectory through k-space that results from the prescribed gradient sequence. Assuming the 2D acquisition from Section 2.3.2, the x and y components are

k_x = \gamma G_x t, \qquad k_y = \gamma G_y T,    (2.17)

and Equation 2.16 becomes

S(k_x, k_y) = \iint M(x, y)\, e^{-i(x k_x + y k_y)}\, dx\, dy.    (2.18)

The distribution of magnetization (and therefore the image) is recovered via 2D inverse FT of the k-space signal.

Due to the nature of the data acquisition (sampling in the spatial frequency domain), the Field of View (FOV) of the MR image is related to the sampling resolution in k-space. The smallest increment in k-space is the inverse of the FOV in that direction:

FOV_x = \frac{1}{\Delta k_x}, \qquad FOV_y = \frac{1}{\Delta k_y}.    (2.19)

Figure 2.14 illustrates the trajectory traced in k-space for a simple gradient sequence.

Figure 2.14: k-space trajectory: gradients Gx and Gy allow the traversal of k-space. Steps 1-4 show the effect of applying gradients to collect the necessary sampling of k-space. 2D IFT of the acquired data produces the image.

There are a number of different trajectories made possible by manipulating the gradients in different ways during the pulse sequence. Some examples of different k-space trajectories are shown in Figure 2.15. Different trajectories may be useful for particular applications but can also result in unique artifacts that must be considered.

Figure 2.15: Various k-space trajectories.

To obtain an image without any aliasing effects, one must have complete k-space coverage. This will be a point of interest in Chapter 3, where we seek to reduce the time taken to collect full k-space data using supplemental information.

Chapter 3

Optimization of Partial Parallel Imaging Techniques

3.1 Introduction to Partial Parallel Imaging

When compared with other medical imaging modalities, a significant limitation for MR scans is the time needed to acquire a clinically relevant image.
For standard Cartesian acquisitions, the time cost of spatial encoding in one direction (x, the FE direction) is effectively nil. In the other directions (y for a 2D acquisition, or y and z for 3D acquisitions), each unique PE position requires a separate excitation, allowing the longitudinal magnetization to recover before subsequent points are acquired. There are a few exceptions, such as Echo-Planar Imaging (EPI) [12], in which the PE direction is fully sampled during a single excitation, but these techniques suffer from other issues, such as geometric distortion, ghosting and limited in-plane resolution [13, 2, 14].

For the high resolution images demanded in some clinical settings, there must be a large number of PE points, resulting in inconvenient or even impractical total scan times. Motion due to breathing, heartbeat and other physiological effects can introduce ghosting artifacts if the scan time is not much faster than the motion. Applications such as cardiac imaging [15] and Functional MRI (f-MRI) [16] require rapid acquisition to obtain images free of motion artifacts. These demands must be weighed against the limitations of the MR system: gradients are limited by maximum amplitude and slew-rate, and high gradient strength and rapid switching can trigger peripheral nerve stimulation in patients [17].

The introduction of Partial Parallel Imaging (PPI) provides a means for surpassing the limitations of using gradients for spatial encoding. PPI exploits the intrinsic spatial variation of receiver coil sensitivities to supplement the signal localization normally obtained with PE steps, thus reducing the amount of time required to obtain full k-space coverage. This time savings may be useful in mitigating artifacts dependent on acquisition time, or simply in increasing MR scanner efficiency to minimize patient discomfort.
In real-time applications, such as cine MR or f-MRI, PPI may also be used to increase temporal resolution.

In a Cartesian acquisition, accelerated scans are achieved by reducing the number of PE steps while maintaining the same outer k-space limits. The degree of sub-sampling is characterized by the reduction factor, R; the k-space data is acquired at every Rth ky position. When R > 1 this violates the Nyquist sampling theorem and the image is aliased. Figure 3.1 shows the effect of uniform sub-sampling with different R values in a Cartesian acquisition.

Figure 3.1: PPI Cartesian sub-sampling. From left to right, the images are reconstructed using variable fractions (1/R) of k-space data. Fully sampled k-space (R = 1) has no aliasing, while larger values result in R aliased copies of the image.

PPI techniques exploit the spatial variation in the relative sensitivity relationships of an array of receiver coils to recover the missing ky lines. A pedagogical approach to understanding PPI includes an understanding of the nature of the surface receiver coil and the introduction of the receiver coil array. The pioneering techniques SENSitivity Encoding (SENSE), SiMultaneous Acquisition of Spatial Harmonics (SMASH), and GeneRalised Auto-calibrating Partial Parallel Acquisition (GRAPPA) provide the foundation for most parallel imaging approaches.

3.1.1 The Receiver Coil Array

Achieving localization using the receiver coils relies on the inherent spatial variation of each coil's sensitivity profile. The MRI signal in Equation 2.16 does not account for the receiver sensitivity, which is in general spatially varying [18, 19]. The sensitivity of the coil, C(x, y), modulates the magnetization, and Equation 2.18 becomes:

S(k_x, k_y) = \iint C(x, y)\, M(x, y)\, e^{-i(x k_x + y k_y)}\, dx\, dy.    (3.1)

The sensitivity is a result of the properties of the receiver coil.
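The aliasing caused by R-fold undersampling can be reproduced in a few lines. The sketch below is a toy 1D example along the PE direction, assuming numpy and R = 2 (the profile and sizes are made up for illustration); it keeps every 2nd k-space sample and confirms that the reconstruction is the sum of the object and a copy shifted by FOV/R:

```python
import numpy as np

# Synthetic 1D object along the PE direction (Ny = 128 "pixels").
Ny, R = 128, 2
rho = np.zeros(Ny)
rho[30:50] = 1.0

k = np.fft.fft(rho)            # fully sampled k-space
k_sub = np.zeros_like(k)
k_sub[::R] = k[::R]            # keep every Rth ky line, zero the rest

# Reconstructing from the subsampled data yields R overlapping copies of
# the object, shifted by FOV/R and scaled by 1/R (Nyquist violation).
aliased = np.fft.ifft(k_sub).real
expected = (rho + np.roll(rho, Ny // R)) / R
```

The same periodization argument holds per column of a 2D acquisition, which is why Figure 3.1 shows R overlapping image copies along the PE direction.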
The reciprocity principle dictates that the sensitivity is related to the magnetic field produced by the coil, which in turn is described by the Biot-Savart Law. The magnetic field B at a displacement r from a constant current element I ds is

\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int_C \frac{I\,d\mathbf{s}\times\mathbf{r}}{|\mathbf{r}|^3}.    (3.2)

For a current loop (coil), Equation 3.2 may be integrated around the path of the loop to determine the net field at a given location. Figure 3.2 shows the strength of the magnetic field typical of a surface coil.

Figure 3.2: The coil sensitivity profile; a surface receiver coil is more sensitive to signal close to the center of the coil.

Surface coils are tailored to measure signal in a specific location and provide higher SNR in a smaller region when compared with a full volume "body coil"; the body coil is sensitive to a larger volume of noise. Body coils generally have more uniform RF transmission and reception profiles than surface coils [59]. In the following sections we explore how PPI takes advantage of the spatially non-uniform nature of surface coils.

3.1.2 SMASH

SiMultaneous Acquisition of Spatial Harmonics (SMASH) [20] uses the spatially variant nature of surface coils to replace the application of PE gradients in a very literal sense. The coil profiles are combined in such a way as to emulate the effect of small PE steps. This approach requires careful arrangement of the coil geometry to facilitate the formation of spatial harmonics across the image FOV.

Consider a coil with a spatial sensitivity mapping C1(x, y). The coil sensitivity modulates the MR signal generated by spin-density ρ(x, y), resulting in the acquired k-space signal

S_1(k_x, k_y) = \iint \rho(x, y)\, C_1(x, y)\, e^{-i(k_x x + k_y y)}\, dx\, dy.    (3.3)

Introducing multiple coils, one may combine the signal from each to create a composite signal

S(k_x, k_y) = \iint \rho(x, y) \sum_j C_j(x, y)\, e^{-i(k_x x + k_y y)}\, dx\, dy,
(3.4)

where j denotes the coil number.

With the appropriate coil array geometry, one can introduce weighting factors, wj, for the individual coil signals to create sinusoidal modulations in the composite signal. First, the wj's are chosen such that the sensitivity profiles combine to match a spatial harmonic of the form

e^{-i m \Delta k_y y} = \sum_j w_j^m\, C_j(x, y),    (3.5)

where Δky = 2π/FOV is the fundamental spatial frequency in the PE direction. The index m is an integer chosen to produce the desired spatial harmonic over the FOV. This step (illustrated in Figure 3.3a for the zeroth and first order spatial harmonics) is done by fitting the coil sensitivity maps to the desired spatial harmonic, and therefore requires accurate knowledge of the coil sensitivity profiles.

Figure 3.3: Signal recovery in SMASH imaging. a) Combining properly weighted sensitivity profiles (dotted lines) from an array of coils produces spatial harmonics (solid lines). b) The k-space signals from each coil, modulated by the weights, are combined to produce a corresponding shift in the composite k-space.

Once the weights are known, they may be used to create a composite signal

S^m(k_x, k_y) = \iint \rho(x, y) \sum_j w_j^m\, C_j(x, y)\, e^{-i(k_x x + k_y y)}\, dx\, dy    (3.6)

possessing a modulation corresponding to the chosen weighting factors. Inserting Equation 3.5 into Equation 3.6:

S^m(k_x, k_y) = \iint \rho(x, y)\, e^{-i m \Delta k_y y}\, e^{-i(k_x x + k_y y)}\, dx\, dy
             = \iint \rho(x, y)\, e^{-i[k_x x + (k_y + m \Delta k_y) y]}\, dx\, dy    (3.7)
             = S(k_x, k_y + m \Delta k_y).

Equation 3.7 elucidates the fact that modulation of the acquired k-space lines by the mth spatial harmonic is equivalent to a shift in k-space by mΔky.

The recovery of missing lines in SMASH is accomplished by weighted combination of the individual coil signals (Figure 3.3b):

\sum_j w_j^m\, S_j(k_x, k_y) = S(k_x, k_y + m \Delta k_y).    (3.8)
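The harmonic-fitting step of Equation 3.5 can be illustrated numerically. The sketch below is a toy 1D example, assuming numpy and made-up Gaussian coil profiles along the PE direction (coil count, widths and spacing are all assumptions); it fits complex weights by least squares so that the combined profiles approximate the first spatial harmonic:

```python
import numpy as np

# Made-up 1D coil sensitivity profiles along y (8 coils spanning Ny = 64).
Ny, Nc = 64, 8
y = np.arange(Ny)
centers = (np.arange(Nc) + 0.5) * Ny / Nc
C = np.exp(-((y[None, :] - centers[:, None]) ** 2) / (2 * 8.0 ** 2))  # (Nc, Ny)

# Fit weights so that sum_j w_j C_j(y) ~ first spatial harmonic (Equation 3.5).
m, dky = 1, 2 * np.pi / Ny
target = np.exp(-1j * m * dky * y)
w, *_ = np.linalg.lstsq(C.T.astype(complex), target, rcond=None)

# The residual measures how well this array geometry emulates the harmonic;
# a poor fit translates directly into errors in the shifted k-space lines.
fit = C.T @ w
residual = np.linalg.norm(fit - target) / np.linalg.norm(target)
```

A linear array of well-overlapped coils keeps the residual small; sparse or poorly placed coils leave a large residual, which is exactly the geometric limitation of SMASH discussed below.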
It should be noted that Equation 3.8 represents an approximate relationship, depending on the ability of the coil array to accurately emulate the required spatial harmonics (Equation 3.5). If one can faithfully represent M different spatial harmonics, a reduction factor of R = M is theoretically possible.

The SMASH reconstruction results in a uniform noise distribution [21], but the ability of the coil geometry to emulate order-m spatial harmonics is crucial to the recovery of missing signal offset by mΔky. This can be severely limiting in practical applications, as the array typically must be arranged in a linear fashion spanning the PE direction. Furthermore, the selection of weighting factors to match the desired spatial harmonics relies on fitting to known coil profiles. This introduces the need for a calibration scan, which requires additional acquisition time, and inaccuracies in fitting can lead to errors in the recovered ky lines.

3.1.3 SENSE

The SENSitivity Encoding (SENSE) reconstruction takes place entirely in the image domain. Reduced k-space data is acquired with multiple coils, and the aliased images are unfolded using the sensitivity maps from each coil. Equation 3.9 allows the pixel-by-pixel reconstruction of an acquisition with a reduction factor R = 2. This is shown schematically in Figure 3.4. The solid pixels in the reduced FOV represent a sum of signal from the local pixel as well as aliased signal from a pixel located FOV/2 away. That is, at the point (x1, y1), there is combined signal from the locations (x1, y1) and (x'1, y'1) = (x1, y1 + FOV/2) in the full FOV. The same situation occurs at the location (x2, y2). However, the contributions from each location vary between the intermediate images acquired from different coils.

The actual signal, ρ(x, y), can be recovered from the aliased images I1(x, y) and

Figure 3.4: Aliased pixels in the SENSE reconstruction.
The hollow pixels are folded to the locations of the solid pixels, a distance FOV/2 away, for R = 2.

I2(x, y), provided the coil sensitivities C1(x, y) and C2(x, y) are known:

I_1(x, y) = C_1(x, y)\,\rho(x, y) + C_1\!\left(x, y + \tfrac{FOV}{2}\right)\rho\!\left(x, y + \tfrac{FOV}{2}\right)
I_2(x, y) = C_2(x, y)\,\rho(x, y) + C_2\!\left(x, y + \tfrac{FOV}{2}\right)\rho\!\left(x, y + \tfrac{FOV}{2}\right).    (3.9)

If the coil sensitivity maps are perfectly accurate, inversion of Equation 3.9 provides an exact reconstruction of the image, ρ(x, y).

The SENSE reconstruction is spatially localized and provides a robust reconstruction with minimal constraints on the physical coil geometry. This makes SENSE the ultimate localized parallel reconstruction technique. However, the nature of the reconstruction results in a different SNR at each pixel location; the numerical condition of the unfolding equation determines the local noise properties. Like SMASH, SENSE requires a calibration scan to provide accurate coil sensitivity maps for the unfolding step. These coil maps are difficult to estimate with a high degree of accuracy in practice.

3.1.4 GRAPPA

To address the shortcomings of the original SMASH and SENSE algorithms, many variations have been formulated. To alleviate the problems of coil geometry dependency, modifications such as Tailored SMASH [22] and Generalised SMASH (gSMASH) [23] were proposed. In Tailored SMASH, the fitting was reformulated to consider additional degrees of freedom, allowing a fit to a spatial harmonic with C(x, y) ≠ 1. This relaxed some of the rigid requirements on the coil profiles and improved reconstructions in non-ideal practical settings. gSMASH went a step further, removing the highly restrictive geometric constraints on the coil array. In this approach, the coil sensitivities C(x, y) are represented by the sum of their Fourier space coefficients.
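The unfolding of Equation 3.9 is a small linear solve at each pixel. The sketch below is a toy 1D example, assuming numpy, R = 2 along y, and made-up Gaussian coil sensitivities (object, sensitivities and sizes are all assumptions); it simulates the two aliased coil images and then unfolds them pixel-by-pixel by inverting the 2×2 sensitivity system:

```python
import numpy as np

Ny, R = 64, 2
y = np.arange(Ny)
rho = np.sin(2 * np.pi * y / Ny) + 2.0       # synthetic 1D object
C1 = np.exp(-((y - 16) ** 2) / 800.0)        # made-up coil sensitivities
C2 = np.exp(-((y - 48) ** 2) / 800.0)

# Simulate the aliased (reduced-FOV) coil images: each pixel is the sum of
# signal from y and y + FOV/2, weighted by the local sensitivity (Eq. 3.9).
h = Ny // R
I1 = C1[:h] * rho[:h] + C1[h:] * rho[h:]
I2 = C2[:h] * rho[:h] + C2[h:] * rho[h:]

# Unfold pixel-by-pixel by inverting the 2x2 sensitivity system.
rho_rec = np.zeros(Ny)
for p in range(h):
    A = np.array([[C1[p], C1[p + h]],
                  [C2[p], C2[p + h]]])
    rho_rec[p], rho_rec[p + h] = np.linalg.solve(A, np.array([I1[p], I2[p]]))
```

The condition number of each local matrix A governs the pixelwise noise amplification (the g-factor behaviour noted below): where the two coil sensitivities are nearly proportional, the solve is ill-conditioned and noise is amplified.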
By using the Fourier space representation of the coil profiles, the intermediate step of fitting to spatial harmonics specific to the image FOV was avoided, and the requirement of a matching coil geometry was relaxed.

The remaining inconvenience of requiring a calibration scan was addressed by AUTO-SMASH [24]. In this approach, information about the coil sensitivity is extracted from relationships between the acquired signal and extra Auto-Calibration Signal (ACS) lines. This information is then used to create a composite k-space signal at the desired ky location, synthesizing the missing PE line. In the AUTO-SMASH approach, the weighting factors for m spatial harmonics are computed with m ACS lines. To improve the calibration and the resulting artifact removal, VD-AUTO-SMASH [25] included a larger ACS region in the fitting procedure.

These advancements ultimately paved the way for GeneRalised Auto-calibrating Partial Parallel Acquisition (GRAPPA). GRAPPA is essentially a more general extension of VD-AUTO-SMASH in which the missing k-space data are generated in a coil-by-coil reconstruction, using a larger subset of ACS lines. This has the benefit of mitigating phase cancellation effects and allowing the final coil images to be combined in some optimal way. Moreover, the ACS lines may be included in the individual coil datasets, further improving the quality of the final reconstructed image.

In the GRAPPA algorithm, missing PE lines for a single coil are estimated using a linear superposition of the surrounding acquired lines from all coils. The recovery of the missing information can be broken down into two main steps: the calibration step, in which the linear weights relating target and source data are calculated using the extra acquired ACS lines, and the synthesis step, in which the missing data is recovered using the fitted weights.
Figure 3.5a shows the typical GRAPPA acquisition coverage and the respective locations for the calibration and synthesis steps.

The linear summation to reconstruct the missing target signal Si at location ky in the ith coil dataset is

S_i(k_y) = \sum_{j=1}^{N_c}\sum_{m=1}^{N_m} w_{ij}^{m}\, S_j\!\left\{k_y + \left[(m - N_m/2)R - 1\right]\Delta k_y\right\},    (3.10)

where j is the source signal coil, Nm is the number of shifted PE lines used in the combination, and R is the reduction factor. There is a unique weighting factor wij^m relating lines for each coil pairing and relative offset (mΔky). The number and position of the source lines relative to the target line is known as the GRAPPA kernel. Figure 3.5b shows the GRAPPA kernel for a 3-coil acquisition with R = 2 and Nm = 4, giving line offsets of −3Δky, −Δky, +Δky and +3Δky. The magenta lines represent the weighting factors linking the target and source data.

Equation 3.10 is commonly written in the more compact matrix form:

S_{\mathrm{tgt}} = G\, S_{\mathrm{src}}.    (3.11)

Stgt and Ssrc represent concatenations of the missing (target) and acquired (source) signal data, respectively. The matrix G contains the weighting factors to be used in the signal combination. While Equation 3.10 represents the computation required to reconstruct a single data point at location ky in the ith coil dataset, the matrix formulation conveniently expresses the relationship between all target and source data.

Figure 3.5: The GRAPPA reconstruction. a) 3-coil, R = 2 undersampled k-space with 2 ACS lines. b) The GRAPPA kernel, with Nc = 3, Nm = 4, resulting in 12 unique weights.

For the calibration step, the acquired ACS lines are inserted as the target signal matrix, Stgt. G may then be determined by solving the system

G \rightarrow \operatorname{argmin}\left(\left\| G\, S_{\mathrm{src}} - S_{\mathrm{ACS}} \right\|_2\right),    (3.12)

where ‖·‖2 represents the L2 norm.
Equation 3.12 represents an over-determined system that may be solved by invoking the Moore-Penrose pseudoinverse (†; see Appendix A for more) of the source data matrix, Ssrc:

G = S_{\mathrm{ACS}}\, S_{\mathrm{src}}^{\dagger}.    (3.13)

The conditioning of Equation 3.13 is further improved by the inclusion of multiple ACS lines from central k-space in the fitting.

Following calibration, the shift-invariant kernel is used to synthesize the missing lines in the undersampled regions of k-space for that coil (green dots in Figure 3.5a). This process is repeated until the missing data in each coil k-space is fully recovered. The final image is created by combining the individual complex coil images Ij(x, y) in a Root Sum-of-Squares (rSoS):

\mathrm{rSoS} = \sqrt{\sum_j I_j(x, y)\, I_j^{*}(x, y)},    (3.14)

where * denotes the complex conjugate. The rSoS is typically used as it provides a robust method for component coil image combination [26].

3.2 rGRAPPA

A number of improvements to the GRAPPA algorithm have been proposed, attempting to minimize reconstruction errors, reduce noise and suppress residual aliasing artifact. Such approaches include regularization techniques [27, 28, 29], iterative reconstruction techniques [30] and filtering [31]. We focus our attention on improving the GRAPPA reconstruction by careful consideration of the calibration and synthesis of missing data in terms of the underlying physical system.

3.2.1 Theory

In standard GRAPPA algorithms, the weighting factors are calculated using information spanning the entire FE range of the image domain. If we consider a weighting factor wij that links PE lines in the ith and jth coils, the weight depends on the local relative relationship between the sensitivities of the two coils. Since typical surface coils have spatially varying sensitivities, a single weight will not be accurate for every location x in the image domain. Figure 3.6 illustrates this fact.
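The calibration and synthesis steps can be sketched compactly. The toy example below, assuming numpy and random complex "coil" data constructed so that an exact linear relation Stgt = G Ssrc holds (coil count, kernel size and fit count are all illustrative assumptions), computes the weights with a pseudoinverse and applies them to new source data, in the spirit of Equations 3.11 to 3.13:

```python
import numpy as np

rng = np.random.default_rng(0)
Nc, Nm, Nfit = 3, 4, 200    # coils, kernel lines, calibration fit locations

# Toy model: target samples are an exact linear combination of source
# samples, Stgt = G_true @ Ssrc (Equation 3.11).
G_true = (rng.standard_normal((Nc, Nc * Nm))
          + 1j * rng.standard_normal((Nc, Nc * Nm)))
S_src_cal = (rng.standard_normal((Nc * Nm, Nfit))
             + 1j * rng.standard_normal((Nc * Nm, Nfit)))
S_acs = G_true @ S_src_cal                 # "ACS" target data

# Calibration: least-squares weights via the pseudoinverse (Equation 3.13).
G = S_acs @ np.linalg.pinv(S_src_cal)

# Synthesis: apply the fitted, shift-invariant weights to new source data
# to recover missing lines (Equation 3.11 again).
S_src_new = (rng.standard_normal((Nc * Nm, 50))
             + 1j * rng.standard_normal((Nc * Nm, 50)))
S_recovered = G @ S_src_new
```

With noise-free data and more fit locations than unknowns, the pseudoinverse recovers the weights exactly; with real, noisy ACS data the fit is only approximate, which motivates the regularization and regional-fitting ideas discussed in this chapter.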
In the standard GRAPPA approach, the resulting weighting factors represent an average weight that provides the lowest least squares error between target and source lines across the entire FOV. It has been shown that allowing the weighting factors to vary as a function of x increases the accuracy of the resulting recovered lines.

Figure 3.6: Sensitivity profile variation in a 2-coil MR experiment.

A number of approaches have accounted for this variation in a few different ways. Including extra kx points in the kernel provides some sensitivity to FE variation of the relative coil profiles [32]. This increases the computation time when inverting Equation 3.11 and, if only a few extra kx lines are used, only low-frequency coil profile variation can be represented. Other approaches have used a set of smoothly varying basis functions to represent the weighting factors as a function of x [33, 34]. These implementations also result in higher computation times and add complexity to the weight calculation.

We have found, given the picture in Figure 3.6, that the best approach is to simply target the GRAPPA fitting in a limited region along the x direction. To facilitate this, the GRAPPA calibration and synthesis steps are considered in hybrid (x, ky) space. In the standard GRAPPA implementation, the entire ky line is considered in the fitting of a single set of weights (Figure 3.7a). The resulting weights are an average across the FOV, and fitting in either k-space or hybrid space will give the same values. The subsequent synthesis step applies the average weighting values wij(m) to all points along x. This is shown schematically in Figure 3.7c.

The rGRAPPA calibration is restricted to data in a smaller region, d (Figure 3.7b), recovering unique weighting factors to be applied to the data in that region. The location-specific weights wij^m(x, ky) are more appropriate for the local coil sensitivity relationships.
The resulting synthesis employs the regionally targeted weights to recover missing data with greater accuracy.

Figure 3.7: a) Standard GRAPPA and b) rGRAPPA calibrations. c) An average weighting value results from the standard GRAPPA calibration fit and is applied to recover all data in the line ky. d) The rGRAPPA implementation, in which the weight values are allowed to vary along x, providing a unique weight for each synthesized region.

The localized fitting produces more accurate weights using fewer source data, and does not require selection of a basis set to model the weight variation (as in [34]). Instead, the optimal weight is found for a given region and used to synthesize the local data, accounting for coil profile variation across the FOV. This is the key to the improved performance of rGRAPPA. The reconstruction employs a sliding window approach: weights are computed in the sliding window region, and overlapping synthesized signal is averaged. This has the combined effect of allowing a more localized representation of the weighting factors and additional noise averaging.

The computation of the local weighting factors wij^m(x, ky) is facilitated by working with the data in hybrid space (x, ky). Truncated ky lines representing a limited range in the x-direction result in a set of Nx weights. Equation 3.10 becomes:

S_i(x, k_y) = \sum_{j=1}^{N_c}\sum_{m=1}^{N_m} w_{ij}^{m}(x)\, S_j\!\left\{x,\, k_y + \left[(m - N_m/2)R - 1\right]\Delta k_y\right\},    (3.15)

where we have introduced the label x to identify the location. Since the data contain noise, a fit targeting a single point x and the corresponding source data can be significantly corrupted. Increasing the region width d of the fitting data results in better conditioning of Equation 3.12. Selecting the optimal width d balances a trade-off between noise and the accuracy of the local weighting factors.
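The sliding-window bookkeeping can be isolated from the reconstruction itself. The sketch below, assuming numpy, abstracts the regional calibrate-and-synthesize step into a stub (a pass-through, purely to keep the example self-contained) so that only the windowing and overlap-averaging logic of rGRAPPA is shown; Nx and d are illustrative values:

```python
import numpy as np

Nx, d = 16, 5          # FE points and region width (illustrative values)
hybrid_line = np.arange(Nx, dtype=float)   # stand-in for one (x, ky) data line

def synthesize_region(segment):
    """Stub for the regional GRAPPA calibrate-and-synthesize step: a real
    implementation would fit local weights from the ACS data inside this
    window and apply them; here it returns the segment unchanged."""
    return segment

# Sliding window: each window of width d gets its own weights; overlapping
# synthesized samples are accumulated and averaged.
recon = np.zeros(Nx)
counts = np.zeros(Nx)
for x0 in range(Nx - d + 1):
    recon[x0:x0 + d] += synthesize_region(hybrid_line[x0:x0 + d])
    counts[x0:x0 + d] += 1
recon /= counts
```

Interior points are synthesized d times and averaged (the extra noise averaging noted above), while points near the edges of the FE range are covered by fewer windows.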
This optimal size depends on the relative coil geometry and the noise characteristics of the acquisition setup.

3.2.2 Method

Simulated rGRAPPA acquisitions were created by selecting every Rth ky line from fully acquired, in vivo, multi-coil datasets. NACS lines in central k-space were kept for the calibration. The accuracy of the data recovery for a given d can be quantified by the Relative Error (RE) between the recovered image I(x, y) and the fully sampled reference image I0(x, y):

\mathrm{RE} = \frac{\sqrt{\sum_{x,y}\left[I(x, y) - I_0(x, y)\right]^2}}{\sqrt{\sum_{x,y} I_0(x, y)^2}}.    (3.16)

In the rGRAPPA reconstruction, I and I0 are rSoS images. The RE was calculated for recovered images over the reconstruction parameter space (d) to identify the optimal region width.

Datasets ABDax1 and ABDax2 are 2 slices from an axial abdominal scan: a 4-coil acquisition on a 1.5T General Electric (GE) scanner, 256 × 256 matrix, TR/TE = 120/2.12 ms, 5 mm slice thickness.

Simulated datasets were also created using images of a Shepp-Logan phantom (Shepp) and a brain (Brain), shown in Figure 3.8.

Figure 3.8: Simulation datasets: a) Shepp image and b) Brain image.

Coil images were sampled from the static pictures using Gaussian sensitivity profiles with random, low-frequency phase modulation. Complex Gaussian white noise was added to each coil image, and the resulting k-space datasets were used to generate the desired undersampled acquisitions for rGRAPPA reconstruction.

Computational Complexity

The calibration and synthesis steps of the GRAPPA algorithm represent operations on large data matrices and account for the bulk of the computational processing. For an implementation with Nc coils, a reduction factor R, a full k-space matrix size of Nx × Ny, Nm kernel positions and NACS ACS lines, we may summarize the order of operations required for each step.

In the calibration step, the Stgt matrix is organized into Nc rows, one for each coil, containing NACS ACS lines.
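Equation 3.16 is a one-liner in practice. The sketch below is a minimal example, assuming numpy and a toy reference image with a uniformly scaled "reconstruction" (both made up for illustration):

```python
import numpy as np

def relative_error(I, I0):
    """Relative Error (Equation 3.16) between a recovered image I and the
    fully sampled reference image I0 (both real-valued, e.g. rSoS images)."""
    return np.sqrt(np.sum((I - I0) ** 2)) / np.sqrt(np.sum(I0 ** 2))

# Toy check: a reconstruction equal to the reference scaled by 1.01
# has RE = 0.01 exactly, independent of the image content.
I0 = np.ones((8, 8))
RE = relative_error(1.01 * I0, I0)
```

Because RE is a single global number, it is insensitive to where in the image the error occurs; localized residual aliasing and uniformly distributed noise of the same energy give the same RE.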
Each line has length d, the size of the region used for reconstruction (for standard GRAPPA, d = Nx, the number of FE points in the acquisition). The SACS matrix has size Nc × (NACS d). The corresponding source data in the signal surrounding the ACS lines make up the matrix Ssrc. Each ACS line is fit to Nm surrounding lines from all Nc coils, resulting in a matrix of size (Nc Nm) × (NACS d).

To determine G, we calculate the pseudoinverse of the matrix Ssrc and multiply by SACS. The source data provide a heavily over-determined system; there are Nc Nm unknowns (the weights), determined by fitting NACS d Nc Nm data points. The number of floating point operations (flops) in the calibration step (Equation 3.13) is

$$\mathrm{flops}_{cal} = N_c^2 N_m N_{ACS} d + 2 N_c^2 N_m^2 N_{ACS} d + N_c^3 N_m^3, \qquad (3.17)$$

in terms of complex multiplications.

For the synthesis, there are $N_s = \mathrm{floor}\{N_y(1 - 1/R)\} - N_{ACS}$ lines to be recovered (floor{·} rounding down to the nearest integer). In this case, the size of Stgt is Nc × (Ns d). The corresponding source data matrix Ssrc will be of size (Nc Nm) × (Ns d), and the number of flops required for synthesis (Equation 3.11) is

$$\mathrm{flops}_{syn} = N_c^2 N_m N_s d. \qquad (3.18)$$

For the standard GRAPPA implementation, d = Nx, the total number of acquired points in the kx direction. In the rGRAPPA calibration, the optimal region width is typically much smaller than the full FOV (d << Nx). However, there are Nx − (d − 1) unique calibration steps in rGRAPPA, resulting in an increase in the number of computations. When d = 1, there is no overlap, and the number of operations is practically equal to the standard GRAPPA case (d = 256).

To contrast these computational complexities, the number of operations for a standard MRI reconstruction is limited to a single 2D Fast Fourier Transform (FFT). The popular Cooley-Tukey FFT algorithm results in N log N operations for a length-N 1D FFT [35]. Extending to 2D requires Nx Ny log(Nx Ny) flops. For an image of size 256 × 256 this translates to 7.3 × 10^5 computations.
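Using the operation counts of Equations 3.17 and 3.18, these totals can be tallied directly. The sketch below assumes the parameters quoted in the text and a natural-logarithm flop count for the FFT; for rGRAPPA, the per-region cost is simply repeated for each of the Nx − (d − 1) window positions.

```python
import numpy as np

def grappa_flops(Nc, Nm, Nacs, R, Ny, d):
    """Complex multiplications for one calibration plus synthesis pass
    (Equations 3.17 and 3.18)."""
    Ns = int(np.floor(Ny * (1 - 1 / R))) - Nacs   # lines to be recovered
    cal = Nc**2 * Nm * Nacs * d + 2 * Nc**2 * Nm**2 * Nacs * d + Nc**3 * Nm**3
    syn = Nc**2 * Nm * Ns * d
    return cal + syn

Nc, Nm, Nacs, R, Nx, Ny = 4, 4, 24, 2, 256, 256

fft = Nx * Ny * np.log(Nx * Ny)                                  # ~7.3e5
grappa = grappa_flops(Nc, Nm, Nacs, R, Ny, d=Nx)                 # ~5.2e6
d = 8                                                            # optimized region width
rgrappa = (Nx - (d - 1)) * grappa_flops(Nc, Nm, Nacs, R, Ny, d)  # ~4.2e7

print(f"FFT {fft:.1e}, GRAPPA {grappa:.1e}, rGRAPPA {rgrappa:.1e}")
```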
A GRAPPA reconstruction (with parameters Nm = 4, Nc = 4 and NACS = 24) registers 5.2 × 10^6 computations, and an optimized d = 8 rGRAPPA implementation 4.2 × 10^7.

3.2.3 Results

The four component coil images for the first dataset (ABDax1) are shown in Figure 3.9.

Figure 3.9: ABDax1 coil images.

The corresponding fully acquired rSoS image is shown in Figure 3.10a. Figure 3.10b shows the aliasing effect after removal of k-space lines to emulate an accelerated acquisition with R = 2 and NACS = 16.

Figure 3.10: Effect of undersampling on abdominal scan ABDax1: a) fully sampled rSoS image, b) aliased rSoS image due to missing PE lines.

First we examine the benefits of spatially varying weighting factors in the rGRAPPA reconstruction over the initial GRAPPA method. The missing k-space lines were synthesized using region widths d = 1 → 256 and the resulting rSoS images were compared to the gold-standard image using full k-space data.

For the ABDax1 dataset, a plot of the RE is shown in Figure 3.11a.

Figure 3.11: Effect of region width on rGRAPPA weights and RE. a) RE of the reconstructed image as a function of the weighting region width d. The optimal d = 7 pixels provides the lowest RE between reconstructions and the fully sampled gold-standard. b) The variation of weighting values along the x direction; GRAPPA d = 256 (orange), d = 7 pixels (red), d = 1 pixel (blue). Note: the x position is taken to be at the center of the calibration region for d = 7.

The results indicate that rGRAPPA provides a better reconstruction than standard GRAPPA for virtually all region widths, d. The optimal d value (providing the lowest RE) is much smaller than the GRAPPA equivalent region width, d = 256. In this case, a small region with d = 7 is optimal for computing accurate weights, suggesting that the local variation of weighting factors is indeed significant.
At the lower end of d values, the RE is seen to increase rapidly (near d = 1, 2 pixels). This is due to noise in the k-space data corrupting the fitting between source and calibration data.

Figure 3.11b shows an example of the variation of the weighting factors. The real part of one of the complex weighting factors, w, is plotted as a function of the x position. The GRAPPA reconstruction computes a single weight across the FE direction (orange line). When only a single column of data (d = 1) is used, the weighting values fluctuate rapidly due to noise (blue line). The optimal result (d = 7) is a compromise between the two extremes (red line).

The optimal rGRAPPA reconstruction provides better aliasing artifact suppression than the standard GRAPPA reconstruction. The resulting rSoS images for ABDax1 are displayed in Figure 3.12. The reduction of aliasing artifact is particularly evident in the difference maps (Figures 3.12b and 3.12d). The difference map for each reconstruction setup is displayed with the same scale for comparison.

Similar behaviour is seen in the simulated datasets. The four component coil images from the simulated dataset Brain are shown in Figure 3.13. The resulting RE curve for the Brain dataset is shown in Figure 3.14. In this case, the optimal region width is lower, at d = 5 pixels. This effect is due in part to the fact that the simulated images have less severe noise. It is possible that the idealized, simulated coil sensitivity profiles also contribute. Compared with the ABDax1 real data case (Figure 3.11a), the Brain image reconstruction at small d values experiences a less dramatic increase in RE.

The Brain image rSoS reconstructions are shown in Figure 3.15. Once again, visible reduction of aliasing artifact is seen in the rGRAPPA rSoS image and difference map.

Figure 3.12: Abdominal scan ABDax1 reconstruction, R = 2, NACS = 16. a) GRAPPA reconstruction and b) difference map;
c) optimal (d = 7) rGRAPPA reconstruction and d) difference map.

Figure 3.13: Brain coil images.

Figure 3.14: RE as a function of region width d (Brain simulation).

The reduction is evident in Figures 3.15c and 3.15d, compared with the standard GRAPPA results (Figures 3.15a and 3.15b, respectively).

Figure 3.15: Brain image simulation reconstruction, R = 2, NACS = 16. a) GRAPPA reconstruction and b) difference map; c) optimal (d = 5) rGRAPPA reconstruction and d) difference map.

With d = 1, the region is too small and noise begins to corrupt the reconstruction. While this does not have terribly adverse effects at low accelerations (R = 2, Figure 3.16), the reconstruction suffers more as we move to higher R values.

Figure 3.16: d = 1 abdominal scan ABDax1 reconstruction, R = 2, NACS = 16. a) rGRAPPA reconstruction and b) difference map.

The ability of rGRAPPA to reduce aliasing artifact is also shown for higher R factors. Figures 3.17 and 3.18 show results for the real dataset ABDax1 with R = 3 and R = 4, respectively. In general, the optimal d for reconstruction did not vary significantly for different R, changing only by a few points.

Figure 3.17: Abdominal scan ABDax1 reconstruction, R = 3, NACS = 24. a) GRAPPA reconstruction and b) difference map; c) optimal (d = 10) rGRAPPA reconstruction and d) difference map.

Figure 3.18: Abdominal scan ABDax1 reconstruction, R = 4, NACS = 24. a) GRAPPA reconstruction and b) difference map; c) optimal (d = 9) rGRAPPA reconstruction and d) difference map.

3.2.4 Discussion

For this study, the RE was chosen as the gold-standard metric for the reconstruction, since the goal of rGRAPPA is to provide a final image that most accurately matches the fully sampled reconstruction.
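Equation 3.16 translates directly into code. A toy sketch with synthetic magnitude images (the image contents here are arbitrary stand-ins, not thesis data):

```python
import numpy as np

def relative_error(I, I0):
    """Relative Error (Equation 3.16) between a recovered image I and the
    fully sampled reference I0; both are magnitude (rSoS) images."""
    return np.sqrt(np.sum((I - I0) ** 2)) / np.sqrt(np.sum(I0 ** 2))

# Toy example: a synthetic reference and a slightly corrupted "reconstruction"
rng = np.random.default_rng(1)
I0 = 1.0 + np.abs(rng.standard_normal((256, 256)))
I = I0 + 0.05 * rng.standard_normal(I0.shape)

print(relative_error(I0, I0))        # 0.0 for a perfect reconstruction
print(relative_error(I, I0) < 0.1)   # True: small residual for mild noise
```

Sweeping this measure over reconstructions for each candidate d and taking the minimizer is the selection rule used throughout this section.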
The optimal region width d is thus determined by the rGRAPPA reconstruction that results in the minimum RE value. In general, the optimal d value was much smaller than the full FOV range (i.e., the standard GRAPPA reconstruction), validating the rGRAPPA framework. The optimal d value seems to increase slightly for higher acceleration factors (Figure 3.17).

For some data and R factors, the RE minimum was found at a relatively large d value, and the general rule d << Nx was not upheld. In such cases, there was also a local minimum at a small region width (Figure 3.19).

Figure 3.19: ABDax1 RE curve displaying two local minima: at ds = 9 pixels and dL = 120 pixels.

Figures 3.20a & 3.20b display the local-minimum, small-region (ds) reconstruction & difference map, respectively. Figures 3.20c & 3.20d display those of the global-minimum, large-region (dL) reconstruction.

Figure 3.20: ABDax1 reconstruction (R = 4, NACS = 24) with local minima in RE. a) ds = 9 rGRAPPA reconstruction and b) difference map; c) dL = 120 rGRAPPA reconstruction and d) difference map.

For ds, the primary contribution to the RE is noise amplification. Conversely, the RE is owed primarily to residual aliasing artifact in the dL reconstruction. In general, noise-like error distributed over the entire FOV is preferable to local residual artifact, since it is less likely to resemble anatomical structure or pathology. In cases where there are two such competing RE minima, the smaller d value is most likely optimal. However, in certain cases the noise may be so great as to render the image diagnostically useless. It is not well understood why the two minima occur in such a way, and this may be an interesting point for future study.

The fact that the optimal d value was normally much smaller than the full FOV suggests that the weight variation occurs on a relatively small scale. As such, hybrid space approaches such as rGRAPPA and others [33, 34] are well suited to
take advantage of this type of spatial variation. A k-space based kernel (as in [32]) would require extensive support to be sensitive to the same scale of variation. In other words, to account for spatial variation on the order of a few points along x, the kernel would require inclusion of a very large number of kx points. This results in extremely large computational costs to provide essentially the same information as the hybrid space calibration techniques.

This tendency to a small optimal d does not seem to depend on the acceleration factor, R. If the number of ACS lines NACS is kept constant, the data used for calibration are virtually identical for any reasonable R, although the kernel fitting relationships in terms of relative (x, ky) positions can vary. Decreasing NACS can compromise the conditioning of the pseudoinverse problem (Equation 3.12). In conditions with poor SNR, this is likely to be more detrimental to rGRAPPA than to the standard implementation, but in our tests the conditioning was typically sufficient to obtain an accurate result, even with smaller d.

The coil geometry and SNR conditions seem to be more important to the determination of the optimal region width than the imaged object. In simulation, the impact of the underlying object on the weighting factors could be tested; using identical coil setups, the Brain and Shepp-Logan reconstructions were optimal with the same d, although the actual RE values were different. It is also worth noting that the rGRAPPA weighting factors depend not only on the coil sensitivity relationships, but also on the underlying subject's contrast information. This is shown in Figure 3.21 and could be reflected in the RE curve variation. In the end, the variation of optimal d found between different R factors for a given subject, as well as between different subjects, was quite low.

In addition to the variation of weighting factors across the FOV, one might very well suppose that the optimal d should also depend on x.
Probing this possibility, we found that there was indeed some variation of d across the FOV, but the variability of the d values was small in regions with reasonable SNR.

Figure 3.21: a) RE curves for two different objects with identical simulated coil profiles; d = 4 is optimal for both (4-coil, R = 2, NACS = 24). b) Real value of one weighting factor along x showing a dependency on the image subject.

Significant changes occurred in regions with very little available signal, where a larger d could allow a dramatic improvement in the conditioning of Equation 3.12. However, since these regions are largely composed of noise, there is little impact on the overall quality of the reconstruction. The additional computational cost required to account for a variable optimal d provided little benefit in terms of the total RE.

The number of flops required for an optimal rGRAPPA reconstruction with d = 8 was found to be roughly one order of magnitude larger than the standard GRAPPA implementation (d = 256). This corresponds to a minimal impact on a modern computing system; on the research system (GNU Octave v3.4.3, Intel i5 2.30 GHz CPU, 3.8 GB RAM) the difference between these computation times was ~0.5 seconds. As such, the rGRAPPA implementation allows reasonable reconstruction times for clinical settings.

A nice feature of the segmented approach of rGRAPPA is that it can be implemented in parallel with other strategies. The High Pass GRAPPA (hpGRAPPA) technique [31], for example, improves the calibration of weighting factors by computing them in a sparse domain (after high-pass filtering). This technique could be combined with rGRAPPA, implementing the high-pass calibration on each window region, taking advantage of sparsity to improve the calibration and synthesis of data therein.
It remains to be seen whether the gains provided by rGRAPPA and hpGRAPPA would combine synergistically or result in diminishing returns.

3.3 Optimization Metrics

Since the RE requires a fully sampled reference scan, it is not a viable metric for a practical PPI application. A number of alternatives have been proposed that allow quantification of various MRI artifacts and do not require a reference image for comparison. Implementations in auto-correction of motion artifacts [36, 37] have applied a variety of such a posteriori quantification metrics. Artifacts due to motion, such as blur and ghosting, are similar in nature to PPI artifact [15]. Using in vivo and simulated data, we compare the RE metric to a number of widely used image quality metrics in selecting the optimal region width, d.

The final reconstruction is a rSoS image composed of bright and dark regions with no remaining phase information. The typical PPI artifact due to Nyquist violation is aliasing or N/R ghosting (see for example Figure 3.1), where N is the number of pixels in the PE FOV and R is the reduction factor. If synthesis of missing lines is not perfect, there will be residual aliasing artifact. Furthermore, noise can be amplified in signal synthesis when the fitting matrix is not well conditioned.

Table 3.1 lists the image fidelity metrics considered for determining the optimal region width in a practical setting (without knowledge of the full k-space data). The performance of each metric is rated by comparing the optimal d value identified by the metric versus the optimal d value provided by the RE.
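Several of the reference-free measures defined in Table 3.1 are one-liners on the magnitude image. The sketch below makes two implementation choices not specified in the text: the entropy is written in the conventional −Σ p log p form, and image boundaries are handled periodically via np.roll.

```python
import numpy as np

def gradient_map(I):
    """Central-difference gradient magnitude G(x, y) (Table 3.1)."""
    Gx = np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)
    Gy = np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)
    return np.sqrt(Gx**2 + Gy**2)

def gradient_energy(I):
    """Normalized gradient energy EG (quadratic in the gradient)."""
    G = gradient_map(I)
    return np.sum((G / np.sum(np.abs(G))) ** 2)

def total_variation(I):
    """TV: linear sum of gradient magnitudes."""
    return np.sum(np.abs(gradient_map(I)))

def entropy(I):
    """Image entropy H over the energy-normalized pixel distribution."""
    p = np.abs(I) / np.sum(np.abs(I))
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Energy concentrated in few pixels gives low entropy; a flat image is maximal
sharp = np.zeros((8, 8)); sharp[4, 4] = 1.0
flat = np.ones((8, 8))
print(entropy(sharp) < entropy(flat))   # True
```

The y-only variants (EGy, TVy, HGy) follow by keeping only the Gy term, and the gradient entropies by feeding `gradient_map(I)` into `entropy`.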
While this study specifically treats the GRAPPA-style PPI reconstruction, the metrics should provide similar behaviour for other Cartesian k-space recovery methods.

The image gradient G(x, y) is used frequently, due to the nature of PPI reconstruction artifact; Cartesian sub-sampling and errors in the regeneration of missing ky lines result in regular ghosting, increasing the number of "edges" in the image. An example image and corresponding gradient map are shown in Figure 3.22.

Figure 3.22: a) A rSoS image I(x, y) and b) corresponding gradient map G(x, y).

A number of the metrics found in Table 3.1 can be computed in slightly different ways; for example, those involving gradients can be computed with only the y-gradient applied (since the reduction of lines is along the PE direction). A metric may also be computed from normalized image or gradient maps. The normalized versions of a metric are given after the standard versions in Table 3.1, where applicable.

Table 3.1: Image quality metrics.

Gradient Energy (EG): $\sum_{xy} G(x,y)^2$; normalized: $\sum_{xy}\left(G(x,y)/\sum_{xy}|G(x,y)|\right)^2$ [37]
Gradient Energy, y-only (EGy): $\sum_{xy} G_y(x,y)^2$; normalized: $\sum_{xy}\left(G_y(x,y)/\sum_{xy}|G_y(x,y)|\right)^2$ [36, 37]
Total Variation (TV): $\sum_{xy} |G(x,y)|$ [39]
Total Variation, y-only (TVy): $\sum_{xy} |G_y(x,y)|$ [36]
Entropy (H): $\sum_{xy} \frac{I(x,y)}{\sum_{xy}|I(x,y)|}\log\frac{I(x,y)}{\sum_{xy}|I(x,y)|}$ [36]
Gradient Entropy (HG): $\sum_{xy} \frac{G(x,y)}{\sum_{xy}|G(x,y)|}\log\frac{G(x,y)}{\sum_{xy}|G(x,y)|}$ [A]
Gradient Entropy, y-only (HGy): $\sum_{xy} \frac{G_y(x,y)}{\sum_{xy}|G_y(x,y)|}\log\frac{G_y(x,y)}{\sum_{xy}|G_y(x,y)|}$ [36]
Autocorrelation (AC2D): $\sum_{ab} P(a,b)$ [36]
Autocorrelation, y-only (ACy): $\sum_{b} P(0,b)$ [36]
Autocorrelation, peak (ACpk): $\sum_{b} P(0,b),\ b = [\pm N_y/R]$ [36]
Differential Energy (DE): $\sum_{n} [S(n) - S_0(n)][S(n) - S_0(n)]^*$ [40, 41, 42]

where $G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$, $G_x(x,y) = I(x+1,y) - I(x-1,y)$, $G_y(x,y) = I(x,y+1) - I(x,y-1)$, and $P(a,b) = \sum_{xy} I(x,y)\,I(x-a,\,y-b)$.

3.3.1 Differential Energy

Most of the image metrics are applied to the final rSoS image. The Differential Energy (DE) metric (Table 3.1) provides a slightly different measure and warrants extra discussion.
This measure was inspired by a study of the GRAPPA algorithm that reinterpreted the application of the matrix G (Equation 3.13) as an operator [43]. The so-called GRAPPA operator, G, exploits the translation invariance of the kernel, shifting data in k-space with repeated applications, much like the ladder operators of quantum mechanics, to produce ky lines shifted by a desired amount.

Consider the R = 2 case shown in Figure 3.23. After one application of G we reconstruct the missing lines. Applying the G operator on the newly synthesized lines creates an estimate of ky lines shifted once more, which happens to land back in the original positions. Thus, after two applications of the G operator, we have recovered estimates of lines in the position of the originally acquired lines.

Figure 3.23: Comparison of recovered lines to acquired reference lines after subsequent applications of the GRAPPA G operator. The solid lines are acquired data (normally acquired or ACS data); the dotted lines are omitted data and the dashed lines are recovered (estimated) via the GRAPPA algorithm. After two applications of G we have recovered the initial acquired signal lines.

Comparison of the synthesized lines S to the originals S0 will provide a measure of the accuracy of the reconstruction. For the R = 2 case, for example, we may compare S ↔ G²S0.

The set of possible comparisons used to calculate DE is shown in Figure 3.24. The case shown is for R = 2, but these comparison classes may be extended to higher R factors.

Figure 3.24: Possible k-space coverage patterns for DE data consistency matching. In a practical acquisition, only the acquired lines S(k) are known. The RE compares final images created using the recovered data as well as lines from the fully sampled k-space. Comparison A matches lines after application of G, but only in the ACS region.
Comparison B matches lines after application of G². C is the sum of A and B, using all available data for comparison.

In simulation, when the "missing" lines that we synthesize are known, we may compute the DE between the lines synthesized by the algorithm and their true values. This DE uses the same source data for comparison as the RE (RE equivalent). In a practical setting, we may not make this comparison, but may complete a comparison on a small subset of this data, which is known in the ACS region. This is comparison A (GACS in Figure 3.24), computing the DE between lines synthesized with a single application of G and the known ACS lines at the corresponding positions.

Comparison B (G² in Figure 3.24) applies G twice to predict the signal at the positions of the originally acquired lines. While providing a comparison over the full region of k-space (rather than being limited to the ACS region), there may be additional errors imparted by compounding effects from the multiple applications of G. Comparison C is the sum of comparisons A and B, comparing all available synthesized data points without using any reference data.

3.3.2 Results

A representative plot for each metric class is shown in Figure 3.25. For the plots in Figure 3.25, the imaging subject was ABDax2, with R = 2 and NACS = 24.

Figure 3.25: Image metrics vs RE.

The image quality metrics were tested on multiple datasets using reconstructions of R = 2, 3, 4. The average disparity between the optimal region width predicted by the RE and the optimal region width predicted by the metric M was compared, that is,

$$\delta d = |d_M - d_{RE}|. \qquad (3.19)$$

The results across ABDax1 and ABDax2 are shown in Figure 3.26a. Simulated datasets (a brain image and Shepp-Logan phantom) with identical simulated coil setups were also tested. The results of these two datasets for R = 2, 3, 4 are shown in Figure 3.26b. The metrics are ordered from best predictor of the optimal d value in terms of the mean distribution of δd for the real data.
The RE and DE_RE measures are shown for comparison, but are not viable metrics since they use data from the fully sampled reference.

Figure 3.26: Metric performance. a) Real data. b) Simulation. The TV measures are omitted due to poor performance.

Amongst the real data assessments, the best performing metric was the (normalized) EGy in terms of predicting the same optimal d as the RE. However, when R > 2, it was found that the global maximum was located at d = 256 (the standard GRAPPA result). In this case, the local maximum was selected for inclusion in the final average (refer to Section 3.2.4). This result could be ensured by limiting the reconstruction parameter space to a fraction of the overall FOV (i.e., d < Nx/3). The results were similar for the total (normalized) EG. Care must be taken when considering the EG maximum as a metric choice.

The Autocorrelation (AC) metrics also performed well, with the AC2D in second place. The sum in ACy restricted the correlation lags to be y-directional and was a reasonable predictor as well. The peak region autocorrelation, ACpk, specifically targeted FOV/R ghosting lags by summing the autocorrelation entries in a small region surrounding the expected peak locations.

The Image Entropy (H) was a good performer across all R values, returning a lower standard deviation amongst all acquisition setups than the AC2D, although a slightly higher δd. The Gradient Entropy (HG) and y-specific Gradient Entropy (HGy) also performed reasonably well.

In terms of the DE measures, the most accurate (after the RE equivalent) was DE_B (= DE_G²). However, this had some large deviations for certain reconstructions. This is likely due to the relative flatness of the curve over a large range of d values (Figure 3.27c).
When the DE_G² sum was restricted to signal outside of the ACS region (making a better analog to the RE equivalent), the result was even poorer performance.

The Total Variation (TV) measures showed some ability to predict the optimal d at R = 2; however, they completely failed to identify a minimum at higher R. The minimum values were at d = 256 for both TV and TVy. For this reason, the TV measures were omitted from the comparisons in Figure 3.26.

The results for the DE measures were more varied; Figure 3.27 shows the various possibilities for the R = 2 case.

Figure 3.27: RE and DE metrics as a function of d for reconstructed images. a) The RE-equivalent comparison. b) Comparison A, recovered lines in the ACS region. c) Comparison B, recovered lines after two applications of G. d) Comparison C, combining A and B.

The RE and DE_RE curves are very similar, which is expected since the source data for comparison are identical. The DE_A (= DE_G¹acs) results in an expected optimal d = 1. This is an interesting feature, suggesting that the optimal reconstruction has a unique weighting factor for each x. However, this is likely due to the fact that it is limited to the ACS region, where the SNR is relatively high.

The DE_B (= DE_G²) predicts a larger optimal region width. In this measurement, the synthesized lines are reconstructed from a second application of the G operator. This does not act on acquired lines, but rather on the lines recovered after the first application of G. Any errors in the first data synthesis may be propagated in the second operation.

The DE_C (= DE_G¹G²) is simply DE_A + DE_B. This provides a comparison between recovered lines and all possible acquired lines. The smaller optimal d values predicted by this metric may be due to the fact that the comparisons in the ACS region contribute the majority of the sum.
It should also be noted that the comparison in the ACS region provides comparisons that are not consistent with the RE, since the acquired ACS lines are inserted prior to the final reconstruction.

3.3.3 Discussion

Initially it was assumed that the Gradient Energy (EG) would provide an ideal optimization metric, having previously found success in deghosting applications [37]. The gradient of an image is generally smaller when there are more uniform signal regions and fewer edges. For the rGRAPPA implementation, errors in missing line regeneration result in increased aliasing, thereby increasing the number of edges in the reconstructed image. Higher noise will also contribute to higher Gradient Energy (EG) values; thus we expect that the ideal final image will have minimum EG.

It was found that there was poor correlation between the RE and the raw EG. This may be due to overall image intensity variation; as the size of an edge increases, its contribution to EG increases. The image intensity variation (Figure 3.28a) can potentially influence the EG calculation (Figure 3.28b).

Figure 3.28: The impact of normalization on metric values. a) The variation of overall image intensity could affect metric outcomes. b) EG calculation without normalization.

The intensity variation for different reconstruction parameters is inherent to the GRAPPA algorithm, since the synthesis of missing k-space lines does not conserve energy. This can increase or decrease the value of EG without necessarily adding more edges (stronger aliasing artifact). This factor may be addressed by using the normalized EG.

Division by the total amount of signal in the gradient maps accounts for the effects of the overall contrast variation, allowing us to compare EG from various different reconstructions on an equal footing. However, the normalized EG (i.e.,
Figure 3.25a) displayed an inverse correlation with the RE curve; the optimal reconstruction has the maximum gradient energy!

It is possible that this is due to the quadratic nature of the EG measure; larger edges in G(x, y) contribute with a much greater impact than smaller ones. An image with a few large edges will have higher energy than one with many smaller ones. Further investigation suggested that the maximum normalized EG may be a reasonable metric for selecting the optimal width in a rGRAPPA reconstruction. Ultimately, the maximum normalized EGy gradient provided one of the best reconstructions in terms of RE. The EG metrics may encounter problems in images with low SNR, as noise levels comparable to signal levels may heavily influence the overall EG, even when normalized.

TV has found significant success in MR reconstruction optimization. This metric has been used extensively in Compressed Sensing (CS) MRI implementations [39, 44] to promote sparsity in certain domains of the image information. It has also been used in conjunction with some PPI and undersampled reconstructions (e.g., [45]). In this study we used the gradient operator as the sparsifying transform, finding the TV of the image gradient G(x, y). We expect the gradient to be sparse, since it selects only the edges and should have small (effectively zero) signal in other regions.

The TV measures are similar to the EG, but are a linear summation of G(x, y) (whereas the EG is quadratic). The TV metric relaxes the preference for large edges compared to the EG measure. For the R = 2 case, the TV measures predicted the optimal d with good accuracy. However, for the majority of reconstructions, both the TV and y-specific (TVy) minima were local. In some reconstructions the lower (ds) minimum did not occur at all. For this reason it is not recommended as a good quantifier for rGRAPPA optimization.
The motion autocorrection study [36] reported average performance of the TV.

The idea of entropy has been applied in many imaging fields to restore noisy and incomplete data. The Maximum Entropy Method [46] is used successfully in many fields (radio and X-ray astronomy, crystallography and geophysics), but is not generally applicable to MRI [47]. However, minimizing entropy has proven useful in some imaging applications [48]; in MR specifically, it has been successfully employed in motion autocorrection [36] and alignment of scan data in SENSE imaging [49].

The entropy measure (H, Table 3.1) is smaller when more of the overall image energy is concentrated in fewer pixels. Minimizing the image entropy favours a reconstruction in which there are more dark (low signal) pixels. Increased aliasing artifacts and noise would generally result in higher image entropy, as more image energy is distributed to a greater number of pixels.

We also found that the entropy of the gradient of the image was useful; a similar argument holds for HG and HGy. The entropy measures possess an implicit normalization to the total energy of the image, the need for which is reflected in the case of the EG. This appears to be crucial in reconstructions in which missing k-space lines are synthesized, since there is no constraint that energy be conserved in the reconstruction. The entropy measures are implicitly less sensitive to the sizes of the edges, favouring images with overall uniformity.

The autocorrelation summations specifically target the aliased ghosts in the sub-optimal reconstructions. The most effective autocorrelation metric incorporated the sum of all lags in both x and y directions, outperforming the other measures targeted specifically at the undersampled direction (ACy) or at the known location of FOV/R sub-sampled ghosts (ACpk).

Unlike the RE, the DE is calculated separately for each coil, directly from the complex data.
The DE provides a unique way to identify the optimal region width (d), unlike the other metrics, which are computed using the rSoS image. A similar measurement was used to help identify optimal reconstruction kernels in a standard GRAPPA implementation [42]. However, in that computation, only the ACS region was used to enforce data consistency, resulting in an assessment of the reconstruction parameters on a limited part of k-space. Further amendments [40, 41] utilized the translation invariance of the reconstruction kernel to extend the comparison to the limits of k-space, similar to the comparisons used in the present study.

The DE can provide a comparison between the complex signal at any image or k-space location. The DE defined in Table 3.1 is invariant under Fourier transformation (a consequence of Parseval's Theorem). As such, it may be used to compare signal in k-space, image space or hybrid space. A particularly attractive feature is that this comparison may be done before final reconstruction, potentially saving some post-processing time by doing "on-the-fly" optimization. The author notes that the EG metrics should, in theory, have similar characteristics.

One of the limitations of the rGRAPPA approach is the lack of predictive power of the metrics studied. Although some of these metrics have been shown to locate the optimal region width d, it is impractical to require a complete set of reconstructions for all possible d values. A potential goal for future work would be to characterize the RE (or other metric) curves in relation to the imaging setup, coil number and geometry, and SNR characteristics to provide a way to predict a priori the optimal region width for reconstruction.

Chapter 4

Extracting Low Spatial Frequency Information from Partial k-space

4.1 Introduction

Many MRI applications seek to exploit the behaviour of spins to encode relative information between two scans.
This can be achieved by preparing the spin system such that it will be sensitive to some quantity of interest. The information may then be extracted by examining the difference between a pair of scans prepared in slightly different manners. A few examples include intensity non-uniformity correction, wherein the sensitivity variation of surface coils may be corrected by comparison with a reference scan, and Phase Contrast (PC) MRI, where information about various physical properties is encoded in the phase of the MR signal.

Uniform intensity is important in quantitative MRI analysis and can be compromised by B1 inhomogeneity. B1 inhomogeneity arises from interactions of the excitation field with dielectric tissue as well as RF attenuation, causing non-uniform intensity in images [56, 57], especially at higher field strengths. Quantitative measures are performed on pixels that are assumed to have similar intensities when representing the same tissue. If the intensity is not uniform, it will corrupt the subsequent measurements [58]. Intensity non-uniformity correction methods compare the intensity between surface coils and a body coil reference image [59], which normally has a much more uniform spatial intensity profile. The surface coil reception sensitivity (B1−) can also be estimated using a reference scan in which a phantom with a well known, uniform spatial distribution of signal is measured prior to the actual imaging object [60].

In PC MRI, the estimation of relative phase maps is crucial to extracting accurate representations of the desired information. Numerous physical properties such as chemical shift effects [50], flow or motion [51], temperature [52], and even tissue elasticity [53] can be encoded in the phase of the MRI signal. B0 inhomogeneity can also be encoded in signal phase [13]. B0 inhomogeneity is a common problem in many MR applications, causing blurring, distortion and loss of contrast [54, 55].
EPI acquisitions are particularly affected [13, 14].

The contrast difference maps in such applications are typically obtained by direct comparison of two fully sampled complex images acquired with different sensitivities to some physical quantity of interest. A direct image space comparison provides poor results in regions of low SNR. The additional scans require undesired extra acquisition time, and if full k-space data is not used, the images will be degraded by truncation artifact. However, when the spatial variation is smooth, as is often the case in such situations [61, 2, 13, 62], it may be represented by a smaller region of k-space.

In this chapter, we present a convolution fitting technique that can provide an accurate representation of the relative information maps using only a fraction of k-space. This approach is unlikely to work in applications where the contrast information is not slowly varying, such as cardiac or Magnetic Resonance Angiography (MRA) phase contrast imaging. However, when the low frequency assumption is valid, the relative contrast may be represented with a compact Fourier expansion, allowing the information about the quantity of interest to be encoded with minimal extra time. We test the efficacy of the convolution fit information map within a framework of PC MRI wherein the contrast information is smoothly varying.

4.2 Theory

The relative information between two images is often retrieved by an image space product operation. In general, we may have two images I1 and I2 that contain a representation of an identical object, with a modulation representing the desired quantity of interest,

I1(x, y) = C(x, y) I2(x, y).    (4.1)

C(x, y) represents the complex image space contrast modulation and should not be confused with the coil sensitivity of Chapter 3.
In k-space, Equation 4.1 may be represented by a convolution:

S1(kx, ky) = h(kx, ky) ⊗ S2(kx, ky)    (4.2)

In our PC MRI example, images I1 and I2 possess a relative phase difference map, defined on r = x x̂ + y ŷ, that may be obtained by conjugate product:

I1(r) = e^{i∆θ(r)} I2(r)    (4.3)

We are interested in determining ∆θ(r), which contains quantitative information needed for analysis or correction, such as B0 field inhomogeneity. The FT convolution theorem states that Equation 4.3 is equivalent to

S1(k) = h(k) ⊗ S2(k)    (4.4)

When the information contained in ∆θ(r) is slowly varying, it may be captured in a convolution kernel h of relatively small size. The kernel is found using a fitting procedure similar to that of GRAPPA [63] and Off-Resonance Artifact with ConvoLution in k-spacE (ORACLE) [61]. h then represents the FT of the image space phase modulation between I1 and I2, namely ∆θ(r).

4.2.1 Mathematical Approach

We digress at this point to discuss the mathematical methods used to determine the kernel h that captures the relative relationship between two k-space datasets, f and g. We begin with the convolution:

f = h ⊗ g,    (4.5)

where f and g contain k-space data from the two acquisitions, and h is the unknown convolution kernel. We seek to represent Equation 4.5 as a matrix equation so that h may be obtained by a least squares approach. We now examine the practical application of the convolution operation as a matrix multiplication for the 1D and 2D cases in detail.

1D Convolution

In the 1D convolution case, our target and source data (f and g) are represented by length-N vectors f and g, respectively:

f = [f1  f2  f3  …  fN]
g = [g1  g2  g3  …  gN]    (4.6)

We wish to determine h, the vector representing the convolution kernel that relates f and g. The convolution operation can be expressed using a Toeplitz matrix (see for example [64]); since we seek the kernel h, we construct the convolution matrix from
the known source data vector g:

G_{1D} =
\begin{bmatrix}
g_1 & g_2 & \cdots & g_N & 0 & 0 & \cdots \\
0 & g_1 & g_2 & \cdots & g_N & 0 & \cdots \\
0 & 0 & g_1 & g_2 & \cdots & g_N & \cdots \\
\vdots & \vdots & \vdots & \ddots & & & \ddots
\end{bmatrix}.    (4.7)

Then we may write Equation 4.5 as f = h G_{1D}:

\begin{bmatrix} 0 & f_1 & f_2 & f_3 & \cdots & f_N & 0 & \cdots \end{bmatrix}
=
\begin{bmatrix} h_1 & h_2 & h_3 & \cdots & h_K \end{bmatrix}
\begin{bmatrix}
g_1 & g_2 & g_3 & \cdots & g_N & 0 & 0 & \cdots \\
0 & g_1 & g_2 & g_3 & \cdots & g_N & 0 & \cdots \\
0 & 0 & g_1 & g_2 & g_3 & \cdots & g_N & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots & & & \ddots
\end{bmatrix}    (4.8)

The number of rows of G_{1D} must match the number of elements in the kernel h. The size of G_{1D} is K × (N + K − 1), and the zero padding in f positions the target vector to the valid region of the convolution.

2D Convolution

The 2D convolution is slightly more involved; f and g are, in general, both M × N matrices with respective entries f_{mn} and g_{mn} (m = 1, …, M; n = 1, …, N). In a Cartesian partial k-space framework, M represents the number of PE lines of the acquired data and M < N. The kernel h is a K × K matrix, where K ≤ M.

Again, we set up the convolution using the entries of the source data g_{mn}. The first step is to determine the row-specific Toeplitz matrix representing a 1D convolution along the mth row:

G_m =
\begin{bmatrix}
g_{m1} & g_{m2} & g_{m3} & \cdots & g_{mN} & 0 & 0 & \cdots \\
0 & g_{m1} & g_{m2} & g_{m3} & \cdots & g_{mN} & 0 & \cdots \\
0 & 0 & g_{m1} & g_{m2} & g_{m3} & \cdots & g_{mN} & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots & & & \ddots
\end{bmatrix}.    (4.9)

The 2D convolution matrix can then be constructed as a block Toeplitz matrix using the G_m:

G_{2D} =
\begin{bmatrix}
G_1 & G_2 & G_3 & \cdots & G_M & 0 & 0 & \cdots \\
0 & G_1 & G_2 & G_3 & \cdots & G_M & 0 & \cdots \\
0 & 0 & G_1 & G_2 & G_3 & \cdots & G_M & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots & & & \ddots
\end{bmatrix}.    (4.10)

The rows of the kernel matrix h are concatenated into a single row vector:

h = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1K} & h_{21} & h_{22} & \cdots & h_{KK} \end{bmatrix}    (4.11)

The target data f is zero-padded to align the valid convolution region, and organized into a single vector of length (M + K − 1)(N + K − 1).
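The 1D construction above can be sketched in plain Python. This is a minimal sketch: `toeplitz_1d` and `conv_full` are hypothetical helper names, and a real implementation would operate on complex k-space arrays rather than the small integer lists used here for illustration.

```python
def toeplitz_1d(g, K):
    """Build the K x (N + K - 1) convolution matrix G_1D of Equation 4.7
    from a length-N source vector g: row i is g shifted right by i
    positions, with zeros elsewhere."""
    N = len(g)
    width = N + K - 1
    rows = []
    for i in range(K):
        row = [0] * width
        row[i:i + N] = g          # place g starting at column i
        rows.append(row)
    return rows

def conv_full(h, g):
    """Full 1D convolution f = h * g realized as f = h G_1D (Eq. 4.8)."""
    G = toeplitz_1d(g, len(h))
    width = len(G[0])
    return [sum(h[i] * G[i][j] for i in range(len(h))) for j in range(width)]

# With a known kernel, f = h G_1D reproduces the direct convolution sum:
f = conv_full([1, 2], [3, 4, 5])
# f == [3, 10, 13, 10]
```

The least squares step (Equation 4.13) then solves the same system in the opposite direction, recovering h from known f and G.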
The convolution is then realized by Equation 4.12:

f = h G_{2D}    (4.12)

Multiplying the target signal vector f by the pseudoinverse G† of the Toeplitz matrix (either 1D or 2D) allows computation of the kernel:

h = f G†    (4.13)

Once the kernel h is known, an inverse FT provides a representation of the low spatial frequency contrast difference between the two images. This system is solved in a manner similar to GRAPPA, seen in Chapter 3; weighting factors are obtained that correspond to relative k-space relationships in a local neighbourhood.

4.2.2 Application to Phase Contrast MRI

We now apply the convolution fitting framework to find ∆θ(r), as alluded to in Equation 4.3. With two fully sampled datasets, the phase difference map ∆θ(r) can be obtained from the argument of a pixel-by-pixel conjugate product, and is represented by the phasor P0:

P0 = e^{i∆θ(r)},  ∆θ(r) = arg[I1(r) I2*(r)]    (4.14)

This is shown in Figure 4.1; the phases of the two sample images are shown on either side and the relative phase difference map (in this case a bilinear ramp) in the centre.

Figure 4.1: Example of phase contrast between two acquisitions.

With only a small sub-sampling of k-space (Figure 4.2a), this "gold-standard" computation of the phase is not available, but reasonable approximations can be made if the phase difference between I1 and I2 is smooth. An example of a fitted kernel is shown in Figure 4.2b; the smoothly varying spatial information is captured in a small region of k-space.

Figure 4.2: Kernel found using truncated k-space. a) f′(k) is a possible subset of the full k-space f(k) from which to take sample data. b) The magnitude of the fitted kernel h, representing low spatial frequency information between two datasets.

The undersampled data can simply be zero padded and Fourier transformed, computing the conjugate product phasor from truncated k-space, Pt(r).
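The gold-standard conjugate product of Equation 4.14 can be sketched with Python's built-in complex arithmetic. The function name is illustrative, and the lists are a 1D stand-in for the 2D images in the text.

```python
import cmath

def phase_difference_map(I1, I2):
    """Gold-standard phase difference (Equation 4.14):
    delta_theta = arg(I1 * conj(I2)), computed pixel by pixel.
    I1, I2 are equal-length lists of complex pixel values."""
    return [cmath.phase(a * b.conjugate()) for a, b in zip(I1, I2)]

# A pure phase modulation I1 = exp(i * dtheta) * I2 is recovered exactly,
# since the conjugate product cancels the common magnitude and phase of I2:
I2 = [1 + 0j, 0.5 + 0.5j, -1j]
dtheta = [0.1, 0.2, 0.3]
I1 = [cmath.exp(1j * t) * p for t, p in zip(dtheta, I2)]
recovered = phase_difference_map(I1, I2)
# recovered ≈ [0.1, 0.2, 0.3]
```

In practice this pixel-by-pixel computation is reliable only where both images have appreciable signal, which motivates the kernel-fitted alternative developed below.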
The estimated phase map from truncated data is ∆θt(r):

Pt(r) = e^{i∆θt(r)},  ∆θt(r) = arg[(FT⁻¹{f′1(k)}) (FT⁻¹{f′2(k)})*].    (4.15)

Alternatively, we may use the convolution fitting technique to determine h and obtain a representation of the phase, ∆θh(r). The estimated phase map from kernel fitting is

Ph(r) = e^{i∆θh(r)} = FT⁻¹{h(k)},    (4.16)

where the kernel h has been zero-padded to the full k-space size.

Since the truncated source data k-spaces f′ and g′ have sharp discontinuities between the acquired and zero padded regions, Pt will suffer from truncation artifact (Figure 4.3a). On the other hand, since the relative kernel information decays quickly to zero for low spatial frequency contrast, the subsequent zero padding does not result in a discontinuity and the resulting phasor Ph is much smoother (Figure 4.3b).

Figure 4.3: Phase mapping from a small fraction of k-space. a) Truncation and zero filling, b) estimation from convolution kernel.

For Cartesian style acquisitions, limiting the span of k-space data in the FE direction has little impact on scan time. In theory, the approach will work for any number of truncated directions. For practical considerations, we assume that the truncation only takes place in the number of ky lines. With this type of data, we may consider doing the convolution fitting only in the ky direction after a 1D IFT, since relative x variation does not modulate signal in the y direction of hybrid (x, ky) space. In this case a 1D kernel may be fit for each position xi and subjected to an IFT to recover the phase difference ∆θ(xi, y).

4.3 Method

We compare the phasors Pt and Ph representing the estimated phase difference maps to the "gold-standard" phasor P0 created from fully sampled k-space. Simulated and real MR data are used to evaluate the effectiveness of 1D and 2D convolution fittings versus the truncated conjugate product.
The simulation consists of a set of k-space data generated from a 256 × 256 Shepp-Logan image and subjected to a known relative phase map. The image was modulated by a random low-frequency phase map Pbgnd(x, y) to emulate background phase, and two images were created:

I1(x, y) = I0(x, y) Pbgnd(x, y)
I2(x, y) = I0(x, y) Pbgnd(x, y) Pd(x, y)    (4.17)

where Pd is the simulated phase difference map between I1 and I2. 1% Gaussian noise was added. The truncated data were selected from the resulting k-space signals S1(kx, ky) and S2(kx, ky) after FT. Both linear and parabolic phase maps Pd were tested.

The real data are from a set of EPI acquisitions obtained for the geometric distortion correction technique Phase Labeling for Additional Coordinate Encoding (PLACE) [2]. The PLACE technique applies a pre-phase PE "blip" to one of two similarly acquired EPI images to capture information about the distortion. The PE blip results in a relative linear phase ramp along the image y-direction. The EPI sequences acquired complex images of a circular water phantom in a 4.7 T vertical bore Bruker scanner, with 2 mm slice thickness, TR/TE = 2000/30 ms and 128 × 128 matrix size. The imparted linear phase difference between the PLACE acquisitions corresponds to low frequency phase information, making it a good candidate for estimation from limited k-space data.

A variety of k-space trajectories, sampled from the available target and source data, were tested to determine the accuracy of the estimated phase maps given the size of the input data, the location in k-space and the convolution kernel size. Both 1D (K × 1) kernels fit in hybrid (x, ky) space and 2D (K × K) kernels fit in pure (kx, ky) space were tested for kernel sizes K = 2 to 32. For the 1D case, a unique kernel is found for each x position, zero-padded and placed into the appropriate position of Ph1D.
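The image-pair construction of Equation 4.17 can be sketched as follows. This is a 1D stand-in for the 2D maps: the function name, the per-pixel complex Gaussian noise model, and the list-based layout are illustrative assumptions, not the thesis code.

```python
import cmath
import random

def simulate_pair(I0, bgnd_phase, diff_phase, noise=0.01):
    """Construct the simulated image pair of Equation 4.17:
    I1 = I0 * P_bgnd and I2 = I0 * P_bgnd * P_d, plus additive noise.
    I0 carries the magnitudes; bgnd_phase and diff_phase carry the
    phases (in radians) of P_bgnd and P_d at each pixel."""
    I1, I2 = [], []
    for i0, tb, td in zip(I0, bgnd_phase, diff_phase):
        pb = cmath.exp(1j * tb)               # background phasor P_bgnd
        pd = cmath.exp(1j * td)               # difference phasor P_d
        n1 = complex(random.gauss(0, noise), random.gauss(0, noise))
        n2 = complex(random.gauss(0, noise), random.gauss(0, noise))
        I1.append(i0 * pb + n1)
        I2.append(i0 * pb * pd + n2)
    return I1, I2

# With no noise, the conjugate product arg(I1 * conj(I2)) cancels P_bgnd
# and leaves only the (negated) phase of P_d:
clean1, clean2 = simulate_pair([1.0, 2.0], [0.5, 0.7], [0.2, 0.3], noise=0.0)
err = [cmath.phase(a * b.conjugate()) for a, b in zip(clean1, clean2)]
# err ≈ [-0.2, -0.3]
```

The cancellation of the shared background phase is exactly what makes the conjugate product a valid gold standard in the comparisons that follow.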
The agreement between the estimated phasors and the gold-standard P0 is described by a refocusing ratio:

RR = \frac{\left| \sum_{x,y} C_0(x, y)\, P^*_{h/t}(x, y) \right|}{\sum_{x,y} \left| C_0(x, y)\, P^*_{h/t}(x, y) \right|},    (4.18)

where C0(x, y) = T(x, y) P0(x, y). To isolate the phase difference information, a threshold mask T(x, y) is used to eliminate contributions to the summation from noise-only regions, where the phase of P0 is totally random. The threshold mask is defined:

T(x, y) = 1 if |I1(x, y)| ≥ δ, and 0 otherwise,    (4.19)

where δ is chosen to be just above the noise level. A similar measure was used to qualify a nonlinear phase correction algorithm [65].

The distribution of phase error in the resulting estimated phasor is shown in a phase difference map:

∆P(x, y) = |C0(x, y) − T(x, y) P_{h/t}(x, y)|.    (4.20)

Computational Complexity

The bulk of the computational cost of this technique lies in the kernel fitting step (Equation 4.13). The size of the convolution matrix G depends on the size of the kernel K and the length of the input data vectors f and g. For the truncated Cartesian acquisitions, f and g represent rectangular M × N portions of k-space (M representing the truncated direction, M < N).

For a 1D kernel, the Toeplitz matrix G_{1D} has size K × (M + K − 1), and the number of flops to compute the kernel is

flops_{1D} = 3K³ + (2M − 1)K² + (M − 1)K.    (4.21)

This 1D kernel fitting must be done at each x location in (x, ky) hybrid space, requiring N such fittings, i.e. N times the cost indicated by Equation 4.21.

For the 2D case, the Toeplitz matrix G_{2D} is much larger, with size K² × (M + K − 1)(N + K − 1). The resulting number of flops for the 2D kernel computation is

flops_{2D} = K⁶ + K²(2K² + 1)(M + K − 1)(N + K − 1).    (4.22)

For typical values (K = 8, N = 256 and M = 32) we have a total of N · flops_{1D} ≈ 1.49 × 10⁶ for the 1D fittings and flops_{2D} ≈ 8.49 × 10⁷.
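The flop counts of Equations 4.21 and 4.22 are easy to tabulate; this sketch simply reproduces the two formulas and the typical values quoted in the text (function names are illustrative).

```python
def flops_1d(K, M):
    """Flop count for a single 1D kernel fit (Equation 4.21)."""
    return 3 * K**3 + (2 * M - 1) * K**2 + (M - 1) * K

def flops_2d(K, M, N):
    """Flop count for the 2D kernel fit (Equation 4.22)."""
    return K**6 + K**2 * (2 * K**2 + 1) * (M + K - 1) * (N + K - 1)

# Typical values from the text: K = 8, M = 32 truncated ky-lines, N = 256.
K, M, N = 8, 32, 256
total_1d = N * flops_1d(K, M)   # one fit per x position in (x, ky) space
total_2d = flops_2d(K, M, N)
# total_1d ≈ 1.49e6, total_2d ≈ 8.49e7
```

The K⁶ term dominates the 2D count as K grows, which is why the 2D fitting becomes disproportionately expensive for larger kernels.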
For larger K, the 2D cost increases quickly, owing to the terms of high degree in K.

4.4 Results

Figure 4.4 shows results for the simulated Shepp-Logan phantom with a parabolic phase difference between successive images. The phasor P0 representing the gold-standard phase map, obtained from direct comparison of the images reconstructed from full k-space, is shown in Figure 4.4a. The parabolic phase difference is visible in regions with appreciable signal (the region highlighted by the threshold mask T(x, y), Figure 4.4e), with noisy phase data everywhere else. The truncated conjugate product approach results in a phasor Pt with heavy truncation artifact (Figure 4.4b). The 1D and 2D kernel fitted phasors (Figure 4.4c and d, respectively) exhibit no truncation artifact and transition smoothly into regions of low SNR.

The phase difference maps for Pt, Ph1D and Ph2D are shown in Figures 4.4f-h, respectively.

Figure 4.4: Shepp-Logan phantom with simulated (parabolic) phase difference map. a) Phase of gold-standard phasor P0, b) phase of truncated conjugate product phasor Pt (32 ky-lines), c) phase of Ph1D (K = 8 kernel, 32 ky-lines), d) phase of Ph2D (K = 8 kernel, 32 ky-lines), e) the threshold mask T(x, y) displaying the signal region, f)-h) difference maps (∆P) between P0 and the estimated phasors shown in b)-d), respectively. Note: the difference maps are restricted to the threshold region T(x, y).

The difference maps show the spatial distribution of phase error of the estimated phasor relative to P0, but only in the regions where the signal is above threshold (i.e. the signal regions in Figure 4.4e). The RR calculations and phase difference maps ∆P(x, y) are computed after thresholding to remove the effect of pure noise regions (seen in the periphery of Figure 4.4a).

The results for the 1D and 2D kernel fittings show similarities in the y direction, and the distribution of error shown in the difference maps (Figure 4.4g and h) is roughly the same.
The 1D fitting appears to be more sensitive to the underlying structure of the imaging object, whereas the 2D fitting is much smoother over the central voids (eyes) of the Shepp-Logan phantom.

Figure 4.5 shows the RR results for the Shepp-Logan phantom simulation. Figure 4.5a displays RR as a function of the kernel size K. The kernels were fit with a sample of 32 central ky lines from the source k-space data. For the parabolic phase map, a kernel size of K = 8 was essentially optimal; larger kernels provided diminishing returns at higher computation cost. At kernel sizes above K = 12, the RR value began to decrease.

Figure 4.5b shows the effect of the source k-space sample size on the RR. The source data was centered around ky = 0 and used to fit a kernel of size K = 8. As the sample data size increases, the fitted phase map has a higher RR. Figure 4.5c shows the RR resulting from sampling k-space data from regions offset from ky = 0. These results indicate that the best results are obtained when the sample is taken from central k-space. The kernel fit maps (both 1D and 2D) produce better results, especially for smaller input sample sizes. There does not appear to be a clear advantage of the 2D kernel over the 1D kernel in terms of RR.

Figure 4.5: Refocusing Ratio (RR) of estimated phase maps (Shepp-Logan) versus: a) kernel size (sample size 32 ky-lines), b) input sample size (kernel size K = 8) and c) offset from central k-space.

Figure 4.6 shows the phase estimation results for the circular water phantom PLACE acquisition.

Figure 4.6: Water phantom PLACE data with linear phase difference map.
a) Phase of gold-standard phasor P0, b) phase of truncated conjugate product phasor Pt (24 ky-lines), c) phase of Ph1D (K = 4 kernel, 24 ky-lines), d) phase of Ph2D (K = 4 kernel, 24 ky-lines), e) the threshold mask T(x, y) displaying the signal region, f)-h) difference maps (∆P) between P0 and the estimated phasors shown in b)-d), respectively. Note: the difference maps are restricted to the threshold region T(x, y).

Figure 4.6e shows the threshold region T(x, y). The geometric distortion of the circle is evident; however, here we are only concerned with predicting the relative phase map (shown in Figure 4.6a).

Once again, the truncated conjugate product Pt results in strong ringing artifact (Figure 4.6b). The kernel fittings, both 1D (Figure 4.6c) and 2D (Figure 4.6d), provide smoother phase map estimates, although some streaking is noticeable in the 1D case. The relative phase contrast in this case is captured effectively by a smaller kernel (K = 4), due to its lower spatial frequency content (Figure 4.7a). There is only a mild improvement in RR of Ph over Pt in this case, possibly due to the simple nature of the phase map and underlying object. The relative smoothness seen in the Ph difference maps (Figure 4.6g and h) is potentially a more desirable effect. As in the simulation example, central k-space is the best candidate for source data (Figure 4.7c).

Figure 4.7: Refocusing Ratio (RR) of estimated phase maps (water phantom) versus: a) kernel size (sample size 24 ky-lines), b) sample size (kernel size K = 4) and c) offset from central k-space. Sample data truncated to central ky-space.

4.5 Discussion

In both simulation and real data, the convolution fittings provided better performance than the Truncated Conjugate Product (TCP) in terms of RR and the smoothness of the resulting phase map. The convergence to high RR was faster for the kernel fittings as the sample size was increased.
The RR should go to exactly 1.0 when the sample size is the full k-space (P0). The kernel fittings also outperformed TCP when utilizing peripheral k-space source data, although in general it is best to use central k-space data due to its SNR advantage.

An interesting feature of the variable kernel size study is that when K reaches a certain point, the RR actually decreases (Figures 4.7a and 4.5a). This is also evident when looking at the resulting phase maps and difference maps for higher K values. For the water phantom, the optimal kernel size was only K = 4. Comparing the results for K = 4 and K = 8, it is apparent that larger kernels result not only in lower RR but also in greater visible error in the estimated phase map and difference map (Figure 4.8).

Figure 4.8: Oversized kernel effects. K = 4 was optimal for a) 1D and c) 2D kernels, difference maps e) and g). K = 8 resulted in a poorer fit for both b) 1D and d) 2D kernels, difference maps f) and h). Results shown for water phantom data with a 24 central ky-line sample size.

A possible explanation for this is that the relative contrast map has been affected by the underlying image structure. If we extend the kernel size to the full FOV, the convolution operation becomes equivalent to the image space multiplication. It is possible that a large enough kernel becomes sensitive to the truncation effects of the reduced k-space limits, but this is not well understood at this point.

For larger kernel and source data sizes, the 1D fitting technique is computationally much faster. For our PC MRI results, this comes at some cost to continuity along the orthogonal direction x, where the 2D fitting appears to generate smoother maps in both directions.
The suitability of 1D or 2D fittings may vary for different applications; if the contrast variation is less smooth, a larger kernel is required and the 1D approach may be preferable to the more computationally intensive 2D approach.

Depending on the nature of the phase distortion, the convolution kernel can effectively determine an accurate phase difference map from a limited amount of k-space data. For certain applications, it could provide acquisition time savings for the extra scans, as full k-space data need not be acquired. This is the acquisition strategy used in "keyhole" imaging [66, 67], where only central k-space is updated and peripheral k-space is sourced from a single reference scan. The initial keyhole techniques were applied in contrast enhanced MR angiography to improve temporal resolution.

The convolution fitting provides a distinct advantage over pixel-by-pixel image space computations in regions of low signal, allowing one to extrapolate the contrast relationship into such regions without corruption by noise. This could be particularly useful for coil sensitivity and inhomogeneity estimation, as well as in quantitative mapping applications where divide-by-zero operations can have catastrophic results.

Chapter 5
Techniques for Through-Plane Resolution Enhancement in 2D Multislice MRI

5.1 Introduction

In many MR applications, high-resolution 3D images are desirable to facilitate effective diagnoses. True 3D MR images using FE (along x) and PE (along y and z) to localize signal in a full volume excitation may be obtained with isotropic, sub-millimeter resolution. However, in many cases this type of 3D acquisition can be difficult. T2- and diffusion-weighted imaging typically require long TR and TE times, which, along with the additional PE dimension, greatly increase the total scan time. In EPI acquisitions, the transverse magnetization may decay completely before the entire 3D k-space is traversed.
Other constraints, such as patient motion, may limit the allowable acquisition time, putting the desired sampling density out of reach. When significant 3D volume coverage is required, the data may be acquired as a set of 2D slices. This 2D multislice data usually suffers lower resolution in the through-plane (z) direction.

The through-plane resolution depends on the slice thickness, usually on the order of a few mm [54]. Although obtaining thinner slices is possible, acquiring them in practice presents a number of complicating factors. First, there is increased demand on the excitation bandwidth; the RF pulse needs a longer duration to achieve the narrower bandwidth capable of producing a thinner slice. Furthermore, with a smaller slice volume there is less overall signal and a subsequent decrease in SNR.

An additional consideration for the slice thickness is its impact on the in-plane resolution due to the partial volume effect. A thick slice is sensitive to signal over a larger distance along z, and unable to distinguish detail variation on a scale smaller than the slice thickness ∆z (Figure 5.1a). As in-plane resolution demands increase, the slice thickness may present a bottleneck in resolving fine detail. The subsequent demand for thinner slices could result in decreased SNR, requiring averaging of multiple acquisitions to produce an image acceptable for clinical use.

Figure 5.1: Partial volume effect. a) A thick slice picks up more signal along z, resulting in blurred structure. b) In-plane detail is better captured by the thin slice.

In this chapter, we present three different approaches to improving the through-plane resolution of 2D multislice MRI data. The techniques are applicable to virtually any 2D multislice sequence. Two require no sequence modification, but provide scalable through-plane resolution gains at the cost of additional acquisitions.
These techniques consider magnitude-only data, although in principle they may be applied to complex data as well; the phase simply provides no utility in achieving the resolution enhancement. The third approach is a novel technique that uses phase information to provide signal localization. It employs an additional slice encoding gradient, requiring only a slight modification to the sequence.

We begin by describing the theory and methods for each technique. The techniques are tested using simulated and real MR data, and the resulting enhanced images are evaluated using measures applied in similar resolution enhancement studies. The strengths and weaknesses of each technique are assessed and discussed.

5.2 Magnitude Based Enhancement

The two magnitude based techniques are inspired by the principle of geometric Super-Resolution (SR), in which multiple lower resolution views of an object are combined to provide an image with higher resolution. SR techniques have been adapted from the field of image processing and applied to radar, astronomical and other types of data.

The basic idea behind SR is to take multiple lower resolution images from unique viewpoints and combine them into an image with increased spatial resolution. The SR algorithm gains additional spatial information from the differences between viewpoints. In the context of the present problem, we wish to improve the resolution of 2D multislice MRI data in the through-plane direction, which is limited by the slice thickness.

The application of multiple-frame SR to stacked 2D multislice MRI data is visualized in Figure 5.2. A number of 2D multislice datasets (henceforth referred to as "slabs") are acquired at positions staggered by a fraction of the slice thickness ∆z (Figure 5.2). Information from the low resolution slabs S1 and S2 is combined by the resolving algorithm.

Figure 5.2: Acquisition of staggered low resolution images for resolution enhancement.
For a 2-fold increase in resolution, two slabs are acquired with a shift of ∆z/2.

The resolving algorithm produces a final image with enhanced resolution, IE. One of the first techniques to improve through-plane resolution in MRI used this acquisition approach [69], employing iterative back-projection [68] as the resolving algorithm.

The problem has also been approached using orthogonal scans, in which each direction is provided with optimal in-plane resolution [70, 71], but this approach can result in errors if the orthogonal views are not accurately registered. To further complicate registration, geometric distortion due to chemical shift occurs primarily in the FE direction, resulting in inconsistent datasets from orthogonal views. Rotational viewpoints have also been attempted [72, 73] with promising results, but are prone to similar confounding issues. Both the orthogonal and rotational approaches require more than two scans at minimum to achieve the proper resolution enhancement. For further reading on the application of SR to the field of MRI, we direct the reader to [74].

We thus focus our approach on obtaining similar slab acquisitions and applying the enhancement in the through-plane direction of the slab. This approach provides multiple intrinsic benefits. Registration is not an issue, since the FE and PE directions are the same for all data sets. The staggered slab acquisition strategy is scalable, allowing one to acquire more slabs to reach the desired resolution. Finally, it requires no sequence modification and may be adapted easily for any 2D multislice MRI data. We now introduce the magnitude based resolution enhancement algorithms based on the staggered slab approach. Both techniques begin with identical acquisitions: multiple shifted slabs interleaved into a blurred image, IB. The first technique, SUper-REsolution by SLIce DEconvolution (SURESLIDE), uses a conditioned deconvolution operator to enhance IB.
The second, Simple Edge Enhancing Refinement (SEER), invokes a simple filtering process to refine the details of IB.

5.2.1 SURESLIDE

The strategy for SURESLIDE is shown in Figure 5.3. We have limited the consideration of data to 1D (the z direction) for simplicity.

Figure 5.3: SURESLIDE approach. The slab voxels represent a summation of the underlying voxels of the higher resolution image IE. Interleaving multiple staggered slabs provides a blurry image that may be interpreted as h ⊗ IE, where h represents a blurring kernel.

Multiple slab acquisitions S1, S2, …, SN, shifted by ∆z/N (an N = 3 staggered slab acquisition is illustrated), are interleaved to form a composite image IB. The image IB is a blurred representation of the underlying structure that we wish to extract (IE).

We can mathematically describe the situation in Figure 5.3; the blurred image IB is found by convolving the underlying image IE with a blur operator h [75]:

IB(z) = h(z) ⊗ IE(z)    (5.1)

where we restrict ourselves to a 1D convolution along the slice direction. For the N = 3 case illustrated, h = (1/3)[1 1 1].

Equation 5.1 can be reformulated as a matrix equation

IB = H IE    (5.2)

where the convolution with h is represented by constructing the appropriate Toeplitz matrix. For a general kernel h = [h1 h2 h3]:

H = \frac{1}{|h|}
\begin{bmatrix}
h_1 & h_2 & h_3 & 0 & 0 & 0 & \cdots \\
0 & h_1 & h_2 & h_3 & 0 & 0 & \cdots \\
0 & 0 & h_1 & h_2 & h_3 & 0 & \cdots \\
0 & 0 & 0 & h_1 & h_2 & h_3 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix}.    (5.3)

If IB is an N × N image, then H will be N × (N + K − 1), where K is the length of the kernel h. The higher resolution image IE may then be determined by inverting H:

IE = H† IB,    (5.4)

where † represents the pseudoinverse. This is a similar approach to that used in Chapter 4, but in this case the kernel is known and is used to construct the Toeplitz matrix representing the convolution.
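The forward model of Figure 5.3 and Equation 5.1 can be demonstrated with a small 1D sketch. `acquire_slab` and `interleave` are hypothetical helper names, and restricting the output to the valid convolution region is an illustrative simplification.

```python
def acquire_slab(IE, offset, N):
    """Simulate one staggered slab: each slab voxel averages N consecutive
    fine-grid voxels of the underlying image IE, starting at `offset`
    (offsets 0, 1, ..., N-1 correspond to shifts of dz/N in Figure 5.3)."""
    return [sum(IE[i:i + N]) / N for i in range(offset, len(IE) - N + 1, N)]

def interleave(slabs):
    """Interleave the staggered slabs into the blurred composite IB."""
    IB = []
    for group in zip(*slabs):   # zip stops at the shortest slab
        IB.extend(group)
    return IB

# N = 3 slabs staggered by dz/3: the interleaved composite equals the
# moving average of IE, i.e. IB = h (x) IE with h = (1/3)[1 1 1]:
IE = [float(v) for v in range(12)]
IB = interleave([acquire_slab(IE, j, 3) for j in range(3)])
```

This makes the deconvolution step concrete: IB is exactly a Toeplitz (moving-average) blur of IE, so recovering IE amounts to inverting that ill-conditioned matrix, as discussed next.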
The banded Toeplitz matrix H in this case is ill-conditioned, and the result H† is very sensitive to noise [76, 77]. In the SURESLIDE technique, this is addressed by applying a row-specific Gaussian smoothing to the deblurring matrix H†. The form of the smoothing function is

    f_s(z, c) = e^{-\frac{(c - z)^2}{2\sigma_s^2}},    (5.5)

where σ_s is the width, and c and z represent the column and row numbers of the entry of H†, respectively. The result is a Gaussian function centered at the desired z location of the output image IE. These details are discussed at greater length in Section 5.6.3.

5.2.2 SEER

The required acquisition for SEER is identical to that of SURESLIDE. The SEER algorithm is outlined in Figure 5.4.

Figure 5.4: SEER approach. The blurred image is considered in terms of low and high spatial frequency components IB = IL + IH. 1) A representation of IL is found by low-pass filtering. 2) IH is found as the difference IB − IL. 3) An enhanced resolution image IE is found by increasing the amount of IH component in IB.

Following collection and arrangement of the slabs, we may consider the blurred image IB to be composed of two parts, the high resolution (IH) and low resolution (IL) content; that is,

    I_B = I_L + I_H.    (5.6)

We may now make an estimate of the lower resolution image component by smoothing IB. This is accomplished with a blurring operator h:

    I_L = I_B \otimes h,    (5.7)

where h is a square kernel, for example. This is analogous to using a low-pass filter. Now we may compute the remaining component

    I_H = I_B - I_L,    (5.8)

and finally create a new image, IE, that has enhanced resolution. This is accomplished by increasing the contribution from IH by a factor w:

    I_E = I_B + w I_H = I_B + w(I_B - I_L).
(5.9)

The filter type (h) and tuning factor w may be determined by optimization or visual inspection.

The SEER algorithm can be seen as an extrapolation process, first moving backwards to determine the difference between lower and higher resolution stages, then applying the difference forward to provide an image with resolution enhanced beyond the initial stage IB.

5.3 Phase Based Enhancement

Whilst the preceding techniques embody the spirit of SR with staggered low resolution slab acquisitions, the final approach is fundamentally different. The technique offers a compromise between full 3D spatial encoding and 2D multislice MRI. SLice ENhancement DividER (SLENDER) is a new framework that employs additional phase encoding to localize signal in the slice direction.

5.3.1 SLENDER

SLENDER uses additional phase encoding to achieve sub-voxel signal localization rather than seeking to deblur or refine lower resolution images. The acquisition still requires N slabs, but they are not spatially staggered. Instead, the acquired data are composed of a normal in-phase slab, S1, and a phase encoded slab, S2, that has a gradient applied along the slice (z) direction (Figure 5.5).

Figure 5.5: The SLENDER approach; the oblong voxel is split into sub-voxels A and B using an in-phase and a slice-phase encoded acquisition.

The z-gradient can be easily applied by altering pre-phase gradient lobes or adding the gradient area separately. This is akin to the PLACE acquisition [2], although in this case the applied gradient is generally larger, causing significant intra-voxel dephasing.

The voxels in the slab acquisitions S1 and S2 at a given location (x, y, z) are
defined as:

    S_1(x, y, z) = [A(x, y, z) + B(x, y, z)] P_0(x, y, z)
    S_2(x, y, z) = [A(x, y, z) + B(x, y, z) e^{i\alpha}] P_0(x, y, z) P_1(z)    (5.10)

where P_0(x, y, z) represents the background phase error, P_1(z) is the local phase accrual due to z-gradient encoding, and α represents the phase angle between the resolved sub-pixel components A and B. P_1(z), by definition, is a phase ramp with a slope b:

    P_1(z) = e^{i(bz + a)},    (5.11)

where an additional phase correction, a, is allowed to compensate for zeroth order phase error (such as z-gradient isocenter offset). A factor of e^{-iα/2} is absorbed into the zeroth order term a to simplify the expression for S2 in Equation 5.10.

Before we can determine A and B, α must be known and P_0 and P_1 must be considered. P_0 is explicitly known from the argument of S1 and should be constant for subsequent scans. The local phase accrual due to P_1 is related to the strength of the encoding gradient and the position of the voxel relative to the z-gradient isocenter. Once P_1 is known, we may directly solve for A and B:

    A = \frac{J_2 - J_1 e^{i\alpha}}{1 - e^{i\alpha}}, \qquad B = \frac{J_1 - J_2}{1 - e^{i\alpha}},    (5.12)

where J_1 = S_1 P_0^* and J_2 = S_2 P_1^* P_0^* (note: we could also solve this system using J_1 = S_1 and J_2 = S_2 P_1^*, as the effects of P_0 are common to both). The final image IE is created by interleaving A and B.

Rotational Differential Field

To determine P_1 we employ a Rotational Differential Field (RDF) measurement [65]. The phase difference between one slice and the next in the phase encoded image S2 represents a complex rotation of the underlying signal. This information can be extracted by looking at the RDF ∆_1 = S_2(z)^* S_2(z + 1); ∆_1 is the 1st order RDF (denoted by the subscript 1). Substituting Equation 5.10 into the RDF, we have

    \Delta_1 = [A(z) + B(z) e^{-i\alpha}][A(z + 1) + B(z + 1) e^{i\alpha}] P_1^*(z) P_1(z + 1)
             = [A(z)A(z + 1) + B(z)A(z + 1) e^{-i\alpha} + A(z)B(z + 1) e^{i\alpha} + B(z)B(z + 1)] P_1^*(z) P_1(z + 1).
(5.13)

While the phase of the term in square brackets depends on the angle α and the relative sizes of A and B at the positions z and z + 1, when summing over all voxels the total phase contribution averages to zero. Taking the statistical sum ⟨S_2^*(z) S_2(z + 1)⟩, we are left with the important complex factor

    \Delta_1 = \sum_z P_1^*(z) P_1(z + 1) = \sum_z e^{-i(bz + a)} e^{i[b(z + 1) + a]} = e^{ib}.    (5.14)

Once b is known, it may be used to compute the zeroth order correction a; since P_1(z) = e^{i(bz + a)}, we have

    \sum_z P_1(z) e^{-ibz} = e^{ia}.    (5.15)

The phase of the RDF sum and the subsequent zeroth order correction are related to the slope and the offset of the phase ramp (Equation 5.11) across the slices.

One must be careful when interpreting the argument of the RDF phasor as the slope. The RDF provides a statistical average of the phase difference between adjacent voxels with −π < b < π. Since there is a definite possibility of phase wrap, there is ambiguity in the argument b modulo 2π. When the applied gradient is small enough, 0 < b < π and b represents (approximately) the slope of the phase ramp along z. When b < 0, it indicates that the slope is negative, contrary to the setup of the experiment; the true slope is then likely b + 2π. It is prudent to introduce a new variable β to represent the true underlying phase ramp (or phase rotation) behind the phasor argument b.

The accuracy of the phase difference determined by the ∆_1 RDF can be improved by using larger shifts n > 1 [65]. In this case, the solution of the n-pixel shift RDF will result in a higher order phasor:

    \Delta_n = \sum_z P_1^*(z) P_1(z + n) = e^{inb}.    (5.16)

We desire to find b from ∆_n, but there are n complex roots that represent a phase rotation, each of which provides the correct phase difference from z to z + n when applied n times. However, only one of these roots represents the true phase rotation and underlying phase ramp. To determine which is correct, we find another RDF for a shift m ≠ n.
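The RDF slope estimate and its coprime-shift disambiguation can be sketched as follows (a noise-free pure-ramp demo; `estimate_slope` and the chord-distance root matching are our illustrative choices, not the thesis implementation):

```python
import numpy as np

def rdf(s, n):
    """n-th order RDF sum over the slice direction: sum_z S*(z) S(z+n)."""
    return np.sum(np.conj(s[:-n]) * s[n:])

def estimate_slope(s, n, m):
    """Resolve the phase-ramp slope b from two RDF shifts n and m.
    Delta_k = e^{ikb} pins b only modulo 2*pi/k; the candidate roots
    angle(Delta_k)/k + 2*pi*j/k of the two shifts share (up to noise)
    exactly one value when n and m are coprime - the true rotation."""
    roots_n = [(np.angle(rdf(s, n)) + 2 * np.pi * j) / n for j in range(n)]
    roots_m = [(np.angle(rdf(s, m)) + 2 * np.pi * j) / m for j in range(m)]
    # pick the closest pair of candidates on the unit circle
    _, b = min((abs(np.exp(1j * rn) - np.exp(1j * rm)), rn)
               for rn in roots_n for rm in roots_m)
    return b

# Demo: pure ramp P1(z) = exp(i(b z + a)) with b = 3.5 rad/voxel, a = 0.3.
z = np.arange(64)
s2 = np.exp(1j * (3.5 * z + 0.3))
b_est = estimate_slope(s2, 4, 7)
a_est = np.angle(np.sum(s2 * np.exp(-1j * b_est * z)))  # Eq. 5.15
```

With b = 3.5 > π the first-order RDF alone returns the wrapped value b − 2π (the sign ambiguity described above), whereas the coprime pair n = 4, m = 7 recovers the true rotation.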
If m and n are relatively prime, they will share only one root that represents the correct phase rotation from z to z + 1. Figure 5.6 shows the complex roots for the ∆_1, ∆_4 and ∆_7 RDF phasors.

Figure 5.6: RDF roots. The roots for n = 4 (blue) and n = 7 (magenta). The common root provides the correct value of b. The n = 1 root (black) is less accurate.

The relatively prime solutions will share one common root from which we may extract the correct value of b. Taking the n = 7 root as the RDF b value provides a more accurate representation of P_1.

5.4 Qualification of Enhanced Images

The effectiveness of the enhancement strategy is assessed by examining the ability to improve resolution, and the accuracy of the reconstruction in terms of noise propagation. The SNR efficiency is a measure used to determine how much signal is gained versus the additional time required for the extra slab acquisitions.

Edge Width

The resolution gain may be characterized by examining the sharpness of the edges. Past SR studies have used an edge width measurement [69, 74] to quantify the degree of resolution gain. The edge width is determined by fitting a sigmoid function to the magnitude image along the direction of the resolution enhancement. The edge width function is

    f(z) = \frac{1}{1 + e^{-a(z - z_0)}},    (5.17)

where z_0 is the relative edge location and a represents the steepness of the edge. The "width" of the edge, w_e, is then related to the parameter a and may be considered as the amount of run required to see a rise from 10-90% of the peak:

    w_e = \frac{4.4}{a R_S},    (5.18)

where R_S is a resolution scale factor to account for the relative size of pixels between images at different resolutions.

SNR

When utilizing additional acquisitions, SNR efficiency η_{SNR} is an important consideration. SNR efficiency is defined as the ratio between the resulting image SNR and the square root of the data acquisition time, τ:

    \eta_{SNR} = \frac{SNR}{\sqrt{\tau}}.    (5.19)

SNR is commonly defined as the mean of pixel values in a region of relatively uniform (and strong) signal, divided by the standard deviation of pixel values in a region of signal void (pure noise). For the purposes of this study this definition will suffice. A previous study demonstrated that the SNR efficiencies for 3D and 2D multislice acquisitions are roughly equivalent [78]. We compute the SNR of the magnitude enhanced images solely to provide an additional measure for the relative performance of the proposed enhancement techniques.

5.5 Method

Simulation

Figure 5.7: Full resolution reference images. a) Brain image for simulation (256 × 256). b) Box image for simulation (256 × 256). c) Sphere image with lesion for simulation (256 × 256).

2D multislice staggered (spatial and PE) slab acquisitions were simulated from a 256 × 256 magnitude only brain image (Figure 5.7a). The simulation is intended to show a proof of concept and the efficacy of each technique. The slice direction is along the vertical and the slice thickness was chosen to be ∆z = 8 pixels. Simulated acquisitions for 2- and 4-fold (N = 2 and N = 4, respectively) resolution enhancements were created.

A simple square phantom (Figure 5.7b) was employed to study the resolution enhancement and edge width effects for each technique. The edge width measurement requires a transition from signal void to high SNR with spatial uniformity on either side of the edge. The properties of the square phantom facilitate an isolated study of the edge width.

A spherical phantom (Figure 5.7c) was also constructed to provide a more realistic structure (such as the top of the head), and a signal void region was placed in the sphere to simulate the effect of a lesion.
The lesion was sized to be approximatelyhalf of the slice thickness (4× 4× 4 voxels, 8-fold rebinning to slice). The ability of96CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTeach technique to resolve the lesion was investigated.For SEER and SURESLIDE acquisitions, N slabs staggered by δ = 8/N pixelswere simulated. A downsampled image IDS was obtained by rebinning signal fromI0 into 8/N pixel bins to provide a comparison for the resulting enhanced image IE.For the SLENDER simulations only the N = 2 case was treated. The additional slabwas not spatially staggered but acquired with an additional linear phase encoding inthe slice direction. A low-frequency random phase map was applied to an upsampledinitial image prior to slab creation to simulate realistic P0 and P1 effects.Real DataI0(a)Figure 5.8: Full resolution reference image; GE Resolution phantom (96× 256).A General Electric (GE) geometric daily quality assurance phantom (sagittal crosssection; Figure 5.8a) was imaged to determine resolution enhancement with real MRdata. The GE phantom also lends itself to the edge width analysis since it possesseswell defined edges surrounded by relative signal uniformity. Staggered slab acqui-sitions were obtained on a 3T GE scanner using a 2D axial multislice Fast SPoiledGRadient (FSPGR) sequence; TR/TE = 8.2/3.4ms, 256 × 256 matrix size, 1mm2 in-plane resolution, 4mm slice thickness. 24 slices were acquired at one position and asecond set of 24, shifted by ∆z/2 = 2mm for a total of N = 2 slab acquisitions.97CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTA full 3D FSPGR acquisition with similar imaging parameters was obtained asa gold-standard for comparison. The 3D acquisition has 256 × 256 × 96 matrix sizewith 1mm3 isotropic resolution. For each approach, the optimal enhanced image wasselected by visual inspection, as well as consideration of resulting edge width andSNR measurements. 
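The two quality measures of Section 5.4 can be sketched directly. A hedged sketch: the thesis fits the sigmoid of Equation 5.17 by least squares, whereas for brevity this version reads the 10-90% crossings off the profile, which coincides with w_e = 4.4/(a R_S) for a clean logistic edge:

```python
import numpy as np

def edge_width(profile, rs=1.0):
    """10-90% rise width of a single rising edge in a 1D profile, found
    by linear interpolation of the crossing positions; for a logistic
    edge of steepness a this equals 4.4 / (a * rs) (Eq. 5.18)."""
    lo = profile.min() + 0.1 * (profile.max() - profile.min())
    hi = profile.min() + 0.9 * (profile.max() - profile.min())
    z = np.arange(len(profile), dtype=float)
    return (np.interp(hi, profile, z) - np.interp(lo, profile, z)) / rs

def snr_efficiency(signal_roi, noise_roi, tau):
    """SNR = mean(signal ROI) / std(noise ROI); eta = SNR / sqrt(tau) (Eq. 5.19)."""
    return (np.mean(signal_roi) / np.std(noise_roi)) / np.sqrt(tau)

# Demo: logistic edge with steepness a = 2 -> w_e = 4.4 / 2 = 2.2 pixels
# (discrete sampling inflates the measured value slightly).
z = np.arange(64, dtype=float)
profile = 0.1 + 1.0 / (1.0 + np.exp(-2.0 * (z - 32.0)))
```

Note that `np.interp` requires a monotonically rising profile across the edge, which matches the uniform-to-void interfaces selected for the fittings.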
The regions used to calculate SNR and we are highlighted overthe full resolution images in Figure 5.9.(a) (b)(c) (d)Figure 5.9: Positions of SNR and Edge Width measurements; SNR a) Simulated Box andb) GE phantom; Edge width c) Simulated Box and d) GE phantom.98CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTComputational ComplexityFor the three techniques, we begin with a set of N slabs, consisting of Nx ×Ny ×Nzvoxels, where Nx and Ny represent the in-plane matrix size and Nz the number ofslices.SURESLIDE relies on solution of a linear least squares system via matrix pseu-doinverse, much like the convolution fitting in Chapter 4. The main difference is thekernel operator is used to form the Toeplitz matrix. The convolution kernel will beof length N , resulting in matrix H of size Nz × (Nz + N − 1). The computationof H† requires N3z + 2N2z (Nz + N − 1) operations and occurs only once. The de-blurring matrix is subsequently applied to the blurred image volume IB resulting inNxNyNz(Nz +N − 1) operations. The total operational cost isflopsSURE = N3z + 2N2z (Nz +N − 1) +NxNyNz(Nz +N − 1). (5.20)The smoothing to generate HS† incurs additional operations, in general N2z , althoughthis can be significantly less if a small smoothing width is used; entries outside thesmoothing window may be simply set to 0.In SEER the convolution has NxNyNz(Nz + N − 1) operations, and subsequentmatrix subtraction and scaled addition, each have NxNyNz operations. This resultsinflopsSEER = NxNyNz(Nz +N − 1) + 2NxNyNz. (5.21)SLENDER involves a number of steps to determine the final sub-voxel compo-nent matrices A and B. To determine the correct value of α (and the global phaseslope b for P1 removal) we employ the RDF. Prior to this, the background phasemust be removed from each slab acquisition, requiring 2NxNyNz operations. TheRDF calculation incurs a number of shifts and complex matrix subtractions. 
If wecompute 2 shifts to find the relative prime root factor that provides α we require an-99CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTother 2NxNyNz operations. Following this there is the removal of P1 from the phaseencoded slab S2, resulting in NxNyNz complex multiplications. Finally, the solutionto Equation 5.12 may be found, resulting in another 2NxNyNz operations. The totaloperations for the SLENDER algorithm isflopsSLENDER = 7NxNyNz. (5.22)For comparison in this case we consider a 3D FFT algorithm withflops3DFFT = NxNyNNz log(NxNyNNz) (5.23)computations (we have assumed the number of points in the z direction is NNz tocorrespond with the number of points in the interleaved slabs). If Nz < Nx, Ny, whichis typical for 2D multislice acquisitions, the most demanding of the three reconstruc-tions in terms of operations is SLENDER. Taking Nx = Ny = 256, Nz = 64 andN = 2 the SLENDER result requires 2.9 × 107 flops, an order of magnitude lowerthan the amount for the 3D FFT with equivalent size (1.3× 108).5.6 Results5.6.1 SimulationFigure 5.10 shows results for the simulated brain image for N = 2.100CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTI0(a)IDS(b)IB(c)SEER(d)SURESLIDE(e)SLENDER(f)Figure 5.10: Brain image simulation N = 2 enhancement results. a) Full resolution I0;(256 × 256). b) Downsampled IDS ; c) blurred interleaved image IB; d) SEER IE ; e)SURESLIDE IE and f) SLENDER IE . b)-f) are 64× 256.To provide a comparison for the N = 2 enhancement, the full resolution image I0(Figure 5.10a) was rebinned into a downsampled image IDS with z resolution on thesame scale as the enhanced images. In the case of the N = 2 brain image simulationthis is a downsample factor of 4 between I0 and IDS.The 2-fold enhancement provides limited improvement. The zoomed plots shownin Figure 5.11 show some variation in the enhancement technique results.101CHAPTER 5. 
THROUGH-PLANE RESOLUTION ENHANCEMENTI0(a)IDS(b)IB(c)SEER(d)SURESLIDE(e)SLENDER(f)Figure 5.11: Brain image simulation N = 2 enhancement results (zoomed). a) Full resolu-tion I0; (256 × 256). b) Downsampled IDS ; c) blurred interleaved image IB; d) SEER IE ;e) SURESLIDE IE and f) SLENDER IE . b)-f) are 64× 256.Visual inspection of the images indicate that each method is capable of providingmodest resolution gains. The white arrows highlight a small amount of contrastrecovery in IEs (Figure 5.11d-f), compared with IB (Figure 5.11c). The edge fittingin the N = 2 case was difficult for the brain phantom, as suitable edge interfaces werenot present. The top and bottom edges where uniform signal is found do not extendfor an adequate number of pixels to obtain accurate fittings.Figure 5.12 shows resolution enhancement on the brain images for the N = 4 case(excluding SLENDER results).102CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTIDS(a)IB(b)SEER(c)SURESLIDE(d)Figure 5.12: Brain image simulation N = 4 enhancement results. a) Downsampled IDS ;b) blurred interleaved IB; c) SEER and d) SURESLIDE IE . All images 128× 256.Here the difference between SEER and SURESLIDE enhancements is more appar-ent. The SEER image suffers from less noise, while the SURESLIDE image appearsto do better at returning original high resolution information.The arrows in the zoomed plot (Figure 5.13) show the recovery of features thatare present in IDS in the SURESLIDE image. These features were not recovered inthe final SEER image.103CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTIDS(a)IB(b)SEER(c)SURESLIDE(d)Figure 5.13: Brain image simulation N = 4 enhancement results (zoomed). a) Down-sampled IDS ; b) blurred interleaved IB; c) SEER and d) SURESLIDE IE . All images128× 256.Results for the box phantom are shown in Figure 5.14 for N = 2. The resolutionenhancement effects between IB (Figure 5.14c) and the resulting enhanced images IE(Figures 5.14d-f) are once again difficult to see. 
However, the edge width calculations were well suited to the geometry of the phantom and allow a more quantitative investigation of the resolution gains.

Figure 5.14: Simulated box image N = 2 enhancement results. a) Full resolution I0 (256 × 256). b) Downsampled IDS. c) Blurred interleaved image IB. d) SEER IE. e) SURESLIDE IE. f) SLENDER IE. b)-f) are 64 × 256.

The resulting edge widths and SNR measures for the enhanced images are summarized in Table 5.1.

                      I0     IDS    IB     IE (SEER)  IE (SURESLIDE)  IE (SLENDER)
    Edge Width [pix]  0.36   1.15   7.82   6.03       1.25            3.43
    Improvement [%]   95.4   85.3   0.0    22.9       84.0            56.1
    SNR               –      –      44.1   37.4       27.9            46.3

Table 5.1: Summary of results for box image N = 2. The edge width measurements are given in terms of pixels in the high resolution image I0.

The SNR is calculated as described in Section 5.4 and reported for the initial blurry image, IB, and the final enhanced images, IE. The edge widths are computed via Equations 5.17 and 5.18. The edge width displayed in the table represents an average of 8 different edge width fittings from suitable interfaces in the image (see for example Figure 5.9c). Figure 5.15 shows a comparison of such fittings for the high resolution image, blurred image and a SEER enhanced image.

Figure 5.15: Fittings to determine edge widths. Data from the box phantom image; high resolution image I0, N = 2 blurred image IB and SEER enhanced image IE.

While all methods improve on the resolution of the interleaved image IB, they fall short of matching the resolution of the comparable reference image (IDS). SURESLIDE provides the best resolution gain in this case, but suffers greatly from noise contamination. On the other hand, SEER results in better SNR but does not offer greatly improved resolution.
The SLENDER technique provides a more balanced re-sult offering a 2-fold increase of resolution over the initial interleave image, and thehighest SNR efficiency of the enhancement techniques. Each of the techniques resultsin a slightly different artifact to be discussed in detail in Section 5.6.3.The spherical phantom with added lesion results are shown in Figure 5.16 (zoomedin for better visualization). The lesion can be seen clearly in the full resolution I0(Figure 5.16a) and downsampled IDS (Figure 5.16b) images. In Figure 5.16c, theeffect of a lesion with a size on the order of half a slice thickness is shown in the106CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENT(a) (b) (c)(d) (e) (f)Figure 5.16: Lesion sphere phantom N = 2 enhancement results (zoomed). a) Full resolu-tion I0; (256 × 256). b) Downsampled IDS ; c) blurred interleaved image IB; d) SEER IE ;e) SURESLIDE IE and f) SLENDER IE . b)-f) are 64× 256.blurred interleave image IB. The signal void affects all slices that overlap its position,and each slice will have some signal picked up from the surrounding area. As aresult, the “lesion” appears with less sharpness and contrast. While the SEER andSURESLIDE enhancements (Figures 5.16d and e, respectively) provide some recoveryof contrast, the clear winner in the lesion test is SLENDER. The signal void has beenrecovered to match the original size shown in IDS. There is only a slight difference incontrast due to the loss of SNR compared with the original image.107CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENT5.6.2 Real DataFigure 5.17 displays the results of the application of each technique to the GE reso-lution phantom data.I0(a)IDS(b)IB(c)SEER(d)SURESLIDE(e)SLENDER(f)Figure 5.17: GE phantom N = 2 enhancement results. a) Full resolution I0; (96×256). b)Downsampled IDS ; c) blurred interleaved image IB; d) SEER IE ; e) SURESLIDE IE andf) SLENDER IE . 
b)-f) are 48 × 256.

In this case, the downsampled image IDS represents a 2-fold rebinning of the high resolution image I0. The SLENDER result was subjected to an additional tuning of the relative phase angle α to improve signal localization and reduce noise (see Section 5.6.3). The zoomed plots are shown in Figure 5.18. Once again the SURESLIDE and SLENDER enhancements provide slightly better resolution than SEER.

Figure 5.18: GE phantom N = 2 enhancement results (zoomed). a) Full resolution I0 (96 × 256). b) Downsampled IDS. c) Blurred interleaved image IB. d) SEER IE. e) SURESLIDE IE. f) SLENDER IE. b)-f) are 48 × 256.

This is particularly noticeable in the restoration of the sharp edges at the top and bottom of the signal void (arrows, Figure 5.18c compared with Figures 5.18e and f). The SEER enhancement results in a slight blurring artifact (arrow, Figure 5.18d). Similar to the simulation results, there is improvement over the initial image IB (Figure 5.17c), but the enhanced images are not as sharp as the downsampled comparison image IDS (Figure 5.17b). The resulting edge widths and reconstructed image SNR are compiled in Table 5.2.

                      I0     IDS    IB     IE (SEER)  IE (SURESLIDE)  IE (SLENDER)
    Edge Width [pix]  1.17   2.11   5.23   3.68       3.85            2.07
    Improvement [%]   77.8   59.9   0.0    29.9       26.8            60.7
    IP Width [pix]    1.72   1.76   2.18   1.92       1.89            1.95
    SNR               –      –      62.9   46.8       41.7            43.5

Table 5.2: Summary of results for GE resolution phantom N = 2. The edge width measurements are given in terms of pixels in the high resolution image I0.

With real MR data, the SEER and SURESLIDE techniques are on equal footing, providing roughly the same resolution enhancement and SNR characteristics. The SLENDER technique provided better resolution gains and considerably higher SNR.
The results listed in Tables 5.1 and 5.2 represent the optimal results for eachenhancement technique.Resolution gain was also characterized by inspecting in-plane enhancements. Withthrough-plane resolution improvements, we reduce the partial volume effect, whichmay cause apparent resolution loss in-plane due to the thickness of objects. Forthe GE resolution phantom, the central ramp thickness was measured by fitting aGaussian function along the y direction (see Figure 5.19). The in-plane width resultsare tabulated in the Table 5.2 after the normal edge width calculations. The in-planeimages are shown in Figure 5.20.110CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENT(a) (b)Figure 5.19: In-plane ramp width. a) In-plane view showing location of central rampfittings. b) Example fit of central ramp for I0 and IB.I0(a)IDS(b)IB(c)SEER(d)SURESLIDE(e)SLENDER(f)Figure 5.20: GE phantom N = 2 enhancement in-plane; a) Full resolution I0; b) down-sampled IDS ; c) blurred interleaved image IB; d) SEER IE ; e) SURESLIDE IE and f)SLENDER IE . Showing central 150× 150 region, 1mm2.111CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTThe enhanced images (Figure 5.20d-f) display visible reduction of the central rampwidth compared with the initial interleaved image (Figure 5.20c). The in-plane width(Table 5.2) corroborates this and further indicates a resolution gain provided by theenhancement techniques.5.6.3 Artifacts from Enhancement TechniquesSEERSince each resolution enhancement technique approached the problem in a differentway, the resulting final images each display a unique type of artifact. For the SEERtechnique we can see an “over-correction” effect if the enhancement factor w is toolarge. Excessive contribution from IH causes the signal to overshoot and dip on eitherside of the edge. 
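The SEER refinement of Equation 5.9 and its over-correction behaviour can be reproduced on a 1D step edge (a minimal sketch; the Gaussian low-pass, kernel width, and the particular w values are our illustrative choices):

```python
import numpy as np

def seer(ib, w, sigma):
    """SEER (Eq. 5.9): IE = IB + w * (IB - IL), with IL a Gaussian
    low-pass of IB (kernel width sigma, in voxels)."""
    radius = int(3 * sigma)
    k = np.arange(-radius, radius + 1, dtype=float)
    h = np.exp(-k**2 / (2.0 * sigma**2))
    il = np.convolve(ib, h / h.sum(), mode="same")
    return ib + w * (ib - il)

# Slab-blurred step edge, then modest and excessive enhancement factors.
step = np.r_[np.zeros(32), np.ones(32)]
ib = np.convolve(step, np.ones(4) / 4.0, mode="same")
mild = seer(ib, w=0.5, sigma=2.0)
strong = seer(ib, w=5.0, sigma=2.0)
```

Away from the array ends, `mild` sharpens the ramp with only a small overshoot, while `strong` overshoots above 1 and dips below 0 on either side of the edge, reproducing the peak/trough artifact of Figure 5.21.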
This creates peaks and troughs around large edges in the final image(Figure 5.21), and can have significant impact on the edge width measurement (Figure5.21b).The over-correction effects were more likely to occur when using the boxcar filterto generate IL, rather than a Gaussian. Larger kernels would cause these effects toextend further from the edge. The choice of kernel type, size and enhancement factorw must be carefully made to provide the greatest amount of resolution gain withoutincurring degrading levels of artifact. In general it was found that the Gaussian kernelprovided better results in terms of managing artifact, although the boxcar kernel wasable to achieve marginally higher resolution gains.SURESLIDEWith the SURESLIDE technique, the excessive noise is controlled with targeted Gaus-sian damping in the deblurring matrix H†. It is possible that this is related to the factthat the assumed kernel h =[1 1]used in deconvolution is a naive idealization. In112CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENT(a) (b)(c) (d)(e) (f)Figure 5.21: SEER enhancement artifact with h = BOX7 a) w = 0.2, a reasonable en-hancement factor. b) w = 0.6 over-correction causing signal dip/overshoot on either side ofedge. c) and d) Corresponding real and e) and f) simulated images.113CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTreality, h should be somehow related to the slice selection profile, often approximatedas Gaussian in shape [68, 69, 74].The excessive noise can be attributed to signal pile up from non-local regions inthe image. Careful inspection of the deblurring matrix, H† (Figure 5.22) makes theFigure 5.22: The entries of the matrix H†.signal pile up apparent. Referring to Figure 5.23, for a pixel in the enhanced image IEthere is contribution from the entire column in the source image. Because the support(a) (b) (c)Figure 5.23: Improving H† conditioning with smoothing. 
a) Blurry interleaved image IB.The solid boxes indicate pixel locations that, in the final enhanced image, are reconstructedfrom source data spanning the entire column (dashed lines). b) Corresponding rows ofH† that contribute to the pixels of the final reconstructed image. c) Targeted Gaussiandamping in HS† suppresses the contribution from non-local signal.114CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTof the H† row covers a good deal of the source image, noise effects are compoundedat the final image pixel location. Limiting the support of the rows in the H† matrixto a small neighbourhood around the target point in the deblurred image IE. Afterlocalized Gaussian smoothing of the rows in H†, we are left with a smooth deblurringoperator, HS†, resulting in greatly reduced noise and streaking artifact. Figure 5.24shows the effects of H† versus HS†.(a) (b) (c)Figure 5.24: Improved SURESLIDE reconstruction with HS†. a) Blurred interleave imageIB; b) Noisy deblurred image H†IB. c) Smooth deblurred image HS†IB.However, the width of the applied Gaussian σs must be chosen carefully. If thewidth of this damping σs is too large, noise will not be reduced. If the width becomestoo short, a ringing artifact can appear near edges. This is likely an effect of truncatingthe reconstruction terms in H†. The key to an optimal reconstruction is to balancethe poor conditioning of the deblurring matrix without introducing the truncationartifact.115CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENT(a) (b)Figure 5.25: Choosing the optimal smoothing width for SURESLIDE enhancement. a)σs = 0.2, over-smoothing resulting in truncation of HS† rows and ringing artifact. b)σs = 4.6, residual noise and signal pile up due to insufficient smoothing.With the magnitude approaches, the edge width measure should be utilized withcare. 
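The row-specific Gaussian damping of H† described above can be sketched as follows (the linear row-to-column mapping and the σ_s value are our illustrative assumptions):

```python
import numpy as np

def damp_rows(h_pinv, sigma_s):
    """Scale entry (z, c) of the deblurring matrix by
    exp(-(c - c0)^2 / (2 * sigma_s^2)) (cf. Eq. 5.5), where c0 maps
    output voxel z to its column in the source image, suppressing the
    non-local signal pile-up."""
    n_out, n_in = h_pinv.shape
    c = np.arange(n_in, dtype=float)
    hs = np.empty_like(h_pinv)
    for zrow in range(n_out):
        c0 = zrow * (n_in - 1) / (n_out - 1)  # assumed linear mapping
        hs[zrow] = h_pinv[zrow] * np.exp(-(c - c0) ** 2 / (2.0 * sigma_s ** 2))
    return hs

# Demo with the N = 2 kernel h = [1 1]/2: raw pseudoinverse rows carry
# slowly decaying, alternating-sign weights across the whole column;
# damping confines each row to a local neighbourhood.
H = np.zeros((64, 65))
for i in range(64):
    H[i, i:i + 2] = 0.5
hp = np.linalg.pinv(H)            # ill-conditioned deblurring operator
hs = damp_rows(hp, sigma_s=2.0)   # smoothed operator HS+
```

Near-diagonal entries are essentially unchanged while far-off entries are driven to zero, which is the trade-off discussed above: too large a σ_s leaves the noise pile-up, too small a σ_s truncates the rows and rings.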
The ringing and overshoot artifacts can impact the edge width calculation signif-icantly, causing the apparent edge width to decrease but not necessarily representinga resolution gain.SLENDERThe SLENDER artifact is different and relates primarily to inaccuracies in the es-timation of P0 and P1. Intra-voxel P0 effects could cause variation in the expectedangle α between A and B, but generally these effects are very small.There is reason to believe that the phase difference between adjacent slice centersdoes not correspond exactly with the phase difference between the resolved sub-voxelsA and B. This is evidenced by the fact that the phasor used to remove P1 does notprovide the optimal resolved image (Figures 5.26a and 5.26b), prompting additionaltuning of the angle α used for separation of A and B (Equation 5.12).116CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENT(a) (b)(c) (d)Figure 5.26: SLENDER α tuning; the additional tuning of α enables a slight improvementin edge width and noise properties. a) Image constructed assuming α = β/2. b) Improvedimage constructed with α > β/2. c) & d) Corresponding edge width fittings.While the tuned α can improve apparent edge width and SNR, it may also alterthe effective spatial encoding, resulting in improper signal localization. This effect isseen in Figure 5.27.117CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENT(a) (b)Figure 5.27: “Over-tuning” of α causes improper signal localization. a) Properly tuned.b) Improperly tuned.Another candidate is the effect of slice profile. Figure 5.28 demonstrates a modelof how the slice profile can cause α 6= β/2.z z + 1αβα0 = β/2α < β/2βI2(z + 1)I2(z)a)b)c)Figure 5.28: The slice profile can cause a disparity between 2α and β. If we consider aregion of uniform signal, the applied gradient will impart a phase ramp across the signal.a) If there is no magnitude modulation in the slice, the resulting angle will be α0 = β/2. 
b)When the signal magnitude is reduced further from the slice center (a typical slice profile),the resulting α will be smaller. c) Since β is defined by two slice midpoints, the effects of theslice profile will cancel on either side of the midpoint, and b is not affected by a nonuniformslice profile.118CHAPTER 5. THROUGH-PLANE RESOLUTION ENHANCEMENTWhen this effect was tested in simulation, it was found that the optimal angle wasindeed less than the nominal value α0 acquired from the RDF calculation. However,in real data the opposite was seen; αopt > β/2. This indicates that either the sliceprofile model is incorrect, or there are additional factors at play. This is an interestingpoint for future study.5.7 DiscussionThe acquisition times between SLENDER and the two magnitude based techniquesare virtually identical for the same number of acquired slabs, N . Some additionaltime may be involved in the preparation of the phase encoded slab for SLENDER, butthe impact on total scan time will be minimal. The reconstruction times of all threetechniques are reasonable when compared to a 3D FFT required for reconstructionof full 3D MRI.In terms of performance in the N = 2 case, SLENDER is a strong candidate forthe best choice. The slightly more involved reconstruction process is rewarded byimproved resolution gain compared with SEER and better SNR characteristics thanSURESLIDE. Although SURESLIDE is a clear winner in terms of minimizing theedge width, the noise corruption presents a serious problem for practical use. Anadditional weakness of SURESLIDE is the inability to resolve smaller details whenN is small. This is evidenced most clearly in the lesion test, but is also noticeable toa lesser extent in the other N = 2 tests.Both the SEER and SURESLIDE reconstructions assume spatially invariant blurin the slice direction. This is likely a reasonable assumption, but allowing for spa-tially variant blur operators may provide improvement in the enhanced image. 
The SLENDER reconstruction could be modified to allow a more realistic representation of the slice profile.

SEER and SURESLIDE are attractive in terms of their simplicity of implementation. Both can be applied to magnitude-only data and require no modification of the pulse sequence. While SEER offers only a modest improvement in resolution, it is easily understood and possesses intrinsic noise mitigation effects that are desirable. Although the SLENDER technique is limited to the N = 2 case, when considering the limitations on through-plane resolution in 2D multislice MRI, even a twofold improvement can provide added benefit. The slice thickness is cut in half without compromising the SNR and without the added complications of acquiring thinner slices. The pulse sequence must be modified by adding the additional z gradient area to the phase encoded slab, but this modification is minimal.

Chapter 6

Conclusion

The purpose of this study was to investigate various methods for encoding and decoding signal in MRI. The rGRAPPA technique introduced an improvement on the standard implementation, inspired by the spatial variation of the receiver coil sensitivities. rGRAPPA reconstructions resulted in lower Relative Error (RE) and suppressed residual aliasing artifact in comparison with standard GRAPPA. The rGRAPPA implementation provides a calibration sensitive to the localized nature of the weighting factors without the need for excessive kernel sizes or additional assumptions about the nature of the weights. The improved reconstruction accuracy of rGRAPPA allows for faster scan times or increased spatial resolution.
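The calibration step underlying GRAPPA-type reconstruction (including rGRAPPA) can be sketched as an ordinary least-squares problem: fully sampled calibration lines supply source/target sample pairs, a least-squares fit yields the coil weighting factors, and those weights then synthesize unacquired k-space lines. The toy dimensions, two-neighbor kernel, and synthetic data below are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(7)
n_coils, n_pairs, n_kx = 4, 16, 48

# Source samples: the acquired neighbours of each skipped line, stacked
# over coils and the two ky-neighbours -> shape (samples, 2 * n_coils).
S = (rng.standard_normal((n_pairs * n_kx, 2 * n_coils))
     + 1j * rng.standard_normal((n_pairs * n_kx, 2 * n_coils)))

# Ground-truth weights, used here only to synthesize the "measured"
# target samples so the fit has an exact answer to recover.
W_true = (rng.standard_normal((2 * n_coils, n_coils))
          + 1j * rng.standard_normal((2 * n_coils, n_coils)))
T = S @ W_true               # target samples, one column per coil

# Calibration: least-squares fit of the weights from (source, target)
# pairs -- the role played by the auto-calibration (ACS) lines.
W_fit, *_ = np.linalg.lstsq(S, T, rcond=None)
assert np.allclose(W_fit, W_true)

# Reconstruction: apply the fitted weights to new source data to
# synthesize k-space lines that were never acquired.
S_new = (rng.standard_normal((n_kx, 2 * n_coils))
         + 1j * rng.standard_normal((n_kx, 2 * n_coils)))
synthesized = S_new @ W_fit
```

A regional variant in this spirit would repeat the fit with source/target pairs drawn from a restricted neighbourhood, yielding locally adapted weights.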
The notion of combining the gains of rGRAPPA with a technique such as hpGRAPPA [31] to obtain even higher reconstruction quality is an interesting possibility.

A number of viable optimization metrics were discovered, capable of predicting the optimal rGRAPPA reconstruction region width d and the corresponding image without the need for a reference scan. However, future studies should address the shortcomings of the optimal region prediction to further accelerate the offline reconstruction time. It is also possible that a weighted combination of metric measures (not considered in our analysis) could provide a better result.

The convolution fitting technique proved effective in predicting phase difference maps in real and simulated MR data. Limiting trajectories for the minimum required source data were explored, and it was found that central k-space data produced the best results. However, there is still much work to be done in this area. The methods developed here could prove useful in other applications, such as characterizing coil inhomogeneity or predicting relative coil profile relationships that could be helpful in PPI. There is potential for an impact in quantitative mapping applications and in k-space extrapolation techniques such as keyhole imaging. In addition to testing the convolution fitting technique on other MRI applications, future studies should conduct a thorough investigation of the dependence of the kernel on the underlying spatial information, allowing corruption-free estimation of the relative contrast information.

We have presented three possible techniques to achieve through-plane resolution enhancement in 2D multislice MRI data. Each approach has its own relative strengths and weaknesses. SEER is a simple enhancement that can be performed on any blurry data to provide edge enhancement.
The applied blurring kernel can be selected to achieve the desired results; however, the ultimate resolution enhancement appears modest in comparison to the other techniques. SURESLIDE presents a deblurring operation that provides good recovery of fine detail, but suffers from noise amplification in the reconstruction. SLENDER, while still in its infancy, represents a bridge between 2D multislice and true 3D acquisitions. Though only applied in the most trivial case (a 2-fold enhancement), this technique provides a robust improvement of through-plane resolution and could have significant potential.

As the demand for higher in-plane resolution increases, there will be a correlated demand for through-plane resolution to overcome partial volume effects. Future work should study the optimal phase encoding required for SLENDER and investigate the possibility of extending the resolution gain to higher factors, such as splitting the slice into 3 or more sub-voxels. The techniques presented have shown evidence of providing resolution enhancement, but there is yet much room for optimization.

Bibliography

[1] Q-S. Xiang. Two-Point Water-Fat Imaging With Partially-Opposed-Phase (POP) Acquisition: An Asymmetric Dixon Method. Magn. Reson. Med., 56:572–584, 2006.

[2] Q-S. Xiang and F.Q. Ye. Correction for Geometric Distortion and N/2 Ghosting in EPI by Phase Labeling for Additional Coordinate Encoding (PLACE). Magn. Reson. Med., 57:731–741, 2007.

[3] I. Rabi, J. Zacharias, S. Millman, and P. Kusch. A New Method of Measuring Nuclear Magnetic Moment. Physical Review, 53(4):318–327, 1938.

[4] F. Bloch. Nuclear Induction. Physical Review, 70:460–474, 1946.

[5] F. Bloch, W.W. Hansen, and M. Packard. The Nuclear Induction Experiment. Physical Review, 70:474, 1946.

[6] E. Purcell, H. Torrey, and R. Pound. Resonance Absorption by Nuclear Magnetic Moments in a Solid. Physical Review, 69:37–38, 1946.

[7] H.Y. Carr. Free Precession Techniques in Nuclear Magnetic Resonance.
PhD thesis, Harvard University, 1952.

[8] P. Mansfield and P.K. Grannell. NMR diffraction in solids? J. Phys. C: Solid State Phys., 6:L422–L426, 1973.

[9] E.L. Hahn. Spin Echoes. Physical Review, 80(4):580–594, 1950.

[10] S. Ljunggren. A Simple Graphical Representation of Fourier Based Imaging Methods. J. Magn. Reson., 54:338–343, 1983.

[11] D. Twieg. The k-Trajectory Formulation of the NMR Imaging Process with Applications in Analysis and Synthesis of Imaging Methods. Med. Phys., 10(5):610–621, 1983.

[12] P. Mansfield. Multi-planar image formation using NMR spin-echoes. J. Phys. C, 10:L55–L58, 1977.

[13] P. Jezzard and R.S. Balaban. Correction for Geometric Distortion in Echo Planar Images from B0 Field Variations. Magn. Reson. Med., 34:65–73, 1995.

[14] D. Holland, J.M. Kuperman, and A.M. Dale. Efficient Correction of Inhomogeneous Static Magnetic Field-Induced Distortion in Echo Planar Imaging. Neuroimage, 50(1), 2010.

[15] M.L. Wood and R.M. Henkelman. MR Image Artifacts from Periodic Motion. Med. Phys., 12(2):143–151, 1985.

[16] G.H. Glover and A.T. Lee. Motion Artifacts in fMRI: Comparison of 2DFT with PR and Spiral Scan Methods. Magn. Reson. Med., 33:624–635, 1995.

[17] C.L.G. Ham, J.M.L. Engels, G.T. van de Wiel, and A. Machielsen. Peripheral Nerve Stimulation During MRI: Effects of High Gradient Amplitudes and Switching Rates. J. Magn. Reson. Imag., 7(5):933–937, 1997.

[18] L. Axel. Surface Coil Magnetic Resonance Imaging. J. Comput. Assist. Tomogr., 8:381, 1984.

[19] E.R. McVeigh, J. Bronskill, and R.M. Henkelman. Phase and Sensitivity of Receiver Coils in Magnetic Resonance Imaging. Med. Phys., 13:806, 1986.

[20] D.K. Sodickson and W.J. Manning. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays. Magn. Reson. Med., 38:591–603, 1997.

[21] D.K. Sodickson, M.A. Griswold, P.M. Jakob, R.R. Edelman, and W.J. Manning. Signal-to-Noise Ratio and Signal-to-Noise Efficiency in SMASH Imaging. Magn. Reson.
Med., 38:591–603, 1997.

[22] D.K. Sodickson. Tailored SMASH Image Reconstructions for Robust in vivo Parallel MR Imaging. Magn. Reson. Med., 44:243–251, 2000.

[23] M. Bydder, D.J. Larkman, and J.V. Hajnal. Generalized SMASH Imaging. Magn. Reson. Med., 47:160–170, 2002.

[24] P.M. Jakob, M.A. Griswold, R.R. Edelman, and D.K. Sodickson. AUTO-SMASH: A Self-Calibrating Technique for SMASH Imaging. Magn. Reson. Mat. Phys. Biol. Med., 7:42–54, 1998.

[25] R.M. Heidemann, M.A. Griswold, A. Haase, and P.M. Jakob. VD-AUTO-SMASH imaging. Magn. Reson. Med., 45:1066–1074, 2001.

[26] P.B. Roemer, W.A. Edelstein, C.E. Hayes, S.P. Souza, and O.M. Mueller. The NMR Phased Array. Magn. Reson. Med., 16:192–225, 1990.

[27] P. Qu, J. Yuan, B. Wu, and G.X. Shen. Optimization of regularization parameter for GRAPPA reconstruction. ISMRM 14 Proceedings, 2474, 2006.

[28] W. Liu, X. Tang, Y. Ma, and J.H. Gao. Improved Parallel MR Imaging Using a Coefficient Penalized Regularization for GRAPPA Reconstruction. Magn. Reson. Med., 69(4):1109–1114, 2013.

[29] F-H. Lin. Prior-regularized GRAPPA reconstruction. ISMRM 14 Proceedings, 3656, 2006.

[30] T. Zhao and X. Hu. Iterative GRAPPA (iGRAPPA) for improved parallel imaging reconstruction. Magn. Reson. Med., 59(4):903–907, 2008.

[31] F. Huang, Y. Li, S. Vijayakumar, S. Hertel, and G.R. Duensing. High-Pass GRAPPA: An Image Support Reduction Technique for Improved Partially Parallel Imaging. Magn. Reson. Med., 59:642–649, 2008.

[32] Z. Wang, J. Wang, and J.A. Detre. Improved Data Reconstruction Method for GRAPPA. Magn. Reson. Med., 54:738–742, 2005.

[33] E.G. Kholmovski and D.L. Parker. Spatially Variant GRAPPA. ISMRM 14 Proceedings, 285, 2006.

[34] S. Skare and R. Bammer. Spatial Modeling of the GRAPPA Weights. ISMRM 13 Proceedings, 2422, 2005.

[35] J.W. Cooley and J.W. Tukey. An algorithm for the machine calculation of complex Fourier series. Math. Comput., 19:297–301, 1965.

[36] K.P. McGee, A. Manduca, J.P. Felmlee, S.J. Riederer, and R.L. Ehman.
Image Metric-Based Correction (Autocorrection) of Motion Effects: Analysis of Image Metrics. J. Magn. Reson. Imag., 11:174–181, 2000.

[37] Q-S. Xiang, M.J. Bronskill, and R.M. Henkelman. Two-point interference method for suppression of ghost artifacts due to motion. J. Magn. Reson. Imag., 3(6):900–906, 1993.

[38] Q-S. Xiang. Accelerating MRI by Skipped Phase Encoding and Edge Deghosting (SPEED). Magn. Reson. Med., 53:1112–1117, 2005.

[39] M. Lustig, D. Donoho, and J.M. Pauly. Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging. Magn. Reson. Med., 58:1182–1195, 2007.

[40] R. Nana. On Optimality and Efficiency of Parallel Magnetic Resonance Imaging Reconstruction: Challenges and Solutions. PhD thesis, Georgia Institute of Technology, 2008.

[41] R. Nana and X. Hu. Data Consistency Criterion for Selecting Parameters for k-space Based Reconstruction in Parallel Imaging. Magn. Reson. Med., 28:119–128, 2010.

[42] A. Samsonov. On Optimality of Parallel MRI Reconstructions in k-space. Magn. Reson. Med., 59:156–164, 2008.

[43] M.A. Griswold, M. Blaimer, F. Breuer, R.M. Heidemann, M. Mueller, and P.M. Jakob. Parallel Magnetic Resonance Imaging Using the GRAPPA Operator Formalism. Magn. Reson. Med., 54:1553–1556, 2005.

[44] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty. An Efficient Algorithm for Compressed MR Imaging using Total Variation and Wavelets. IEEE Proceedings CVPR, 2008.

[45] K.T. Block, M. Uecker, and J. Frahm. Undersampled Radial MRI with Multiple Coils. Iterative Image Reconstruction Using a Total Variation Constraint. Magn. Reson. Med., 57:1086–1098, 2007.

[46] S.F. Gull and G.J. Daniell. Image Reconstruction from Incomplete and Noisy Data. Nature, 272:686–690, 1978.

[47] R.T. Constable and R.M. Henkelman. Why MEM does not work in MR Image Reconstruction. Magn. Reson. Med., 14:12–25, 1990.

[48] R.P. Bocker and S.A. Jones. ISAR motion compensation using the burst derivative measure as a focal quality indicator. Int. J.
Imaging Systems Technol., 4:285–297, 1992.

[49] M. Bydder, D.J. Larkman, and J.V. Hajnal. Use of Information Theory to Align Separately Acquired Reference and Target Data for SENSE Processing. ISMRM 09 Proceedings, 773, 2001.

[50] W.T. Dixon. Simple Proton Spectroscopic Imaging. Radiology, 153:189–194, 1984.

[51] P.R. Moran. A Flow Velocity Zeugmatographic Interlace for NMR Imaging in Humans. Magn. Reson. Imag., 1:197–203, 1982.

[52] Y. Ishihara, A. Calderon, H. Watanabe, K. Okamoto, Y. Suzuki, K. Kuroda, and Y. Suzuki. A Precise and Fast Temperature Mapping Using Water Proton Chemical Shift. Magn. Reson. Med., 34:814–823, 1995.

[53] R. Muthupillai, P.J. Rossman, D.J. Lomas, J.F. Greenleaf, S.J. Riederer, and R.L. Ehman. Magnetic Resonance Imaging of Transverse Acoustic Strain Waves. Magn. Reson. Med., 36:266–274, 1996.

[54] M.A. Bernstein, K.F. King, and X.J. Zhou. Handbook of MRI Pulse Sequences. Elsevier, 2004.

[55] E.M. Haacke, R.W. Brown, M.R. Thompson, and R. Venkatesan. Magnetic Resonance Imaging: Physical Principles and Sequence Design. New York: Wiley, 1999.

[56] Q.X. Yang, J. Wang, X. Zhang, C.M. Collins, M.B. Smith, H. Liu, X.H. Zhu, J.T. Vaughan, K. Ugurbil, and W. Chen. Analysis of Wave Behavior in Lossy Dielectric Samples at High Field. Magn. Reson. Med., 47(5):982–989, 2002.

[57] T.S. Ibrahim, R. Lee, A.M. Abduljalil, B.A. Baertlein, and P.M. Robitaille. Dielectric Resonances and B1 Field Inhomogeneity in UHFMRI: Computational Analysis and Experimental Findings. Magn. Reson. Imag., 19(2):218–226, 2001.

[58] B. Belaroussi, J. Milles, S. Carme, Y.M. Zhu, and H. Benoit-Cattin. Intensity Non-Uniformity Correction in MRI: Existing Methods and their Validation. Med. Imag. Analysis, 10:234–246, 2006.

[59] W. Brey and P. Narayana. Correction for intensity falloff in surface coil magnetic resonance imaging. Med. Phys., 15(2):241–245, 1988.

[60] J. Sled and G. Pike.
Standing-wave and RF penetration artifacts caused by elliptic geometry: an electrodynamic analysis of MRI. IEEE Trans. Med. Imaging, 17(4):653–662, 1998.

[61] W. Lin, F. Huang, G.R. Duensing, and A. Reykowski. Off-Resonance Artifact Correction with Convolution in k-Space (ORACLE). Proc. Intl. Soc. Mag. Reson. Med., 19:2687, 2011.

[62] L. Hanson and T. Dyrby. RF inhomogeneity correction: validity of the smooth-bias approximation. ISMRM 10 Proceedings, 2316, 2002.

[63] M.A. Griswold, P.M. Jakob, R.M. Heidemann, M. Nittka, V. Jellus, J. Wang, B. Kiefer, and A. Haase. Generalized Autocalibrating Partially Parallel Acquisitions (GRAPPA). Magn. Reson. Med., 47:1202–1210, 2002.

[64] G.H. Golub and C.F. van Loan. Matrix Computations. Johns Hopkins University Press, 1996.

[65] Z. Chang and Q-S. Xiang. Nonlinear Phase Correction With an Extended Statistical Algorithm. IEEE Trans. Med. Imag., 24(5):791–798, 2005.

[66] R.A. Jones, O. Haraldseth, T.B. Muller, P.A. Rinck, and A.N. Oksendal. k-Space Substitution: a Novel Dynamic Imaging Technique. Magn. Reson. Med., 29:830–834, 1993.

[67] J.J. van Vaals, M.E. Brummer, W.T. Dixon, H.H. Tuithof, H. Engels, R.C. Nelson, B.M. Gerety, J.L. Chezmar, and J.A. den Boer. "Keyhole" Method for Accelerating Imaging of Contrast Agent Uptake. J. Magn. Reson. Imag., 3:671–675, 1993.

[68] M. Irani and S. Peleg. Motion analysis for image enhancement: resolution, occlusion, and transparency. J. Vis. Comm. Image Rep., 4(4):324–335, 1993.

[69] H. Greenspan, G. Oz, N. Kiryati, and S. Peled. MRI inter-slice reconstruction using super-resolution. Magn. Reson. Imag., 20:437–446, 2002.

[70] A. Gholipour, J. Estroff, M. Sahin, S. Prabhu, and S. Warfield. Maximum a Posteriori Estimation of Isotropic High-Resolution Volumetric MRI from Orthogonal Thick-Slice Scans. Proc. MICCAI, Beijing, China, 6362:109–116, 2010.

[71] A. Souza and R. Senn. Model-Based Super-Resolution for MRI. IEEE Proc. Eng. Med. Biol. Soc., pages 430–434, 2008.

[72] R. Shilling, M.
Brummer, and K. Mewes. Merging Multiple Stacks MRI into a Single Data Volume. IEEE Int'l Sym. Biomed. Imag., 3:1012–1015, 2006.

[73] R. Shilling, T. Robbie, T. Bailloeul, K. Mewes, R. Mersereau, and M. Brummer. A Super-Resolution Framework for 3D High Resolution and High-Contrast Imaging Using 2D Multislice MRI. IEEE Trans. Med. Imag., 28:633–644, 2009.

[74] E. Van Reeth, I.W.K. Tham, C.H. Tan, and C.L. Poh. Super-Resolution in Magnetic Resonance Imaging: A Review. Concepts Magn. Reson. A, 40A(6):306–325, 2012.

[75] A.K. Jain. Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.

[76] J. Abbiss, J. Allen, R. Bocker, and H. Whitehouse. Fast Regularized Deconvolution in Optics and Radar. Proc. IMA Conf. Math. Signal Processing, 3, 1994.

[77] J.G. Nagy, R.J. Plemmons, and T.C. Torgersen. Iterative Image Restoration Using Approximate Inverse Preconditioning. IEEE Trans. Imag. Proc., 5(7):1151–1162, 1996.

[78] G. Johnson, Y.Z. Wadghiri, and D.H. Turnbull. 2D Multislice and 3D MRI Sequences are Often Equally Sensitive. Magn. Reson. Med., 41:824–828, 1999.

[79] E.H. Moore. On the Reciprocal of the General Algebraic Matrix. Bulletin Am. Math. Soc., 26(9):394–395, 1920.

[80] R. Penrose. A Generalized Inverse for Matrices. Math. Proc. Cambridge Phil. Soc., 51:406–413, 1955.

Appendix A

The Moore-Penrose Pseudoinverse

The Moore-Penrose pseudoinverse [79, 80] provides a least squares solution to a system of linear equations. For the system

Ax = b,  (A.1)

the vector x which solves the system may not exist, or if it does, may not be unique. The pseudoinverse A† provides the minimum-norm least squares solution:

x = A†b = argmin_x ||Ax − b||_2,  (A.2)

where, among all minimizers, x has the smallest Euclidean norm. If the rows of the m × n matrix A are linearly independent, it is said to have full row rank. Generally, m < n, and A represents a system with fewer equations than unknowns.
In this case, the Moore-Penrose pseudoinverse is defined as

A† = A^H(AA^H)^-1,  (A.3)

where A^H denotes the Hermitian (conjugate) transpose of A and (AA^H)^-1 the inverse of the m × m matrix AA^H.

Computational Complexity

Straightforward computation of A† requires a number of floating point operations (flops) outlined in the sequence below. Beginning with the m × n matrix A and the n × m matrix A^H, we track the resulting matrix size and the flop count for each step:

AA^H            (m × m)    nm^2
(AA^H)^-1       (m × m)    m^3
A^H(AA^H)^-1    (n × m)    nm^2    (A.4)

In general, the number of floating point operations, in terms of complex multiplications, for the computation of the pseudoinverse of an m × n matrix A is

flops = m^3 + 2nm^2  (A.5)
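Equation A.3 can be verified numerically; the following sketch (NumPy and the random complex test system are illustrative assumptions, not part of the thesis) compares the right pseudoinverse against NumPy's SVD-based `np.linalg.pinv` and checks the minimum-norm property of the resulting solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined system: m equations, n unknowns (m < n); a random
# complex matrix has full row rank with probability 1.
m, n = 4, 7
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# Right pseudoinverse per Equation A.3: A† = A^H (A A^H)^(-1)
A_pinv = A.conj().T @ np.linalg.inv(A @ A.conj().T)

# Agrees with NumPy's SVD-based pseudoinverse.
assert np.allclose(A_pinv, np.linalg.pinv(A))

# x = A† b solves the (consistent) system exactly...
x = A_pinv @ b
assert np.allclose(A @ x, b)

# ...and is the minimum-norm solution: perturbing x by any null-space
# vector of A can only increase the Euclidean norm.
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v_null = v - A_pinv @ (A @ v)       # projection onto the null space of A
assert np.allclose(A @ v_null, 0)
assert np.linalg.norm(x) <= np.linalg.norm(x + v_null)
```

In practice the explicit inverse in Equation A.3 is usually avoided in favor of an SVD or QR-based solve, which is better conditioned for nearly rank-deficient A.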