UBC Theses and Dissertations
Towards dynamic, patient-specific musculoskeletal models. Levin, David Isaac William, 2012.

Towards Dynamic, Patient-Specific Musculoskeletal Models

by

David I.W. Levin

B.Sc. Biology and Computer Science, The University of Western Ontario, 2002
M.Sc. Medical Biophysics, The University of Western Ontario, 2006

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in THE FACULTY OF GRADUATE STUDIES (Computer Science)

The University of British Columbia (Vancouver)

April 2012

© David I.W. Levin, 2012

Abstract

This thesis focuses on the development of tools to aid in producing dynamic simulations from patient-specific volumetric data. Specifically, two new computational methods have been developed: one for image acquisition and one for simulation.

Acquiring patient-specific musculoskeletal architectures is a difficult task. Our image acquisition relies on Diffusion Tensor Imaging, since it allows the non-invasive study of muscle fibre architecture. However, musculoskeletal Diffusion Tensor Imaging suffers from a low signal-to-noise ratio, and noise in the computed tensor fields can lead to poorly reconstructed muscle fibre fields. In this thesis we detail how leveraging a priori knowledge of the structure of skeletal muscle can drastically increase the quality of fibre architecture data extracted from Diffusion Tensor Images.

The second section of this thesis describes a simulation technique that allows the direct simulation of volumetric data, such as that produced by the denoising algorithm. The method was developed in response to two key motivations: first, that the medical imaging data we acquire is volumetric and can be difficult to discretize in a Lagrangian fashion; and second, that many biological structures (such as muscle) are highly deformable and come into close contact with each other as well as with the environment. In response to these observations we have produced an Eulerian simulator that can simulate volumetric objects in close contact.
The algorithm intrinsically handles large deformations and the potential degeneracies that can arise in contact scenarios. Extending the simulator to produce complex musculoskeletal simulations is also discussed. These two algorithms address concerns in two stages of a proposed pipeline for generating dynamic, patient-specific musculoskeletal simulations.

Preface

The work comprising this thesis has resulted in several publications in which I collaborated with many members of the Sensorimotor Computation Lab as well as members of the University of British Columbia MRI Research Centre. All the work in this thesis was carried out under the supervision of Dr. Dinesh K. Pai, who provided overall guidance for, and was a collaborator on, all resulting publications.

The work described in Chapter 4 has been published in Medical Image Analysis. I was the primary author of the work and carried out the entirety of the software development as well as the majority of the writing (with helpful suggestions from Dr. Pai). I also conceptualized most of the algorithm under the guidance of Dr. Pai, who first conceived of exploiting the divergence-free character of muscle fibres. Imaging for the study was carried out by Dr. Burkhard Mädler, the third author, and Dr. Benjamin Gilles performed segmentation of the experimental data.

The work described in Chapter 5 was presented at SIGGRAPH 2011. Again I was the primary author of the work and performed the majority of the writing and software development. Joshua Litven, the second author, implemented the Quadratic Program solver used for solving the described inequality-constrained optimization. Dr. Shinjiro Sueda provided theoretical guidance. Garrett Jones wrote the rendering pipeline used to produce all images and generated many of the final animations. The entire project evolved out of discussions between Dr. Pai and myself about how to best simulate volumetric muscles.
The following publications resulted from the work presented in this thesis:

Chapter 4: Levin, D.I.W., Gilles, B., Mädler, B. and Pai, D.K. (2011). Extracting Skeletal Muscle Fibre Fields from Noisy Diffusion Tensor Data. Medical Image Analysis 15(3): 340-353.

Chapter 5: Levin, D.I.W., Litven, J., Jones, G.L., Sueda, S. and Pai, D.K. (2011). Eulerian Solid Simulation with Contact. ACM Transactions on Graphics (SIGGRAPH 2011) 30(4): 36:1-36:10.

Imaging for Chapter 4 was performed under ethics approval from the UBC Clinical Research Ethics Board (Certificate Number: H08-00251).

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
1 Introduction
2 Related Work
  2.1 Classification of Muscle Architecture and Functional Significance
  2.2 MRI Acquisition
    2.2.1 Denoising Diffusion Tensor Imaging Data
    2.2.2 Fibre Tracking in Diffusion Tensor Studies
  2.3 Numerical Simulation using Muscle Architecture
    2.3.1 Line and Strand Based Simulations
    2.3.2 Volumetric Simulations
3 Data Acquisition
4 Muscle Fibre Reconstruction from Noisy Data
  4.1 Background and Motivation
  4.2 Comparison to Other Work and Contribution
  4.3 Methods
    4.3.1 Theoretical Background
    4.3.2 Numerical Implementation
    4.3.3 Synthetic Data Generation
    4.3.4 MRI Data Acquisition and Segmentation
  4.4 Results and Discussion
    4.4.1 Synthetic Data Results
    4.4.2 Human Subject Data Results
    4.4.3 Limitations and Future Work
  4.5 Conclusion
5 Eulerian Simulation of Volumetric Objects
  5.1 Contributions
  5.2 Eulerian Simulation
    5.2.1 Strain Calculation
    5.2.2 Notation
  5.3 Simulating a Single Object
  5.4 Simulating Multiple Objects
    5.4.1 Velocity Level Constraints
    5.4.2 Gauss' Principle of Least Constraint
    5.4.3 Post-Stabilization
    5.4.4 Adaptive Time Step
  5.5 GPU Implementation
    5.5.1 Quadratic Program Solver
  5.6 Results
    5.6.1 Simulating a Single Object
    5.6.2 Simulating Multiple Objects
    5.6.3 GPU Implementation
    5.6.4 Discussion
    5.6.5 Limitations and Future Work
  5.7 Conclusions
6 Conclusions
  6.1 Overview of Contributions
  6.2 Future Work
  6.3 Final Discussion
Bibliography
A Background
  A.1 Magnetic Resonance Imaging
  A.2 Diffusion Weighted Imaging
  A.3 Muscle Structure

List of Tables

Table 4.1: A comparison of denoised vector fields produced by the FRDS and JRLMMSE algorithms
Table 5.1: Memory usage of Eulerian simulator
Table 5.2: Performance timings for Eulerian simulator

List of Figures

Figure 1.1: An illustration of the two levels of muscle architecture
Figure 1.2: The proposed musculoskeletal imaging pipeline
Figure 3.1: Examples of MRI acquisition
Figure 3.2: An example of imaging artifacts
Figure 4.1: The stencil for the discrete Laplacian
Figure 4.2: A schematic of the CRSS and FRDS algorithms
Figure 4.3: Reconstruction algorithm output
Figure 4.4: Quantitative results for the synthetic z-axis vector field
Figure 4.5: Quantitative results for the synthetic curved vector field
Figure 4.6: Quantitative results for the synthetic 2D bipennate vector field
Figure 4.7: Quantitative results for the synthetic 3D bipennate vector field
Figure 4.8: Qualitative comparison for the synthetic z-axis vector field
Figure 4.9: Qualitative comparison for the synthetic curved vector field
Figure 4.10: Qualitative comparison for the synthetic 2D bipennate vector field
Figure 4.11: Qualitative comparison for the synthetic 3D bipennate vector field
Figure 4.12: Position and orientation of anatomical MRI data
Figure 4.13: Comparison of noisy and denoised (CRSS) vector field results using streamlines
Figure 5.1: Eulerian simulator integration regions
Figure 5.2: Two balls with different material properties fall onto a rigid box under the influence of gravity
Figure 5.3: A bunny is instantaneously compressed flat and allowed to return to its original shape
Figure 5.4: Reference configuration preservation
Figure 5.5: A collision example
Figure 5.6: The effect of post-stabilization on collision response
Figure 5.7: Adaptive time step size
Figure 5.8: Examples generated using the Eulerian simulator
Figure 5.9: A teddy bear is simulated directly from CT data
Figure 5.10: Elastic slab benchmark
Figure 5.11: Performance of the primal-dual interior point algorithm
Figure 5.12: The bunny crunch
Figure A.1: Spin alignment in a magnetic field
Figure A.2: The effect of an RF pulse on a collection of spins
Figure A.3: Canonical MR signal decay
Figure A.4: Dephasing spins leading to decreased net transverse magnetization
Figure A.5: T1 and T2 weighted scans
Figure A.6: A cross section of skeletal muscle

List of Symbols

mT: millitesla
u: Muscle fibre field
∇: Gradient operator
∇·: Divergence operator
∇×: Curl operator
h: Harmonic component of Helmholtz-Hodge decomposition
p: Divergence-free component of Helmholtz-Hodge decomposition
g: Curl-free component of Helmholtz-Hodge decomposition
ρ: Density
v: Velocity
σ: Cauchy stress
E: Strain
F: Deformation gradient
I: Identity matrix
D(P): Discrete Jacobian matrix of a function P
Ω: Infinitesimal volume region
Γ: Infinitesimal boundary segment
B0: Main magnetic field
B1: Magnetic component of RF pulse
T1: Time constant for axial magnetization recovery
T2*: Time constant of transverse magnetization decay
T2: Time constant of transverse magnetization decay from spin-spin interaction

Glossary

BR: Brachioradialis
DTI: Diffusion Tensor Imaging
DWI: Diffusion Weighted Imaging
DOF: Degrees of Freedom
ECRB: Extensor Carpi Radialis Brevis
ECRL: Extensor Carpi Radialis Longus
EPI: Echo-Planar Imaging
EMG: Electromyography
FOV: Field of View
FSE: Fast Spin Echo
GPU: Graphics Processing Unit
MAC: Marker and Cell
MR: Magnetic Resonance
MRI: Magnetic Resonance Imaging
NURBS: Non-Uniform Rational B-Splines
PCSA: Physiological Cross-Sectional Area
QP: Quadratic Program
RBF: Radial Basis Function
SNR: Signal-to-Noise Ratio
SPD: Symmetric Positive Definite
T1W: T1-Weighted
T2W: T2-Weighted
TE: Echo Time
TR: Repetition Time

Acknowledgments

First and foremost I want to thank my supervisor, Dr. Dinesh K. Pai. Dinesh once told me that you have to know how to cook before you can be a chef. And so it went that under his tutelage I spent many long hours slaving away over a warm keyboard. But at the end of it all, not only had he taught me an indescribable amount about the tools of computer science, he showed me what those tools could create in the hands of a great practitioner. So Dinesh, thanks for being both a great teacher and a great chef.

To my committee members, Dr. David Lowe and Dr. Michiel van de Panne, thank you for your guidance and insight, all of which served to improve this thesis. I would also like to thank my thesis examiners, Dr. Robert Bridson, Dr. Scott Delp and Dr.
Sidney Fels, for their thorough reading of my thesis and their rigorous examination of the contents therein.

During my time at UBC I have had the opportunity to collaborate with many excellent researchers. None of the work you see here would have made it to completion without the help of Dr. Benjamin Gilles, Dr. Burkhard Mädler, Dr. Shinjiro Sueda, Joshua Litven and Garrett Jones.

I would be remiss if I did not thank each and every member (past and present) of the Sensorimotor Lab for their support and friendship (even the occasional cup of coffee). However, I would like to single out my friend and colleague Shinjiro Sueda. Working with Shin provided a constant source of inspiration. There are very few ideas that I have had over the course of these 5 years that weren't indirectly shaped by our discussions (usually interspersed with some talk about the state of professional tennis).

I would also like to thank Danny Kaufman. Danny showed me what it was to be perpetually calm in the face of chaos. He always made time to chat about research or life and offered meaningful, level-headed commentary on both.

To my sister Helen and my brother Dan: sibling rivalry is probably what drove me to enter a PhD program, but your support was an immeasurable reason why I actually completed it.

Finally, to my parents: you gave me my curiosity, my love of learning and my work ethic. All the qualities that make me a passable researcher I learned from you. More than that, you always believed in me. When I left university after 3 years you didn't bat an eye. When I went back you never said I told you so. When I stayed in school for 7 years after that you never told me it was time to get a job. You let me figure it all out on my own, and so perhaps it is no curious thing that figuring it out is how I enjoy spending my time. Know that whatever good work I have done and will do is because of the love and support the two of you have given me.

To Cheryl, thank you for everything.
Special thanks to all the animals back on the farm. Nothing keeps you grounded like watching a cat, dog, or horse lazing around the fields in the summer.

Chapter 1

Introduction

This thesis concerns itself with the development of two computational tools for constructing dynamic, patient-specific musculoskeletal models. The musculoskeletal system can be viewed as a window into the brain's motor control system: by understanding why and how certain physical actions are undertaken, one could hope to "reverse engineer" the control processes of the nervous system. Building accurate numerical models of the musculature is a useful way to gain understanding of its salient features. These models also provide a set of tools with which to develop hypotheses about neural control.

While the kinematics of the skeletal system play a large role in the behaviour of musculoskeletal systems, the muscles themselves are also of primary importance. The modelling and simulation of muscles is a well-studied topic in both the graphics and the biomechanics literature, and while many algorithms exist for simulating muscle, they are, for the most part, built from the "outside-in": they consider the overall shape of an individual muscle volume to be the overriding factor in determining muscle behaviour. Our hypothesis is that subject-specific muscle architecture is the main determinant of muscle behaviour.

This thesis will explore the construction of tools that could be used to study the effect of muscle architecture on the performance of an individual's musculoskeletal system. We note that the focus of this thesis is the development of these tools; their use for simulating the biomechanics and movement of individual human subjects is beyond the scope of this dissertation. Muscle architecture exists at two fundamental levels.
The first is that of an individual muscle, termed fibre architecture; the second is the architecture of a musculoskeletal system: the arrangement of muscles within the confines of the body. Both types of muscle architecture are well represented in Figure 1.1.

Figure 1.1: A picture illustrating the two levels of muscle architecture in the human forearm. The fibre directions within each muscle are clearly indicated, illustrating their differing fibre architectures. The complex packing of the individual muscles into the forearm structure demonstrates the second level of muscle architecture. Illustration © Lea and Febiger (1918) (Gray [43]), adapted with permission.

Fibre architecture, the arrangement of fibres inside the muscle, has an effect on the performance of an individual's muscles. So much so that muscles with similar geometric descriptions but different architectures can exhibit a 10- to 20-fold difference in contraction speed and force [59]. Such properties are used by the body to optimize specific muscles for certain roles; for instance, the architecture of the quadriceps appears well suited for generating high forces whilst that of the hamstrings allows for large excursions [57].

Observing fibre architectures is a difficult task. Since most of the techniques used to record fibre orientation are invasive, there is little work on studying subject-specific architecture variation within humans. Given that muscle architecture is so fundamental to many biomechanical responses, it is natural to ask whether this variation has any effect on individual task performance.

While fibre arrangement is the first component of muscle architecture, the second, more macroscopic architectural consideration is the arrangement and interaction of multiple muscles within a musculoskeletal system.
Packing constraints and fibre architecture are often linked, as the forces required for certain motions are produced by muscles which must also fit within the internal structure of the body. Fibre arrangement affects both force generation and muscle volume, and this coupling bridges our two levels of architecture. As such, multiple muscles with differing fibre architectures act in close proximity to produce human movement. Since the effects of these spatial relationships are difficult to test in vivo, due to restrictions in the way data may be gathered, numerical simulations offer one of the few avenues available to undertake this type of analysis.

The construction of such musculoskeletal models can be divided into three steps (Figure 1.2). The first is the acquisition of pertinent structural and physiological data, the second is the processing of this data into some useful form, and the final step is the use of this processed data inside a simulation framework. In this thesis we propose to tackle all three stages of this pipeline: we propose to develop new imaging methodologies for acquiring muscle architecture information, and to develop new simulation techniques which can handle the challenges imposed by the volumetric nature of our data as well as those imposed by the structure of muscle itself.

Figure 1.2: Our ideal musculoskeletal imaging pipeline, which begins with model acquisition and ends with the production of a dynamic simulation. (Diagram stage labels: Chapter 3, anatomical and diffusion weighted volumes; Chapter 4, anatomical volume and fibre field; Chapter 5.)

In this thesis we will touch upon topics ranging from Magnetic Resonance and Diffusion Tensor Imaging to the simulation of deformable bodies. For readers unfamiliar with these topics, Appendix A and the first section of Chapter 5, respectively, provide useful starting points. We will begin by discussing the problem of acquiring muscle fibre architecture data.
Here we have turned to Diffusion Tensor Imaging (DTI), a Magnetic Resonance Imaging (MRI) technique that measures anisotropic diffusion within tissue. Recently it has seen some use as a means to estimate bulk fibre parameters in muscle. However, it suffers from a low signal-to-noise ratio when used to image skeletal muscle, due to the magnetic resonance properties of the tissue itself. Furthermore, the size of certain muscles can affect the accuracy of the obtained measurements.

The first contribution of this thesis is an algorithm that exploits the physical constraints of muscle fibre arrangements to reconstruct reasonable muscle fibre fields from very noisy Diffusion Tensor Images. To our knowledge, there is no other muscle-specific denoising algorithm available for DTI. We illustrate the algorithm's effectiveness on synthetic data with various levels of added noise. We also show the results of applying the algorithm to DTI data acquired from the forearm of a human subject.

The second contribution of this thesis is a new technique for directly simulating volumetric data while robustly handling inter-object contact, such as the kind that occurs between muscles. Unlike traditional methods, which use a Lagrangian description of a given object, we present an Eulerian, embedded-boundary approach to resolving contact between highly deformable objects such as muscles. We demonstrate the algorithm's effectiveness using a number of close-contact scenarios involving deformable and rigid objects. Most notably, the algorithm can directly simulate volumetric data (such as MRI) with minimal preprocessing.
In summary, this thesis seeks to resolve two important technical problems in the development of subject-specific musculoskeletal simulators:

• Muscle fibre architecture acquisition from noisy diffusion tensor data
• A new technique for simulating volumetric imaging data with robust contact handling, and a discussion of extending this technique to produce simulations of complex musculoskeletal systems

Chapter 2

Related Work

Related work for this thesis can be divided into three distinct categories. The first is work done on classifying muscle architectures as well as establishing their functional significance, the second is work done on acquiring and processing muscle architecture data, and the third is work done on the numerical simulation of volumetric materials in contact.

2.1 Classification of Muscle Architecture and Functional Significance

A comprehensive review of the structural and functional aspects of muscle architecture is offered in Gans [39]. In this work Gans describes the basic structure of a muscle, which consists of parallel arrangements of muscle fibres grouped into motor units, which are themselves grouped into internal sub-domains. The motor unit is established as the smallest unit of muscle control, since it is connected to a single motor neuron which causes each fibre in the motor unit to contract in unison. In this paper fibre architectures are described using two broad classifications: parallel, in
Experimental evidence has shown that the maximum force a muscle can exert is proportional to PCSA. More in-depth analysis of fibre architectures has been carried out in Lieber and Blevins [58] and Lieber et al. [59] in which the muscle architecture of dissected rabbit hind limbs[58] and human forearm muscles [59] were classified. In both studies selected muscles from the region of study were removed, via dissection, from cadavers. It is specially noted in Lieber et al. [59] that only a subset of the muscles of interest could be studied from each cadaver because dissection destroys neighbouring tissues. Muscles were then classified using the following measured bulk fibre statistics: average fibre length, muscle length, pennation angle, PCSA and the ratio of fibre length to muscle length. The main observation of the study was that all five muscles could be classified with 100 percent accuracy using the fibre length, muscle length ratio, and pennation angle. This result becomes more interesting when contrasted with that of Lieber and Blevins [58] in which it was found that muscle weight was the primary metric by which muscles of the lower limb could be classified. Specifically that the greater degrees of freedom in the hand seem to be mirrored in the further differentiated muscle architectures of the hand muscles. The results of this study are supported by Ravichandiran et al. [84] in which the extensor carpi radialis brevis and longus muscles were digitized, ex7  vivo, using a fine tip probe. Fibre length, pennation angle and cross sectional area were then estimated from the digitized data. The results showed significant differences in the fibre statistics for the two studied muscles.  2.2  MRI Acquisition  Here we will review work that deals with the non-invasive acquisition and processing of muscle architecture data using Magnetic Resonance Imaging (MRI). van Donkelaar et al. 
[104] performed some of the initial experiments to determine whether Diffusion Tensor Imaging (DTI) could provide accurate fibre direction estimates for physical simulation. Imaging of the tibialis anterior muscle was performed using a high-field (4.7 T) small-animal magnet; the subject was an anesthetized Lewis rat with its hind limb immobilized and exposed. Thirteen slices of high-resolution anatomical Magnetic Resonance (MR) data were obtained with corresponding DTI images, and microscopy was used to obtain an image of a longitudinal muscle slice. DTI fibre directions were compared to those measured from both the longitudinal muscle image and the high-resolution MR data, and were found to exhibit no greater than 5 degrees of error. It was also reported that the DTI was less accurate near the boundary of the muscle than towards the medial axis. It is worth noting that this comparison was done using a single slice of DTI data in which the DTI direction at each point was computed as an average of the directions at the neighbouring 10 voxel locations. A finite element (FE) simulation of muscle contraction was performed using the computed fibre directions, but no quantitative results were reported.

Sinha and Yao [87] produced some of the earliest work on human musculoskeletal DTI. This work focused on a tetrahedral gradient encoding scheme which increases the signal-to-noise ratio of DTI acquisitions. This gradient encoding scheme can be used to compute the trace of a diffusion tensor as well as its off-diagonal terms. The technique was applied to perform in vivo DTI of a human calf muscle. The focus of this work was the computation of scalar metrics of anisotropic diffusivity (of which the trace of the diffusion tensor is one) and not accurate fibre direction computation. However, the authors did compute fibre directions within the calf, noting that due to low SNR the directions exhibit a very high variance.
They also observe that results could be improved using a more comprehensive gradient encoding scheme.

One of the most intriguing musculoskeletal DTI studies is outlined in Lansdown et al. [54]. In this study, pennation angles in the tibialis anterior muscle of a human volunteer were measured and compared to those obtained via ultrasound (US) measurement. To our knowledge this is the first human musculoskeletal DTI study in which fibre tracking (and thus fibre architecture) is at the forefront. Each subject's leg was immobilized using a custom apparatus, and MR imaging was performed using a knee coil in order to increase the SNR of the resulting scans. To image the entire calf region, overlapping imaging volumes were captured and stitched together. To compute pennation angle, the aponeurosis, which divides the muscle into two separate compartments, was manually segmented and used as a seed surface for fibre tracking. The pennation angle for each fibre was computed as the average angle between the fibre path and the plane tangent to the seed point which generated that fibre. This paper illustrates that, despite the shortcomings of DTI, accurate fibre information can be obtained provided it is extracted in a robust manner: the averaging of pennation along muscle fibres served to denoise the DTI data acquired during the study. Low SNR is mentioned in this paper as a major impediment to DTI being used in detailed studies of muscle microarchitecture or to diagnose injury.

The first DTI study of the human forearm was performed by Froeling et al. [38], a reproducibility study using six scalar parameters of diffusion but not fibre direction. It should be noted that our work in Chapter 4 is the first human forearm DTI study to examine fibre direction.

2.2.1 Denoising Diffusion Tensor Imaging Data

Because DTI data is most often used in a neurological setting, there are no algorithms (to the author's knowledge) that have been designed for musculoskeletal DTI.
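As background for the denoising methods reviewed in this section, MR magnitude images carry Rician-distributed noise: independent Gaussian noise corrupts the real and imaginary channels of the complex image, and taking the magnitude biases low-SNR values upward. A minimal simulation of this noise model (function name and parameters are my own, purely for illustration):

```python
import numpy as np

def add_rician_noise(magnitude, sigma, seed=0):
    """Simulate Rician noise on an MR magnitude image: add Gaussian noise
    to the real and imaginary channels of the complex image, then take the
    magnitude. At high SNR this is close to Gaussian; at low SNR the
    distribution is biased upward and distinctly non-Gaussian."""
    rng = np.random.default_rng(seed)
    re = magnitude + sigma * rng.standard_normal(magnitude.shape)
    im = sigma * rng.standard_normal(magnitude.shape)
    return np.hypot(re, im)
```

The low-SNR bias this produces is one reason naive averaging or Gaussian-noise assumptions fail for diffusion weighted images, motivating the probabilistic estimators discussed below.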
However, several algorithms designed for use on DTI of the brain are of interest. These algorithms apply regularization to the problem of estimating diffusion tensor images from diffusion weighted images. Two concurrently published works, Basu et al. [10] and Fillard et al. [35], utilize probabilistic estimators to compensate for Rician noise in Diffusion Weighted Imaging (DWI) data. Rician noise results from the complex valued, Gaussian distributed noise that arises because MR images are captured in the spatial frequency domain and then transformed into the spatial domain; taking the magnitude of the resulting complex image yields Rician distributed intensities. Their results focus on scalar measures of diffusion computed from diffusion tensors. However, they do show tensor images reconstructed from brain DWI data with a Signal-to-Noise Ratio (SNR) of 15. The results are visually pleasing though no quantitative analysis is performed using either patient or synthetic data. In Basu et al. [10] it is conjectured that noise is not an issue for structural imaging; this is an assumption we have explored in the completed work presented herein. The results of this exploration are available in Chapter 4. In Fillard et al. [35] a Log-Euclidean framework for tensor calculus, first described in [4], is used to derive a maximum likelihood procedure for reconstructing denoised diffusion tensor fields. A limited set of results is presented consisting of scalar metrics for diffusivity. Finally, Zhang et al. [114] use convolution with a non-Gaussian smoothing kernel to suppress noise in a noisy tensor field. Other methods of denoising, such as wavelet domain filtering [112], have been explored but only in the context of anatomical MRI. Their effect on diffusion tensor reconstruction is unknown.

2.2.2  Fibre Tracking in Diffusion Tensor Studies

In order to study fibre architecture using DTI, fibre information must be extracted from the reconstructed diffusion tensor fields.
Again, most of the techniques in the literature were developed for neurological tractography, but they remain pertinent for muscle architecture analysis. The most intuitive method for extracting fibres from DTI data is to extract streamlines from the primary eigenvector field of the diffusion tensor data. By definition the vector field gives the tangent of every possible streamline, and so streamlines can be computed by integrating particles through the field. In Ding et al. [29] the simplest numerical integration scheme, Forward Euler [5], is used. Other methods, including higher order integrators [50] and front evolution [78], have been applied to this problem as well. Methods such as Tensorlines [106] and Surfacelines [113] use weighted combinations of each tensor eigenvector in order to compensate for regions of isotropic diffusion in otherwise anisotropic structures. A comprehensive review of fibre tracking is available in Mori and van Zijl [67]. It should be noted that all the mentioned methods, not just those relying on numerical integration, place heavy emphasis on the primary eigenvector of the diffusion tensor.

2.3  Numerical Simulation using Muscle Architecture

Subject specific dynamic models of muscle can be invaluable tools for the diagnosis of movement disorders [80] as well as for the study of the neural control of movement [97]. Simulating musculoskeletal systems is an oft-explored topic and has been attempted using a myriad of techniques, each one utilizing fibre architecture information to varying degrees.

2.3.1  Line and Strand Based Simulations

Line-of-force models simplify muscles to line segments that apply forces along the line of action of the muscle. A prime example of this technique can be seen in Delp and Loan [25]. In this paper a graphics based framework for generating and simulating musculoskeletal systems was developed.
The framework allowed a user to specify bones, joints and muscle lines of action, from which joint moment arms could be computed and kinematic studies performed. Electromyography (EMG) data was used to activate the muscles and perform motion studies. In Delp and Loan [26] this framework was revisited and expanded upon. Similar techniques are used for the study of shoulder motion in Högfors et al. [45] and Maurel et al. [61]. In Maurel et al. [61] an extension to the single line-of-force method was used to simulate shoulder motion. Instead of single lines of action, muscle action was described by a piecewise continuous series of line segments [83]. This method provided an updated mechanism by which moment arms could be computed. Pandy [77] again used line-of-force models and an optimization based controller to simulate walking and pedaling motions in humans. Recently, alternatives to line-of-force models have been proposed. One such method, Strands [92, 93], models muscles as dynamic B-spline curves with mass. These primitives facilitate modelling the complex routing paths that muscles and tendons follow. Furthermore, the behaviour of the simulated system is now affected by the moving mass of the muscle itself, something previous line-of-force models have ignored.

2.3.2  Volumetric Simulations

Line-of-force and strand methods make assumptions reasonable for thin muscles with parallel architectures, but large volume muscles with complex fibre architectures are arguably best simulated using volumetric simulations (simulation methods that additionally capture behaviours perpendicular to the muscle line of action). These methods are potentially well suited to explore the role of complex, three dimensional fibre arrangements and packing constraints on muscle function. The seminal work on this kind of simulation was that of Ng-Thow-Hing [73] in which muscles were constructed using B-spline solids.
These same solids were also used to approximate the fibre architecture of the muscle. An embedded network of visco-elastic links was used to apply contractile forces to these solids in order to generate volume preserving deformations. In contrast to this, recent work has focused on using continuum mechanics based muscle models for simulation purposes. These methods treat a muscle as a reinforced composite and rely on discretizing the muscle in some way. In Blemker and Delp [15] template fibre architectures for parallel muscles, pennate muscles, curved muscles and fanned muscles were defined in simple domains. A transformation between these fibre domains and the finite element volume was determined and the fibre direction at each integration point of the model was sampled from these templates. In Teran et al. [96] the finite volume method was used to produce simulations an order of magnitude faster than those in Blemker and Delp [15]; however, this method assumes a constant fibre direction per computational element. The fibre directions come from the B-spline solids of Ng-Thow-Hing [73] and would, hypothetically, be much easier to obtain from experimental data. Mesh free methods have also been tentatively studied for volumetric simulation. In Zhou and Lu [115] a Non-Uniform Rational B-Splines (NURBS) based Galerkin method was used to simulate skeletal muscle contraction in isolation. All of the above volumetric methods belong to the class of Lagrangian simulations, which explicitly track an object's deformed configuration. Methods of this type are best distinguished by their selected discretization; Finite Element (FE) based simulations tend to rely on tetrahedral [47] or hexahedral meshes [70] while recently particle and frame based methods have gained popularity due to their mesh-free nature [27, 41, 69, 88].
Accurate tetrahedral and hexahedral mesh generation from surface data requires additional algorithms [53, 56] and, in cases of large deformation, these meshes must be updated over time to avoid numerical instabilities [7, 109]. Particle methods require an initial distribution of points and transient resampling of the simulated domain, as well as the specification of kernel functions to define particle support [69, 88]. The major complicating factor in the use of Lagrangian simulations for simulating medical imaging data is that the data is volumetric and has implicitly defined structure. Furthermore, the generation of an initial discretization can be challenging. In contrast to previous methods, simulations performed from the Eulerian perspective avoid these issues by discretizing space instead of the object itself. This approach is popular for simulating materials, such as fluids [37], for which a reference configuration is not easily defined. More recently, Eulerian simulations have been used to simulate melting solids [20] and viscoelastic fluids [42, 60] by incorporating additional stress terms into the equations for fluid flow. Hybrid methods have also been developed for granular materials [71, 116]. Eulerian simulations of elastic and elasto-plastic solids have seen use outside of graphics to study impact problems during which one or both participating objects may be severely deformed [6, 8, 65, 94, 100, 101]. All the methods described above evolve either the strain or the deformation gradient over the course of the simulation and thus suffer from an accumulation of numerical error that can, in the case of purely elastic materials, prevent them from returning to their undeformed states [7]. Recently, advection of reference coordinates has been explored as a means to prevent this in 2D hyperelastic simulations [51]. Eulerian simulations provide an additional advantage for simulating deformable objects in close contact (like muscles).
Because Eulerian simulations simulate multiple objects on the same spatial grid, they provide a built-in collision resolution mechanism [95]. This has previously been exploited in hair simulation to mitigate the difficulties inherent in detecting and resolving the collisions of thousands of individual hairs [62, 79]. This type of collision resolution does not allow tangential motion at the contact point and so additional collision resolution measures need to be applied. In McAdams et al. [62] a separate Lagrangian collision phase is applied after the Eulerian component of the simulation. Faure et al. [32] use penalty forces and constraints on displacement, derived from mapping disparate simulations to a common Eulerian grid, to resolve contacts. Tran and Udaykumar [100] represent the surface of a deforming solid implicitly using the particle level set method of Enright et al. [31], and use the difference in signed distance at grid nodes to detect impending collisions. Stick or slip boundary conditions are then applied based on a predefined threshold. This approach is also used in Barton and Drikakis [8]. Solid-fluid coupling [11, 21, 85] resolves collisions between a fluid and solids; however, the problem of multiple colliding, deforming solids is not considered. Eulerian techniques have also been used to resolve collisions between deforming Lagrangian objects [1] and to resolve solid-fluid coupling problems [85] (in which equality constraints are used to constrain fluid velocities normal to the surface but allow for discontinuous tangential velocity). In Chapter 5 we present our work on a 3D Eulerian simulator for close contacting volumetric solids that can directly simulate data from medical imaging modalities such as CT and MR.

Chapter 3

Data Acquisition

In this thesis we have used DTI to observe skeletal muscle fibre architecture non-invasively and this chapter details our acquisition protocol.
A brief introduction to Magnetic Resonance Imaging and Diffusion Tensor Imaging can be found in Appendix A for those unfamiliar with either imaging technique. All imaging for our studies was performed at the UBC High-Field MRI Research Centre using their 3T Philips Achieva MRI scanner. This scanner has dual Nova gradients (80 mT/m maximum gradient strength, 200 T/m/s maximum slew rate) and runs scanner software release 2.1.3. Our current acquisition protocol is designed to acquire data from the human forearm. It utilizes a series of varying MRI sequences targeting an identical Field of View (FOV) (Figure 3.1). The imaging subject lies prone in the scanner with the left arm raised straight overhead to be placed as optimally as possible in the magnet's centre. Centring the arm in the magnetic field is important as it maximizes field homogeneity and this, in turn, helps reduce noise in the resulting imaging data. The subject's forearm was secured in an 8-element phased array knee coil with a 15 cm inner diameter. The current imaging protocol consists of a fast gradient echo T1-Weighted (T1W) localizer (Figure 3.1A) for positioning and planning followed by low and high resolution T2-Weighted (T2W) (Figure 3.1B) Fast Spin Echo (FSE) scans for reconstruction of bone and muscle/fat surface boundaries. The session was concluded with a high resolution DTI scan for muscle fibre orientation and segmentation (Figure 3.1C). Quick low resolution anatomical FSE scans were acquired with an in-plane resolution of 1.5 × 1.5 mm2 and a slice thickness of 4 mm covering the entire lower arm. The high resolution FSE was designed to match the DTI scan in location, orientation and anatomy coverage with the following parameters: FSE-factor 12 with asymmetric profile order to give an effective echo time of TE=50 ms; field of view (FOV): 120 × 120 × 150 mm3 with an in-plane resolution of 0.65 × 0.65 mm2 and a slice thickness of 2 mm.
The lower resolution T2W-scan was used for segmenting bones and muscles that passed out of the field of view of the high resolution scan. Important parameters such as origin/insertion locations and bone coordinate systems could thus be obtained. Note that these two scans were run sequentially with the DTI scan and that the subject was immobilized. All image volumes were closely aligned; however, further image registration was performed and will be detailed later in this chapter. Diffusion Tensor Imaging was performed with a single shot diffusion sensitized spin-echo Echo-Planar Imaging (EPI) sequence involving 16 different gradient encoding directions at a maximum diffusion b-value of 500 s/mm2. We used a reduced FOV of 120 × 120 × 150 mm3, a SENSE-factor of 2.0 and enhanced gradient performance to shorten the echo train length of the EPI readout as much as possible for better compensation of susceptibility induced artifacts. Fat suppression was performed with a spectral spatial inversion prepared fat suppression technique. Further imaging parameters were as follows: Echo Time (TE) = 48 ms, Repetition Time (TR) = 6000 ms, acquisition matrix 80 × 80, leading to an effective acquisition voxel size of 1.5 × 1.5 × 2.0 mm3 and a scan time of 5 minutes.

Figure 3.1: Slices from each type of MRI acquisition. Image (A) shows a section of the T1 localizer scan acquired using the scanner body coil. Image (B) shows a slice of the high resolution T2 weighted anatomical scan acquired using the knee coil. Image (C) shows a diffusion weighted image (1 of 16) and image (D) shows the fractional anisotropy (FA) computed from the diffusion tensor imaging volume. The colours of the FA image show the direction of diffusion at each pixel: the blue component shows diffusion into/out of the imaging plane, red shows diffusion along the x axis and green shows diffusion along the y axis.

Additional image processing was done using the FSL software library.
An affine registration of anatomical and diffusion scans was performed using FSL's linear registration application FLIRT [49], and eddy current correction of the diffusion images was performed using the utility FDT [36]. We utilize ParaView, an open source visualization toolkit developed by Kitware Inc. [89], to generate streamlines from fibre fields reconstructed from DTI data. The stream tracing filter in ParaView produces paths through a vector field by integrating the positions of marker particles which originate from a geometric source. Integrator choices are limited to Runge-Kutta 2, Runge-Kutta 4 and an adaptive time-stepping Runge-Kutta 4-5. Typically we use the Runge-Kutta 4-5 integrator as it empirically produces the most visually pleasing results. The major limitation of our current diffusion acquisition protocol is that it fails to capture any forearm muscle in its entirety. Muscle segmentation was still possible due to the use of the aforementioned low-resolution anatomical scan which encompassed the entire arm of the subject. However, we are unable to extract muscle fibres for a whole muscle. This prevents us from being able to accurately compute bulk muscle fibre metrics such as average fibre length or physiological cross sectional area. The main reason for this is the limited FOV of the knee coil used to acquire both the DTI and high resolution anatomical data. In future work this could be overcome either by aligning and fusing multiple acquisitions or by using a different coil for the MR acquisition.

Figure 3.2: An example of imaging artifacts due to subject motion during imaging. Expanded views of the image regions within the red and blue rectangles are shown to the right of the full axial slice. Note the ringing prevalent in the selected image regions, which is the typical MRI motion artifact.
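The stream tracing idea described above can be sketched in a few lines. The sketch below is a minimal illustration, not ParaView's implementation: it uses nearest-neighbour tensor lookup and a Forward Euler integrator rather than the Runge-Kutta schemes mentioned above, and the toy tensor field, seed and step size are assumptions made for demonstration.

```python
import numpy as np

def primary_eigenvector(D):
    """Unit eigenvector of the largest eigenvalue of a symmetric 3x3 tensor."""
    w, v = np.linalg.eigh(D)          # eigenvalues in ascending order
    return v[:, -1]

def trace_fibre(tensor_field, seed, step=0.5, n_steps=50,
                init_dir=(0.0, 0.0, 1.0)):
    """Forward Euler streamline through a (X, Y, Z, 3, 3) tensor volume."""
    path = [np.asarray(seed, dtype=float)]
    direction = np.asarray(init_dir, dtype=float)
    bounds = np.array(tensor_field.shape[:3]) - 1
    for _ in range(n_steps):
        # Nearest-neighbour lookup of the tensor at the current position.
        idx = tuple(np.clip(np.round(path[-1]).astype(int), 0, bounds))
        e1 = primary_eigenvector(tensor_field[idx])
        # Eigenvectors carry a sign ambiguity; keep the heading consistent.
        if np.dot(e1, direction) < 0:
            e1 = -e1
        direction = e1
        path.append(path[-1] + step * e1)   # Forward Euler step
    return np.array(path)

# Toy field: diffusion everywhere strongest along z.
D = np.zeros((8, 8, 8, 3, 3))
D[..., 0, 0] = D[..., 1, 1] = 0.2
D[..., 2, 2] = 1.0
fibre = trace_fibre(D, seed=(4.0, 4.0, 0.0), step=0.5, n_steps=10)
```

With this toy field the traced path marches straight along +z from the seed; real fibre fields bend the path voxel by voxel, which is where higher order integrators earn their keep.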
Chapter 4

Muscle Fibre Reconstruction from Noisy Data

In this chapter we will describe our work in the area of musculoskeletal image processing, which encompasses the first and second stages of the proposed pipeline shown in Figure 1.2. Studying subject-specific muscle architecture necessitates a high-quality, minimally invasive method for acquiring muscle architecture information. Below we will discuss the motivation for a skeletal muscle specific denoising algorithm and demonstrate its effectiveness with a series of quantitative experiments and qualitative observations using both synthetic and in-vivo data.

4.1  Background and Motivation

As was previously mentioned, fibre architecture plays a large role in understanding the function of muscle. In fact, muscles made of similar fibre types but with different architectures can exhibit a 10 to 20 times difference in speed or strength [59]. Observing and modelling muscle architecture is important for computational studies of muscle performance [15] and in medical procedures such as tendon transfer operations [16]. Recent work has shown that Diffusion Tensor Imaging (DTI) can be used to accurately estimate bulk muscle architecture parameters such as pennation angle [54]. However, musculoskeletal DTI is difficult due to the magnetic resonance (MR) properties of muscle, which lead to a lower signal-to-noise ratio (SNR) in the diffusion weighted images [54]. Once DTI data has been acquired, fibre tracking can be applied in order to construct muscle fibre fields. These fibre fields can then be used for muscle architecture analysis [54]. Most fibre tracking algorithms construct fibres by advecting a particle or surface through the primary eigenvectors of the measured diffusion tensor field [9, 24, 29, 67]. Even methods such as Tensorlines utilize the primary eigenvector, weighted by the linear anisotropy of the underlying tensor [106].
Underlying these methods is the assumption that the extracted primary eigenvectors are correct; the presence of noise in the imaging data means this is not necessarily so. The question we wish to address, within the context of musculoskeletal DTI, is: how does one choose an appropriate fibre direction at each voxel given a potentially noisy tensor field?

4.2  Comparison to Other Work and Contribution

The problem of choosing a denoised fibre direction vector field directly from a diffusion tensor field is not often addressed in the literature. There has been a large focus on creating denoised tensor fields from noisy tensor or diffusion weighted data [10, 35, 63, 72, 102, 103, 105, 108, 114] and on directly using the primary eigenvectors of a tensor field as fibre directions during tracking [9, 22, 24, 29, 67, 103, 108]. Here we present a method that generates a globally optimal vector field from a noisy tensor field using assumptions about the biological structure of skeletal muscle (namely, that the fibre field is divergence free in most areas). This is in contrast to the standard methodology of performing simple eigenvector estimation after denoising tensor data. The algorithm's design is motivated by the Helmholtz-Hodge decomposition of a vector field. Computing this decomposition has been previously explored in the computer graphics literature by Tong et al. [98]. In this chapter we present a new method for computing a denoised muscle fibre field from a densely sampled diffusion tensor imaging volume. This allows us to reconstruct noise-reduced fibre direction fields.

4.3  Methods

In the following section we describe a method for constructing noise-reduced vector fields from musculoskeletal DTI images. The algorithm is based on constrained optimization and the Helmholtz-Hodge decomposition of a vector field. It exploits simple properties of muscle fibre arrangement to produce accurate vector fields from highly corrupted diffusion tensor fields.
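The fragility of naive per-voxel eigenvector extraction can be seen in a small numerical experiment. The sketch below is illustrative only; the tensor, noise level and trial count are assumptions, not values from our data. A strongly anisotropic tensor is perturbed with symmetric Gaussian noise and the angular error of the extracted primary eigenvector is measured.

```python
import numpy as np

rng = np.random.default_rng(7)
D_true = np.diag([1.0, 0.2, 0.2])     # hypothetical tensor, fibre along x
true_dir = np.array([1.0, 0.0, 0.0])

angles = []
for _ in range(500):
    E = rng.normal(0.0, 0.2, size=(3, 3))
    E = 0.5 * (E + E.T)               # keep the perturbed tensor symmetric
    _, v = np.linalg.eigh(D_true + E)
    e1 = v[:, -1]                     # primary eigenvector of the noisy tensor
    angles.append(np.degrees(np.arccos(min(abs(float(e1 @ true_dir)), 1.0))))

mean_error = float(np.mean(angles))   # substantial at this noise level
```

Per-voxel extraction gives an angular error that grows with the noise level and shrinks with the eigenvalue gap; the method below instead couples all voxels together through a global, physiology-driven optimization.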
4.3.1  Theoretical Background

Muscle fibres are, at some level, parallel and, except at specific regions (the insertion and origin of the muscle), do not converge or diverge. The presented algorithm is motivated by the Helmholtz-Hodge decomposition of the muscle fibre field, which is defined as

u(x) = h(x) + p(x) + g(x),
∇ · h = 0, ∇ × h = 0,   (4.1)
∇ · p = 0, ∇ × g = 0

where u is the fibre vector field, ∇ is the gradient operator, ∇· is the divergence operator and ∇× is the curl operator. The Helmholtz-Hodge decomposition contains two divergence free components, h and p (h is known as the harmonic field), as well as one curl free component, g. Because g is the only component with non-zero divergence it will contain the insertions and origins of the muscle fibres, while the other two components describe the gross divergence free structure of the fibre field. We can further simplify the decomposition by interpreting a diffusion tensor as a description of the probable direction of the diffusion gradient. Since diffusion is driven by an underlying concentration function we can view the problem of fibre field reconstruction as finding the concentration function, u, that best fits the described diffusion gradient. The reconstructed vector field ∇u will necessarily be curl free. Because muscle fibres are locally parallel it is reasonable to assume that the rotational component of the field is likely to be noise; thus assuming that the reconstructed vector field is the gradient of a scalar function u should lead to better denoising performance. It also causes p in Equation 4.1 to be zero because it is the only component of the decomposition that has non-zero curl. These simplifying assumptions lead us to estimate a subset of the full Helmholtz-Hodge decomposition (h and g), and below we describe how to estimate g and h from noisy tensor data (Figure 4.3).
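The claim that the gradient of a scalar concentration is necessarily curl free can be checked numerically. The sketch below is an illustration, not part of the reconstruction algorithm: it differentiates an arbitrary smooth scalar field with finite differences and verifies that the discrete curl of its gradient vanishes to rounding error.

```python
import numpy as np

n = 32
dx = 1.0 / (n - 1)
x, y, z = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n),
                      np.linspace(0, 1, n), indexing="ij")
u = np.sin(2 * np.pi * x) * y**2 + z * y      # arbitrary smooth concentration

# Gradient field via central differences (one-sided at the boundary).
ux, uy, uz = np.gradient(u, dx)

# Discrete curl of (ux, uy, uz); difference operators along distinct axes
# commute, so each component cancels up to floating point rounding.
curl_x = np.gradient(uz, dx, axis=1) - np.gradient(uy, dx, axis=2)
curl_y = np.gradient(ux, dx, axis=2) - np.gradient(uz, dx, axis=0)
curl_z = np.gradient(uy, dx, axis=0) - np.gradient(ux, dx, axis=1)
curl_mag = np.sqrt(curl_x**2 + curl_y**2 + curl_z**2)
```

This is exactly the property exploited above: any rotational component measured in the data cannot come from a concentration-driven field, so it can be discarded as noise.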
Let D(x) be the diffusion tensor at point x, let u(x) be the concentration gradient at point x and let u(x) be the scalar concentration at point x. By definition we know that

u(x) = ∇u(x).   (4.2)

We define the following cost function for the match between u(x) and D(x):

c(D, u) = Σ_{i=1}^{n} [u(x_i)^T D(x_i) u(x_i)] / [u(x_i)^T u(x_i)]   (4.3)

where n is the number of voxels in the imaging volume. This cost function is a sum of Rayleigh quotients, and each term in the sum is maximized when u(x) is in the same direction as the primary eigenvector of D(x). In order to construct a noise free vector field that is consistent with muscle physiology we must constrain this maximization. Instead of finding the set of vectors, u, that maximize the cost in Equation 4.3 we search for an optimal concentration u (related by Equation 4.2). The divergence free constraint then becomes a constraint on the Laplacian of u and the optimization performed is

h = arg max_u Σ_{i=1}^{n} [∇u(x_i)^T D(x_i) ∇u(x_i)] / [∇u(x_i)^T ∇u(x_i)]
s.t. ∇ · ∇u = 0   (4.4)

where h = ∇h is the harmonic field in Equation 4.1. Next we must solve for g, which we have chosen to represent as a linear combination of Radial Basis Functions (RBFs) (Equation 4.5). Our assumption is that g contains the sources and sinks of the muscle fibre field. These sources and sinks alter fibre directions locally, with the effect dissipating with distance. RBFs are an appropriate set of basis functions because they mimic this behaviour:

g(x) = Σ_{i=1}^{n} a_i ∇g_i(x)   (4.5)

where g_i is a radial basis function (RBF) and the a_i are coefficients. The complete vector field can then be represented as

u(x) = ∇[ h(x) + Σ_{i=1}^{n} a_i g_i(x) ].   (4.6)

We solve for the coefficients, a_i, by substituting this u into the cost function Equation 4.3 and performing an unconstrained optimization in which h is held constant. Again, we are assuming that the bulk of the muscle is divergence free. Muscle fibres converge or diverge to or from tendons at the aponeurosis. The computed g corrects h at these points. We have found that Gaussian RBFs (Equation 4.7) offer the best denoising in our experiments. Each RBF is positioned at a point y in space. In the current implementation we centre an RBF at each grid point in the imaging volume. The parameter σ can be used to adjust the support of the function. In practice a larger σ causes the denoised vector field to be smoother while a smaller σ preserves more of the field's details.

g_i(x) = exp[ −(1/σ^2) (x − y_i)^T (x − y_i) ]   (4.7)

4.3.2  Numerical Implementation

We reconstructed DTI image volumes from diffusion weighted images (from both synthetic and human subject data) using non-linear least squares fitting. The reconstruction implementation enforces the constraint that each diffusion tensor in the DTI image must be Symmetric Positive Definite (SPD) [13]. This is done by expressing each diffusion tensor as a Cholesky factorization, which allows the SPD constraint to be incorporated into the cost function minimized by the least squares procedure. Each DTI image volume contains tensors at the discrete points indexed by the triple (i, j, k). Finite differences are a straightforward method with which to estimate the gradient and the Laplacian of u. However, one-sided finite difference schemes on a regular grid do not incorporate information from the entire surrounding neighbourhood of the discrete tensor field D_{ijk}. To remedy this problem we discretize u and D using a Marker and Cell (MAC) grid [17]. In this way we can accurately compute the gradient of u at the exact location of D using centred differences. In two dimensions the Laplacian operator at a point indexed by (i − 1/2, j) can be defined as

L_{i−1/2,j} = (1/Δy^2) (u_{i−1/2,j−1} − 2u_{i−1/2,j} + u_{i−1/2,j+1})
            + (1/Δx^2) (u_{i−3/2,j} − 2u_{i−1/2,j} + u_{i+1/2,j})   (4.8)
In two dimensions the Laplacian operator at a point indexed by (i − 1/2, j) can be defined as  Li−1/2, j =  1 (u − 2ui−1/2, j + ui−1/2, j+1 ) ∆y i−1/2, j−1 1 + (ui−3/2, j − 2ui−1/2, j + ui+1/2, j ) ∆x  28  (4.8)  ui-1/2,j+1  ui-3/2,j  ui-1/2,j ui,j  ui+1/2,j  ui-1/2,j-1  Figure 4.1: A diagram of the nodes used to compute the Laplacian at node ui−1/2, j which is depicted by the solid black circle. Neighbouring nodes are shown as hollow circles. where ∆x is the pixel spacing in x and ∆y is the pixel spacing in the y direction (Figure 4.1). Similar formulas can be defined for nodes at ui, j+1/2 . Linear interpolation is used to calculate values of u that lie off of the MAC grid. We also define the gradient of u, at node (i, j) as  Gi, j =  1 1 (ui+1/2, j − ui−1/2 ), (ui, j+1/2 − ui, j−1/2 ∆x ∆y  T  (4.9)  so that it is centred over each diffusion tensor Di j in the image. By assigning each of the n discrete cell centres i, j a unique number k and each of the p discrete cell wall points a unique number m we can use equations Equation 4.8 and Equation 4.9 to define matrices L and G such that  29  p  lk =  ∑ Lkm um  (4.10)  m=1  and p  uk =  ∑ Gkm um  (4.11)  m=1  where lk is the Laplacian computed at point k, uk is the gradient at point k and um is the vector containing values of u at each cell wall point. Please note that in actual fact the matrix G has 2 (or 3)n rows because the gradient at each point has two (or three in three dimensions) components (Equation 4.9) and so the indexing must increment by two (or three). Here we ignore this detail in in order to simplify the notation. As a further notational convenience we define Gk which is a matrix consisting of the rows of G required to compute uk in the following manner  uk = Gk u  (4.12)  where u is now the vector of all the discrete values of u. Using the above operators we can finally define the discrete form of Equation 4.4, used in the first step of the algorithm, as  n  uT GTi Di Gi u . 
T T i=1 u Gi Gi u  h = max ∑ u  (4.13)  s.t. Lu = 0 When solving for g we begin by constructing the matrix R which is defined as  30  R = [∇g1 ∇g2 ...∇gn ]  (4.14)  where each column of the matrix, ∇g j , is the gradient of an RBF (Equation 4.5) centred at one of the the 1...n MAC grid cell centres and evaluated at all of the 1...n cell centres. In a similar manner as before we can define Rk such that  uk = hk + Rk a  (4.15)  where a is the vector of coefficients from Equation 4.5 and hk is Gk h. By substituting Equation 4.15 into Equation 4.3 we can define the discrete cost as a function of the coefficients, a. We can then define the optimization used in the second stage of the algorithm as  n  (Gi h + Ri a)T Di (Gi h + Ri a) . T i=1 (Gi h + Ri a) (Gi h + Ri a)  max ∑ a  (4.16)  Our algorithm proceeds in two phases, the initial h-phase solves the optimization described by Equation 4.13. This can be done using standard techniques. We use constrained steepest descent to minimize a negated Equation 4.13. We maintain the constraints by projecting onto the nullspace of L at each step. This projection is defined as P = I − LT (LLT )−1 L  (4.17)  where I is the identity matrix. The second phase, g-phase, solves the optimization described by Equation 4.16. We use the limited memory Broyden-Fletcher31  Goldfarb-Shanno (L-BFGS) method [19] to minimize a negated Equation 4.16. The solution methods used in both the h-phase and g-phase of the algorithm require the gradients of equations Equation 4.13 and Equation 4.16 which are given by n  ∇u c = ∑ −2GTi i=1  (uTi Di ui )ui − (uTi ui )Di ui (uTi ui )2  (4.18)  (uTi Di ui )ui − (uTi ui )Di ui (uTi ui )2  (4.19)  and n  ∇a c = ∑ −2RTi i=1  respectively. It should be noted that if a vector field h is a solution to Equation 4.13 then ah is also a solution for any scalar a. However in terms of optimality (Equation 4.3) h and ah are equal. We normalize h such that the maximum vector length in the field is 1. 
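The nullspace projection of Equation 4.17 can be demonstrated on a small discrete Laplacian. The sketch below is illustrative only: a 1D periodic Laplacian stands in for the full MAC grid operator, and a pseudo-inverse replaces (LL^T)^{−1} because the periodic stencil is rank deficient. Any vector passed through the projector then satisfies the constraint Lu = 0, which is exactly how the projected gradient steps of the h-phase stay feasible.

```python
import numpy as np

n = 16
# Periodic 1D Laplacian stencil: L[i, i] = -2, L[i, i +/- 1] = 1 (wrapped).
L = (-2 * np.eye(n) + np.roll(np.eye(n), 1, axis=1)
     + np.roll(np.eye(n), -1, axis=1))

# Projector onto the nullspace of L (Equation 4.17); pinv handles the
# rank deficiency of this periodic operator.
P = np.eye(n) - L.T @ np.linalg.pinv(L @ L.T) @ L

rng = np.random.default_rng(1)
v = rng.normal(size=n)          # e.g. an unconstrained gradient direction
u = P @ v                       # projected direction
residual = float(np.linalg.norm(L @ u))   # ~0: constraint satisfied
```

Constant vectors lie in the nullspace of this Laplacian and pass through P unchanged, mirroring the observation above that the harmonic potential is only determined up to scale.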
As noted above, the new vector field is still a solution to Equation 4.13 and its cost is unchanged, but our starting harmonic for the g-phase of the algorithm becomes unambiguous. For real DTI datasets the optimization scheme was modified. The initial constrained optimization, Equation 4.13, is difficult to solve for large datasets due to the memory required to store the computational grid. In these cases we solve for the divergence free field on a coarse grid and then use trilinear interpolation to resample the solution to full resolution so that the second optimization can be performed. This resampling procedure is known as prolongation. The Laplacian (Equation 4.8) has a large magnitude in areas where h, the scalar potential of the harmonic field, has a high spatial frequency. Since we are constraining h to have a Laplacian equal to zero we are searching for a function with low spatial frequencies. Such a function can be represented on a lower resolution spatial grid. In practice we limit the dimensions of the coarse grid to a maximum of 16 × 16 × 16 voxels. Numerical experiments also show that all constraints remain satisfied after applying the prolongation operator. Solving for g is done using a stochastic method in which we randomly choose m voxel locations in the DTI data. We then solve for a using Equation 4.16 and compute an updated u using Equation 4.15. This procedure is repeated until all voxel locations in the DTI dataset have been sampled. Because the initially described method solves for h using a full resolution grid and uses a dense sampling of the imaging volume to compute g, we term it Full Resolution, Dense Sampling (FRDS), whereas the second method uses a Coarse Resolution grid and Stochastic Sampling (CRSS) (Figure 4.2). Pseudocode for each algorithm is shown in Algorithms 1 and 2. The initial step in both algorithms is to compute the primary eigenvectors of each tensor D_i.
This is done using LAPACK [2] and in the pseudocode we denote this operation as e(D). We use MINPACK [48] for all non-linear least squares computations. Specifically, we use the reference FORTRAN implementation of LAPACK and the C++ version of MINPACK [28].

4.3.3 Synthetic Data Generation

In order to test the effect of noise on the denoising algorithm we generated four synthetic datasets. The first was a constant vector field with the vector (0, 0, 1) at each point. The second was a two dimensional (2D) bipennate muscle dataset. This dataset was an 8 × 8 × 8 volume of tensors computed from a vector field defined by

v_s(x) = (1, −1, 0)^T if x_x < 4;  (−1, −1, 0)^T if x_x > 4.   (4.20)

Figure 4.2: A schematic of the CRSS and FRDS algorithms.

The third was a semi-circular parallel line dataset described by

v = (−r sin(θ), r cos(θ), 0)^T   (4.21)

where r = ‖x − x_c‖, x_c is the centre of the curves and θ is arctan(y/x). The fourth was a synthetic 3D bipennate dataset described by

v_s(x) = (1, −1, 1)^T if x_x < 4;  (−1, −1, 1)^T if x_x > 4.   (4.22)

Algorithm 1 Pseudocode for the Full Resolution, Dense Sampling Algorithm.
1: h0 ⇐ (G^T G)^{−1} G^T e_1(D)
2: P ⇐ I − L^T (LL^T)^{−1} L
3: h0 ⇐ P h0
4: tol ⇐ 1e−8
5: rel ⇐ 1e8
6: c_prev ⇐ ∑_{i=1}^n (h0^T G_i^T D_i G_i h0) / (h0^T G_i^T G_i h0)
7: while rel > tol do
8:   u ⇐ G h0
9:   ∇c ⇐ ∑_{i=1}^n −2 G_i^T [(u_i^T D_i u_i) u_i − (u_i^T u_i) D_i u_i] / (u_i^T u_i)^2
10:  ∇c ⇐ P ∇c
11:  h0 ⇐ BrentsMethod(h0, ∇c) [82]
12:  c ⇐ ∑_{i=1}^n (h0^T G_i^T D_i G_i h0) / (h0^T G_i^T G_i h0)
13:  rel ⇐ (c − c_prev)/c
14: end while
15: a0 ⇐ 0
16: a ⇐ Minimize −∑_{i=1}^n ((G_i h0 + R_i a_k)^T D_i (G_i h0 + R_i a_k)) / ((G_i h0 + R_i a_k)^T (G_i h0 + R_i a_k)) using L-BFGS with a0 as an initial guess. {a_k are the coefficients at the kth iteration of L-BFGS.}
17: u ⇐ G h0 + R a
18: return Normalize(u)

Simulated Diffusion Weighted Images were computed from these vector fields using the algorithm of Bergmann et al. [13].
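As an illustrative sketch (not the thesis code), the 3D bipennate field of Equation 4.22, a Rician noise model and the similarity measure of Equation 4.23 can be written in a few lines of Python; the SNR-to-sigma convention used here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(7)

def bipennate_3d(n=8):
    """Synthetic 3D bipennate fibre field of Equation 4.22: the fibre
    direction flips across the plane x = 4."""
    v = np.empty((n, n, n, 3))
    x = np.arange(n)[:, None, None, None]               # voxel x-coordinate
    v[...] = np.where(x < 4, np.array([1.0, -1.0, 1.0]),
                      np.array([-1.0, -1.0, 1.0]))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def add_rician_noise(signal, snr):
    """Rician noise: magnitude of a complex signal with i.i.d. Gaussian
    noise in both channels; sigma is set from the target SNR (an assumed
    convention, not necessarily the one used in the experiments)."""
    sigma = signal.mean() / snr
    real = signal + rng.normal(0.0, sigma, signal.shape)
    imag = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt(real ** 2 + imag ** 2)

def similarity(v1, v2):
    """Equation 4.23: mean absolute dot product of two unit vector fields."""
    return float(np.mean(np.abs(np.sum(v1 * v2, axis=-1))))
```

The absolute value in the similarity measure makes it insensitive to sign flips, which is appropriate because a fibre direction and its negation describe the same fibre.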
Rician noise was added to the artificial DWI images at varying SNR. Noisy tensor fields were then computed from these DWI images. Both algorithm versions were tested on SNR values ranging from 20 to 5. The primary eigenvector field of each noisy tensor field was extracted and compared to the original vector field using the sum of the absolute value of the dot product as a similarity measure:

s = (1/n) ∑_{i=1}^n |v1(x_i)^T v2(x_i)|   (4.23)

where n is the number of voxels in the dataset.

Algorithm 2 Pseudocode for the Coarse Resolution, Stochastic Sampling Algorithm.
1: T ⇐ Trilinear prolongation operator
2: h0 ⇐ argmin_h ‖G(T · h) − e_1(D)‖^2 using Non-linear Least Squares (MINPACK [48]) {T · h interpolates h from the coarse grid resolution to the resolution of the imaging data.}
3: P ⇐ I − L^T (LL^T)^{−1} L
4: h0 ⇐ P h0
5: tol ⇐ 1e−8
6: rel ⇐ 1e8
7: c_prev ⇐ ∑_{i=1}^n (h0^T G_i^T D_i G_i h0) / (h0^T G_i^T G_i h0)
8: while rel > tol do
9:   u ⇐ G(T · h0)
10:  ∇c ⇐ ∑_{i=1}^n −2 T^T G_i^T [(u_i^T D_i u_i) u_i − (u_i^T u_i) D_i u_i] / (u_i^T u_i)^2
11:  ∇c ⇐ P ∇c
12:  h0 ⇐ BrentsMethod(h0, ∇c) [82]
13:  c ⇐ ∑_{i=1}^n (h0^T G_i^T D_i G_i h0) / (h0^T G_i^T G_i h0)
14:  rel ⇐ (c − c_prev)/c
15: end while
16: m ⇐ Number of RBFs to fit at each iteration
17: S ⇐ Set of all unsampled voxel locations
18: u ⇐ G(T · h0)
19: while S is not empty do
20:  S_m ⇐ m randomly selected voxel locations from S
21:  S ⇐ S − S_m
22:  R_m ⇐ [∇g_{S_1} ... ∇g_{S_m}]
23:  a_m ⇐ Minimize −∑_{i=1}^n ((u_i + R_i^m a)^T D_i (u_i + R_i^m a)) / ((u_i + R_i^m a)^T (u_i + R_i^m a)) using L-BFGS with 0 as an initial guess.
24:  u ⇐ u + R_m · a_m
25: end while
26: return Normalize(u)

Denoised vector fields were then computed and these were compared to the original vector field using the same metric.

4.3.4 MRI Data Acquisition and Segmentation

Data acquisition was performed in the manner described in Chapter 3. Muscle surfaces were created by segmenting the anatomical MRI data using the fast shape matching algorithm of Gilles and Pai [40].
The three muscle surfaces generated were those of the Brachioradialis (BR), Extensor Carpi Radialis Longus (ECRL) and the Extensor Carpi Radialis Brevis (ECRB). Volumetric masks were generated from these muscle surfaces and these were then used to mask the DTI data in preparation for denoising. Fibre tracking was performed on each muscle vector field (prior to and after denoising) using ParaView, an open source visualization toolkit developed by Kitware Inc. [89].

As was previously mentioned in Chapter 3, one limitation of our current diffusion acquisition protocol is that it fails to capture any forearm muscle in its entirety. Muscle segmentation was still possible due to the use of the aforementioned low-resolution anatomical scan which encompassed the entire arm of the subject. However, we were unable to extract muscle fibres for a whole muscle. This prevented us from being able to accurately compute bulk muscle fibre metrics such as average fibre length or physiological cross sectional area. We hope to address this issue in future work and will discuss it further in later sections of this thesis.

4.4 Results and Discussion

Figure 4.3 shows the output of each stage of the algorithm for a synthetic fusiform muscle tensor field. The top left subfigure shows the principal eigenvectors of the input tensor field. Notice that, as expected, the curl free component contains the sources and sinks at either end of the synthetic fusiform muscle dataset. Because we are only interested in the direction of each vector we normalize the final summed vector field in order to produce the final output, which is shown in the top right of Figure 4.3 for comparison with the input vector field. It is also worth noting that the algorithm corrects the spurious directions present in the input vector field. The consistent vector directions are a result of the initial, divergence free optimization. The gradient field of h neither converges nor diverges.
This eliminates the flipped vectors in Figure 4.3.

Figure 4.3: The output of the various algorithm stages. This figure shows the principal eigenvector field of the input tensors (top left) as well as the final output of the algorithm (top right) and the output of the divergence free reconstruction, estimated curl free component and their sum.

4.4.1 Synthetic Data Results

Figures 4.4, 4.5, 4.6, and 4.7 summarize the results of applying the FRDS and the CRSS denoising algorithms to all four synthetic datasets. In all cases the following patterns can be observed. Firstly, for each combination of SNR and σ both denoising implementations produce vector fields that are more similar (Equation (4.23)) to the ground truth than the primary eigenvector fields. Secondly, the two larger values of σ, 20 and 100, produce noticeably better results for low SNR. Both denoising implementations follow comparable patterns of decreasing similarity as SNR is reduced and σ is increased. However, it is interesting that the CRSS algorithm produces more similar results than the FRDS implementation in the 3D bipennate case for σ = 5. The second stage of the FRDS algorithm attempts to fit RBFs at each voxel location simultaneously, whereas the CRSS algorithm iteratively fits randomly selected RBF subsets. A sparse set of RBFs may provide a better approximation considering the small number of insertions and origins (i.e. sources and sinks) found in muscle, whereas the dense sampling of the FRDS algorithm could be overfitting the data.

Figures 4.8, 4.9, 4.10, and 4.11 show the noisy tensor fields, the noisy eigenvector fields and the denoised vector fields at zero noise (infinite SNR) and SNR values of 20, 10, and 5 for all synthetic datasets. Both denoising algorithms introduce some smoothing in the synthetic bipennate datasets.
This is to be expected because we are attempting to fit a smooth field to the underlying tensor data. This smoothing is also a consequence of the RBF fitting stage of the algorithm, which uses neighborhood information to estimate the underlying function u. Despite this, both algorithms always produce vector fields which are more similar to the original, synthetic field than the primary eigenvector fields (Figures 4.4, 4.5, 4.6 and 4.7). As the SNR is decreased both the FRDS and the CRSS algorithms impart more smoothing but still produce results that capture the underlying structure of the synthetic datasets.

Figure 4.4: Synthetic Z-Axis Vector Field: The similarity of both the denoised vector field and the noisy primary eigenvector field to the original, zero noise data. The result for each algorithm is denoted by the algorithm name followed by the value of the parameter σ.

Figure 4.5: Synthetic Curved Vector Field: The similarity of both the denoised vector field and the noisy primary eigenvector field to the original, zero noise data. The result for each algorithm is denoted by the algorithm name followed by the value of the parameter σ.

Figure 4.6: Synthetic 2D Bipennate Vector Field: The similarity of both the denoised vector field and the noisy primary eigenvector field to the original, zero noise data. The result for each algorithm is denoted by the algorithm name followed by the value of the parameter σ.

Figure 4.7: Synthetic 3D Bipennate Vector Field: The similarity of both the denoised vector field and the noisy primary eigenvector field to the original, zero noise data. The result for each algorithm is denoted by the algorithm name followed by the value of the parameter σ.
These observations are supported quantitatively by Figures 4.4, 4.5, 4.6 and 4.7, which show that even at low SNR values the denoised vector fields maintain a high (> 0.9 for σ ≥ 10) similarity with the synthetic vector fields.

Both the FRDS and CRSS algorithms were also compared with the recently published Joint Rician Linear Minimum Mean Squared Error (JRLMMSE) method of Tristán-Vega and Aja-Fernández [102]. JRLMMSE acts on Diffusion Weighted Images prior to tensor calculation in order to reduce the Rician noise. JRLMMSE was applied to the previously defined synthetic datasets (Equations (4.20), (4.22), (4.21)) using the implementation of Tristán-Vega et al. available in the 3D Slicer software platform [81]. The denoised DWIs were used to create a tensor field from which the eigenvector field was extracted. Since we are primarily interested in the performance of the presented algorithms under low SNR conditions, we performed the comparison using an SNR of 5, the results of which are shown in Table 4.1. Parameters for JRLMMSE were manually tuned to achieve the best quantitative results. A σ value of 10 was used for the FRDS and CRSS algorithms. For this value of σ the results for the FRDS and CRSS are nearly identical (Figures 4.4, 4.5, 4.6 and 4.7); thus we present similarity values for the FRDS only. Both algorithms produced denoised vectors that were more similar to the ground truth than those estimated by naive eigenvector computation. The results also show that the FRDS algorithm produced primary eigenvector fields that were quantitatively closer to the ground truth for three of the four datasets (Table 4.1). For the Z-

Figure 4.8: A comparison of denoised results to noisy synthetic z-axis tensor and eigenvector fields for SNR values of 20 to 5. Each row shows images from a specific SNR with 0 noise indicated by Inf. The middle slice of each 3D dataset is shown.
Displayed in order from left to right are the tensor volume, the primary eigenvectors of the tensor volume, the FRDS denoising result and the CRSS denoising result. Red, green and blue values of each vector are colored according to their magnitude in the coordinate directions. The red component is equal to the vector magnitude in x, green corresponds to the y magnitude and blue to the z magnitude.

Figure 4.9: A comparison of denoised results to noisy synthetic curved tensor and eigenvector fields for SNR values of 20 to 5. Each row shows images from a specific SNR with 0 noise indicated by Inf. The middle slice of each 3D dataset is shown. Displayed in order from left to right are the tensor volume, the primary eigenvectors of the tensor volume, the FRDS denoising result and the CRSS denoising result. Red, green and blue values of each vector are colored according to their magnitude in the coordinate directions. The red component is equal to the vector magnitude in x, green corresponds to the y magnitude and blue to the z magnitude.

Figure 4.10: A comparison of denoised results to noisy synthetic 2D bipennate tensor and eigenvector fields for SNR values of 20 to 5. Each row shows images from a specific SNR with 0 noise indicated by Inf. The middle slice of each 3D dataset is shown. Displayed in order from left to right are the tensor volume, the primary eigenvectors of the tensor volume, the FRDS denoising result and the CRSS denoising result. Red, green and blue values of each vector are colored according to their magnitude in the coordinate directions. The red component is equal to the vector magnitude in x, green corresponds to the y magnitude and blue to the z magnitude.

Figure 4.11: A comparison of denoised results to noisy synthetic 3D bipennate tensor and eigenvector fields for SNR values of 20 to 5. Each row shows images from a specific SNR with 0 noise indicated by Inf. The middle slice of each 3D dataset is shown.
Displayed in order from left to right are the tensor volume, the primary eigenvectors of the tensor volume, the FRDS denoising result and the CRSS denoising result. Red, green and blue values of each vector are colored according to their magnitude in the coordinate directions. The red component is equal to the vector magnitude in x, green corresponds to the y magnitude and blue to the z magnitude.

Axis dataset JRLMMSE produces slightly better quantitative results; however, for both the 2D and 3D Bipennate datasets the FRDS algorithm produces vector fields with considerably higher similarity (Table 4.1). These results illustrate that the FRDS and CRSS algorithms provide equivalent or better denoising performance than JRLMMSE in low SNR scenarios acting on skeletal muscle-like datasets.

Table 4.1: A comparison of denoised vector fields produced by the Full Resolution, Dense Sampling algorithm and the Joint Rician Linear Minimum Mean Squared Error algorithm [102]. Results are shown for four synthetic datasets with signal-to-noise ratio equal to 5.

          Z-Axis   2D BiPenn.   Curves   3D BiPenn.
FRDS      0.978    0.981        0.999    0.970
JRLMMSE   0.981    0.955        0.843    0.772

4.4.2 Human Subject Data Results

Figure 4.12 shows the orientation of the DWI scan as well as three segmented muscle surfaces relative to the arm and hand of the subject. The three muscle surfaces shown are the brachioradialis (red), the extensor carpi radialis longus (green) and the extensor carpi radialis brevis (blue). Three slices of DWI data are also shown; the first and last slices illustrate the extent of the DWI acquisition.

The CRSS denoising algorithm was applied to DTI data computed from the forearm of a human subject. Specifically, we have produced denoised vector fields for portions of three muscles in the forearm, the ECRL, ECRB and BR. We only show subsets of each muscle due to the aforementioned limitations in the MRI acquisition protocol.
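The fibre tracking used throughout this chapter was performed in ParaView, but its core operation — integrating streamlines through a fibre direction field — can be sketched in a few lines. Forward Euler integration and nearest-neighbour direction lookup are simplifying assumptions here; production tracers use higher-order integration and interpolation:

```python
import numpy as np

def trace_fibre(v, seed, step=0.25, n_steps=200):
    """Forward-Euler streamline through a vector field v of shape (X,Y,Z,3),
    starting at `seed` (in voxel coordinates), with nearest-neighbour
    direction lookup. Stops at zero vectors (outside the muscle mask) or at
    the edge of the volume."""
    hi = np.array(v.shape[:3]) - 1
    path = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        idx = tuple(np.clip(np.rint(path[-1]), 0, hi).astype(int))
        d = v[idx]
        norm = np.linalg.norm(d)
        if norm == 0:                       # left the muscle mask
            break
        nxt = path[-1] + step * d / norm
        if np.any(nxt < 0) or np.any(nxt > hi):
            break                           # left the imaging volume
        path.append(nxt)
    return np.array(path)
```

In a constant field aligned with the z-axis, for example, a streamline seeded at the bottom of the volume marches straight to the top face and terminates there.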
Results of applying fibre tracking to both the noisy and denoised vector fields for each muscle are shown. Fibre tracking terminated at the tendon section of each muscle mesh because of the narrow structure of the geometry.

Figure 4.12: The relative position of the acquired DWI scan as well as the muscle surfaces used for denoising and fibre tracking. The first, last and an arbitrarily chosen middle slice of the dataset are shown. The displayed muscles are the brachioradialis (red), extensor carpi radialis longus (green) and extensor carpi radialis brevis (blue).

The figure is oriented in the same manner as Figure 4.12 such that its left edge is distal and the right edge is proximal. Most of the ECRL muscle body fell outside of our DWI field of view. However, we do observe that the small fibres extracted from the denoised data do not exhibit the curled trajectories of those in the noisy data. The general behavior of the fibres is similar, moving from the proximal to distal side of the muscle where the ECRL inserts into the tendon that connects to the second metacarpal [99]. The removal of curl from the vector field is to be expected given the previously discussed formulation of the denoising algorithm. The ECRB data shows the effect of our algorithm more clearly: the chaotic nature of the noisy streamlines is greatly smoothed out. Furthermore, the streamlines of the denoised ECRB can be observed converging at the distal end of the muscle, where the tendon that connects the muscle to the third metacarpal should arise [99]. In the noisy data this convergence is far less obvious. The bifurcated nature of the muscle fibres towards the distal end is similar to the fibre architecture observed by Ravichandiran et al. [84]. The denoised fibres extracted from the BR all curve towards a common insertion point at the distal end of the muscle [99], whereas the noisy data exhibits fibres that appear to diverge (particularly at the proximal edge of the image).
Note that the denoised fibres are corrected so that they follow a path towards the distal end of the muscle. This fibre configuration is in agreement with the anatomical description of the BR in that the muscle inserts into the brachioradialis tendon at its distal end [99].

Figure 4.13: Comparison of noisy and denoised (CRSS) vector field results using streamlines. Available portions of three muscles are shown: the distal end of the ECRL (Extensor Carpi Radialis Longus), and larger portions of the ECRB (Extensor Carpi Radialis Brevis) and the BR (Brachioradialis). The left side of the image is towards the distal end of the arm and the right-hand side is towards the proximal end.

Extraction of a physiologically plausible, noise-reduced vector field from noisy tensor data is not often explored. Both Basu et al. [10] and Fillard et al. [35] use probabilistic estimators to remove noise from the tensor image. They assume that the noise has a Rician distribution. These methods have the advantage of not requiring an anatomical segmentation to produce good results. They also work well on DWI images from any anatomical region, whereas the presented algorithm is designed specifically for musculoskeletal DTI. However, our method uses physical intuition to produce denoised results and thus should work well even when image noise is not Rician. Furthermore, Basu et al. [10] demonstrate results down to an SNR of 12.5 but only present tensor metrics, such as fractional anisotropy, as results. This makes direct comparison impossible because our algorithm produces vector fields and not tensors as output. They do provide images of the algorithm applied to synthetic images with an SNR of 15. We provide results for synthetic data with SNR as low as 5 and show that the algorithm computes reasonable results.
Finally, the presented algorithm produced quantitatively better results on three of four synthetic datasets than the state-of-the-art JRLMMSE algorithm of Tristán-Vega and Aja-Fernández [102] (Table 4.1). The similarity difference for the fourth dataset was 0.003. These results indicate that the presented method competes well with tensor denoising algorithms in terms of resilience to noise. However, it is important to note that the presented method does not replace existing tensor denoising algorithms in the imaging pipeline. The denoised vector fields offer both resistance to noise and, due to the divergence free constraint, are optimally "muscle like". This algorithm could be applied to previously denoised tensors to yield a vector field that is more accurate than what could be achieved with either algorithm separately. This vector field could then be used for fibre tracking and tractography in conjunction with the denoised tensor field.

4.4.3 Limitations and Future Work

The presented algorithms have two main limitations. The first is that the underlying theory assumes that a significant part of the vector field to be reconstructed is smooth. While we have shown good results on synthetic data with a discontinuity (Figure 4.10 and Figure 4.11), there is no way to entirely prevent smoothing. In practice this is less of a problem since we segment and process muscles separately and thus avoid the discontinuities caused by boundaries between muscles. The second limitation stems from the fact that we are reconstructing conservative vector fields. Conservative vector fields represent all irrotational flows within a simply connected domain, a domain without holes. The anatomical structure of most skeletal muscles observes this property; however, there are some exceptions such as sphincters, the ciliary muscle in the eye and the heart. A productive direction of future work would be to relax this restriction.
The reliance on an anatomical segmentation is also a limitation. This algorithm's primary use is to provide more accurate vector fields for fibre tracking. It is fair to assume that such a segmentation will be available because fibre tracking algorithms often require a segmented surface as a constraint.

While the results produced by the algorithm are promising, there are several areas of future work we intend to explore. Firstly, the parameter σ is currently user defined. It would be advantageous to incorporate this into the optimization scheme using a prior model. Secondly, we would like to modify our MRI acquisition protocol in order to capture entire forearm muscles in the diffusion scan. This would allow the extraction of denoised fibre fields from entire muscles, facilitating the computation of physiological parameters and enabling further validation of the denoised fibre field results against the existing literature [59, 84]. The effect of vector field noise on these physiological parameters could then be examined. Thirdly, the algorithm requires an SPD tensor field as input. Extending the algorithm to operate directly on diffusion weighted images could simplify vector field extraction further as well as lead to increased resistance to noise. Finally, we also intend to perform computational simulations of muscle behavior and examine how extracted fibre architectures affect the function of muscle.

Despite these limitations, the results for both synthetic and anatomical data experiments are promising. When run on synthetic data both the deterministic algorithm and its large scale variant show good similarity (> 0.9; Figures 4.4, 4.5, 4.6 and 4.7) for smoothing parameter values of 10 or greater. These results were visually verified as well (Figures 4.8, 4.9, 4.10 and 4.11). Furthermore, fibres extracted from denoised vector fields computed from human forearm DTI data show greater smoothness than those extracted from the noisy data.
The results for the ECRB and the BR are especially interesting because the denoised fields appear to either correct errors in the noisy vector fields (BR) or highlight salient features of the fibre architecture, as in the ECRB case (Figure 4.13).

4.5 Conclusion

We have designed and implemented algorithms for constructing denoised skeletal muscle fibre vector fields from noisy diffusion tensor images. Using synthetic data we have shown that the denoised vector fields provide a much better approximation (> 0.9 similarity with σ ≥ 10) to ground truth noiseless data for SNR values greater than or equal to 5. Furthermore, we have shown that the algorithms offer denoising performance exceeding that of a state-of-the-art DWI image filter at low SNR for muscle-like datasets. Finally, we show that the algorithms produce qualitatively plausible results when applied to DTI data of a human forearm.

Chapter 5

Eulerian Simulation of Volumetric Objects

In this chapter we will describe our work on the simulation of volumetric data, which forms the final block of the proposed pipeline outlined in Figure 1.2. The simulator described below is designed in accordance with a series of observations brought about by our previous work on musculoskeletal imaging. First, the acquired muscle data is entirely volumetric, and generating a Lagrangian, geometric representation would require additional, complicated algorithms. Second, muscles are soft, viscoelastic, highly deformable objects in close contact. Contact between nearby structures can lead to element inversion and instability in the simulation. Finally, collision detection and resolution further complicates matters by requiring additional data structures to detect objects in contact. With this in mind we were motivated to develop an Eulerian method for simulating viscoelastic solids (such as human tissue) undergoing large deformation.
The method utilizes a spatial grid as both the simulation and collision data structure and can provide subgrid collision detection and response. No explicit meshes of the deformable objects are required, which makes the algorithm ideal for simulating volumetric data such as that acquired from CT and MRI. Furthermore, the fixed nature of the grid makes this method trivially parallelizable and we demonstrate this with a Graphics Processing Unit (GPU)-based implementation.

5.1 Contributions

Our main contribution is an Eulerian simulator for large deformation, volumetric solids in close contact, with arbitrary constitutive models. We also detail a method for computing strain directly with respect to an object's reference configuration. This avoids unintended plasticity due to numerical approximation. In addition, we derive a general contact formulation for Eulerian simulations that is based on the volume of overlap of multiple objects and Gauss' principle of least constraint. We present an implementation that features subgrid collision detection and resolution. In order to make simulations less time consuming, an adaptive time stepping scheme for constrained systems is developed. Finally, we present a parallel implementation of the algorithm using the GPU.

5.2 Eulerian Simulation

Continuum simulation of materials is concerned with developing a mapping between an object's undeformed, or reference, configuration and its deformed, or spatial, configuration. Specifically, given some particle at position X in the reference state, we wish to compute its spatial position x. This can be done by integrating the momentum equation, given by

ρ dv(X)/dt = ∇ · σ + ρb.   (5.1)

Here ρ is the density of the solid, v is the particle's velocity, dv/dt is the particle's acceleration, σ is the Cauchy stress and b is the acceleration due to body forces (like gravity).
Solving the same equation in an Eulerian sense requires expressing Equation 5.1 in terms of the spatial variables x as opposed to the reference coordinates X. Restating the momentum equation thusly leads to

ρ (∂v/∂t + v · ∇v) = ∇ · σ + ρb   (5.2)

where the left-hand side of the equation has been modified to include the material derivative of v. Integrating this equation yields a velocity field in space through which the object's mass can be advected. To compute v we must first compute the stress acting on the object. For solids this requires estimating the strain E, which is a function of both x and X. In typical Eulerian simulators X is not available, but here we reintroduce it to the simulation framework. The properties of the object are all functions of the form P(X). By expressing X as X(x) we gain access to the object's properties from the spatial domain, no matter what the deformed configuration. Because X is a reference coordinate we can write

dX/dt = 0.   (5.3)

Applying the material derivative to this yields

∂X/∂t + v · ∇X = 0.   (5.4)

We use this equation to advect the object's reference coordinates. The availability of X at each spatial point will facilitate elastic stress computation. This approach has been previously demonstrated for 2D hyperelastic simulation [51]. However, unlike that method we do not advect other quantities along with the reference coordinates. Instead we advect only the reference coordinates and access other field variables in a manner akin to texture mapping. Aside from reducing memory usage and per time-step work, a major advantage of this approach is that it allows us to reconstruct sharp object boundaries, something that will be exploited later for collision detection and response.

5.2.1 Strain Calculation

Calculating the stress acting on a solid object in its deformed configuration requires first calculating the strain.
The Green finite strain tensor is defined as

E = (1/2)(F^T F − I)   (5.5)

where F is the deformation gradient and I is the identity matrix. The deformation gradient is defined as

F = ∇_X x,   (5.6)

the gradient of the spatial coordinates with respect to the reference coordinates. In the Eulerian method we cannot compute F directly, but rather its inverse, given by

F^{−1} = ∇_x X,   (5.7)

the spatial gradient of the function X. F^{−1} can then be inverted to obtain F and finally the finite strain can be computed using Equation 5.5. Given the strain E, any arbitrary constitutive relationship can be used to compute the stress σ(E(x)). Computing the Green finite strain tensor allows the use of familiar Lagrangian constitutive models within the Eulerian simulator. However, any strain tensor (including the Almansi strain tensor) could be used because both F and F^{−1} are available.

5.2.2 Notation

Subsequent to this section all spatial derivatives are taken with respect to the spatial coordinates, x, unless otherwise stated. Furthermore, we will use a superscripted dot to indicate the total derivative with respect to time (e.g., v̇) and a superscripted prime to indicate the partial derivative with respect to time (e.g., v′). We will use D(P) to denote the discrete Jacobian matrix of a function P, in which the partial derivatives are evaluated using the standard, first-order upwind differencing scheme.

5.3 Simulating a Single Object

Our algorithm begins by discretizing the simulation domain into a regular 3D grid (G) with user defined spacing (∆x, ∆y, ∆z). We store v and X at the nodes of each grid cell. Partial derivatives of X can then be computed at cell edges using centered differences and interpolated to the cell center in order to reconstruct F^{−1}. We discretize Equation 5.2 using a simple Finite Volume Method (FVM). We note that other schemes could easily be substituted and that we have chosen the current one for ease of implementation.
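The strain computation of Section 5.2.1 can be sketched on such a grid as follows. This is only an illustration with unit grid spacing; NumPy's centred differences stand in for the edge-centred stencil described above:

```python
import numpy as np

def green_strain(X):
    """Green strain from a reference-coordinate field X of shape (Nx,Ny,Nz,3),
    unit grid spacing assumed.

    Finv[..., i, j] = dX_i / dx_j, the spatial gradient of X (Equation 5.7).
    Inverting per cell gives F, from which E = (F^T F - I)/2 (Equation 5.5).
    """
    Finv = np.stack(np.gradient(X, axis=(0, 1, 2)), axis=-1)
    F = np.linalg.inv(Finv)                  # batched 3x3 inversion per cell
    Ft = np.swapaxes(F, -1, -2)
    return 0.5 * (Ft @ F - np.eye(3))
```

For the identity reference map (undeformed object) the strain vanishes everywhere; scaling the reference coordinates uniformly corresponds to a uniform stretch or compression and yields a constant diagonal strain.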
Rectangular integration regions (cubes in 3D) are defined at each node by adjacent grid cell quadrants (octants in 3D). Figure 5.1 shows a 2D integration region. Assuming constant density inside an integration region (Ω) and applying the divergence theorem to the integral form of Equation 5.1 yields

m(i, j, k) v̇(i, j, k) = ∫_{∂Ω} σ · n dS + f(i, j, k)   (5.8)

where m(i, j, k) is the mass of the grid node indexed by the triple (i, j, k), v̇(i, j, k) is the acceleration and f(i, j, k) are the applied body forces. In order to compute the surface integral in Equation 5.8 we follow the work of Teran et al. [96] and express the traction (the force per unit area exerted on each grid cell wall) contribution of each cell on the central node as

τ_c = −(1/2) L (σ · n_A + σ · n_B),   (5.9)

where L is the length of a grid edge. The surface integral in Equation 5.8 can be expressed as

∫_{∂Ω} σ · n dS = ∑_c τ_c   (5.10)

in the 2D case, where c is one of the four cells neighboring the node (Figure 5.1). In 3D, Equation 5.9 becomes

τ_c = −(1/4) A (σ · n_A + σ · n_B + σ · n_C)   (5.11)

where the surface normals are the outward facing normals of each grid cell face and A is the area of a cell face. Expanding v̇ in Equation 5.8 and moving the advective term to the right-hand side of the equation yields

m v′ = ∑_c τ_c − m D(v)v + f   (5.12)

where D(v)v is the discrete matrix-vector form of v · ∇v described in Sec. 5.2.2. Applying the first order (forward Euler) discretization to v′ in Equation 5.12 and collecting the equations for each grid cell yields the linear system

M v^{t+1} = f*   (5.13)

where M is the mass matrix (which is diagonal), v^{t+1} is the velocity field at time t + 1 and

f* = M v^t − ∆t (M D(v^t) v^t − τ − f)   (5.14)

where τ is now the vector of forces resulting from internal stresses, D(v^t)v^t is the vector of advective forces, f is the vector of body forces and ∆t is the time step size.
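A minimal sketch of the explicit update of Equations 5.13 and 5.14, assuming the traction vector τ, the body forces f and the advective term have already been assembled per node (the names and data layout here are illustrative):

```python
import numpy as np

def velocity_update(m, v, tau, f, adv, dt):
    """Explicit update M v^{t+1} = f* (Equations 5.13-5.14) with a lumped
    (diagonal) mass matrix, so the solve is a per-node division.

    m:   (n,)  nodal masses
    v:   (n,3) nodal velocities at time t
    tau: (n,3) internal stress forces, f: (n,3) body forces
    adv: (n,3) advective term D(v^t) v^t per node
    Inactive nodes (m == 0) are excluded from the solve (their velocity is
    later filled in by extrapolation); they are returned as zero here.
    """
    active = m > 0
    f_star = m[:, None] * v - dt * (m[:, None] * adv - tau - f)
    v_new = np.zeros_like(v)
    v_new[active] = f_star[active] / m[active, None]
    return v_new
```

With no forces and no advection the active velocities are unchanged, and a constant body force produces the expected v + ∆t f/m step.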
In order to make this system non-singular we remove equations corresponding to inactive nodes (m(i, j, k) = 0); the resulting v^{t+1} is therefore only valid at active nodes. We extrapolate this velocity field to inactive grid cells using the method of Zhu and Bridson [116]. Because the reference coordinates are maintained for each grid cell they do not require extrapolation. However, we enforce a Neumann boundary condition on the reference coordinates during advection: specifically, that the gradient of the reference coordinates be preserved at the boundary of the domain.

In order to update the reference coordinates we discretize Equation 5.4, again using the first-order upwind differencing scheme. This yields the update equation

X^{t+1} = X^t - \Delta t \, D(v^{t+1}) X^t. \quad (5.15)

Since we utilize an Eulerian advection step, we avoid performing the additional collision detection between particles or surfaces used in semi-Lagrangian methods.

5.4  Simulating Multiple Objects

The velocity field defined on G is smooth. This does not allow relative tangential motion between objects, since the necessary discontinuity cannot be represented. To alleviate this we introduce a ghost cell based method for handling contact constraints for deformable solids in an Eulerian simulation. The approach is similar to that of Robinson-Mosher et al. [85]; however, we present a more general constraint formulation that accounts for the region of intersection of the deformable solids inside each grid cell and allows the use of an arbitrary discretization and arbitrary interpolating functions. This approach also provides a simple type of self-collision prevention. The reference coordinate field defined on G is single valued. If an object on G is deformed such that two unconnected regions are colliding within the same grid cell, then the space between these two regions must be compressed. This compression is detectable via the computed strain and will result in a repulsive stress.
The Eulerian simulator can thus be thought of as applying penalty-based self-collision handling.

Figure 5.1: Top Left: A 2D example of the integration region for finite volume stress computation (blue square). The red node denotes position (i, j, k) on G. Numbers 1, 2, 3 and 4 denote the four grid cells which intersect the integration region. Top Right: A diagram of the intersection of two objects within a spatial cell. Bottom: The ghost cell representation. The velocities of each object are stored at the vertices of their respective ghost cells.

5.4.1  Velocity Level Constraints

For the ith object in the simulation domain one can define the velocity field ^i v, the reference coordinate ^i X and the spatial coordinate ^i x. Furthermore one can define the following indicator function

^i M(^i X(^i x)) = \begin{cases} 1 & \text{if the object has mass at } ^i x \\ 0 & \text{otherwise.} \end{cases} \quad (5.16)

The intersection function of two objects, p and q, inside the grid cell r, can then be defined as

^{pq}_r I = \int_{\Omega_r} {^p M} \cdot {^q M} \, d\Omega, \quad (5.17)

which is the volume of intersection when the two objects overlap inside the grid cell and zero otherwise (Figure 5.1). If Equation 5.17 is greater than 0, then inequality constraints of the form ^{pq}_r \dot{I} < 0 will lead to the appropriate inequality constraints on ^p v and ^q v. The derivative of Equation 5.17 with respect to time is

^{pq}_r \dot{I} = \int_{\Omega_r} \left( {^p \dot{M}} \cdot {^q M} + {^p M} \cdot {^q \dot{M}} \right) d\Omega. \quad (5.18)

To simplify notation one can define the function ^{pq}_r T as

^{pq}_r T = \int_{\Omega_r} {^p \dot{M}} \cdot {^q M} \, d\Omega. \quad (5.19)

Application of the chain rule to Equation 5.19 yields the result

^{pq}_r T = -\int_{\Omega_r} \nabla {^p M} \cdot \left( {^q M} \, {^p v} \right) d\Omega. \quad (5.20)

Recall that M is a function which indicates whether an object occupies the spatial point x. We can use this definition to rephrase Equation 5.20 as

^{pq}_r T = -\int_{\Omega_r \cap \Omega_q} \nabla {^p M} \cdot {^p v} \, d\Omega \quad (5.21)

where \Omega_r \cap \Omega_q is the volume of the grid cell, r, occupied by object q.
Applying the divergence theorem to Equation 5.21 yields

^{pq}_r T = -\int_{\Gamma_{r \cap q} \cap p} {^p v} \cdot n \, d\Gamma + \int_{\Gamma_{r \cap p \cap q}} {^p v} \cdot n \, d\Gamma \quad (5.22)

where \Gamma_{r \cap q} \cap p is the segment of the boundary of the volume of intersection of grid cell r and object q intersecting volume p, and \Gamma_{r \cap p \cap q} is the boundary of the volume of intersection of r and objects p and q. The volume defined by the intersection of r, p and q is the original volume of intersection (g) of objects p and q within r. Using this we can approximate the above equation as

^{pq}_r T = \int_{\Gamma_{p \cap g}} {^p v} \cdot n \, d\Gamma. \quad (5.23)

Finally we can express ^{pq}_r \dot{I} as

^{pq}_r \dot{I} = {^{pq}_r T} + {^{qp}_r T}. \quad (5.24)

Expressing ^p v and ^q v as a linear combination of shape functions within each cell yields

^{pq}_r \dot{I} = \sum_{s=1}^{n} \left( {^p v_s} \int_{\Gamma_{p \cap g}} \phi_s \cdot n \, d\Gamma + {^q v_s} \int_{\Gamma_{q \cap g}} \phi_s \cdot n \, d\Gamma \right) \quad (5.25)

where s indexes each grid cell node and \phi_s are shape functions centered at each node s. This can then be expressed in matrix form as

^{pq}_r \dot{I} = {^{pq}_r j} \, v \quad (5.26)

where r is a grid cell of G, ^{pq}_r j contains the integrals in Equation 5.25 evaluated in r, and v is the vector of nodal velocities. We use linear shape functions to interpolate velocities and incorporate the constraints into the simulation using ghost cells [34] to represent the discontinuous velocity field in spatial cells containing more than one object (Figure 5.1).

5.4.2  Gauss' Principle of Least Constraint

To solve for v^{t+1} in the presence of contact constraints we use a discrete form of Gauss' principle of least constraint [66]. Substituting the forward Euler discretization of acceleration into Gauss' principle results in the following Quadratic Program (QP)

v^{t+1} = \arg\min_v \frac{1}{2} v^T M v - v^T f^* \quad \text{s.t.} \quad J v \le b, \quad (5.27)

where J = \left[ j_1^T, \ldots, j_m^T \right]^T is an m \times n matrix. The first line of the equation is just a variational form of Equation 5.13.
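As a toy illustration of this QP (a hedged sketch, not the thesis GPU solver), with a diagonal mass matrix and a single constraint row the minimizer can be written in closed form: if the unconstrained solution v = M^{-1} f^* already satisfies the constraint, it is optimal with multiplier zero; otherwise the KKT conditions give the multiplier directly.

```python
# Closed-form solution of  min 1/2 v^T M v - v^T f*  s.t.  j.v <= b
# for diagonal M and one constraint row j (illustrative helper, names
# invented here). When the constraint is active,
#   lambda = (j M^{-1} f* - b) / (j M^{-1} j^T),  v = M^{-1}(f* - j^T lambda).

def solve_qp_single(m, f_star, j, b):
    v_free = [fi/mi for fi, mi in zip(f_star, m)]       # unconstrained minimizer
    jv = sum(ji*vi for ji, vi in zip(j, v_free))
    if jv <= b:
        return v_free, 0.0                              # constraint inactive
    s = sum(ji*ji/mi for ji, mi in zip(j, m))           # j M^{-1} j^T
    lam = (jv - b) / s                                  # contact traction, > 0
    v = [(fi - ji*lam)/mi for fi, mi, ji in zip(f_star, m, j)]
    return v, lam

# Two unit-mass nodes approaching each other head on (relative normal
# velocity constrained non-positive): the solver removes exactly the
# violating component, leaving both nodes stopped.
v, lam = solve_qp_single([1.0, 1.0], [1.0, -1.0], [1.0, -1.0], 0.0)
# v == [0.0, 0.0], lam == 1.0
```

The positive multiplier is the frictionless contact pressure mentioned below Equation 5.27; in the full method there are many such rows solved simultaneously.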
The inequality constraints have a clear physical interpretation: they are per-cell contacting surface elements, and their associated Lagrange multipliers are contact tractions (pressures). This method correctly computes the velocities due to frictionless contact; extension to contact with friction is possible using the staggered projections method of Kaufman et al. [52]. Since we explicitly deal with inequality constraints using a QP, the constraint sticking problem of methods that approximate these as equality constraints is avoided.

5.4.3  Post-Stabilization

The velocity level constraints in Equation 5.27 are derived from positional constraints, and it is these positional constraints that need to be satisfied. Accumulation of error from solving the velocity level QP can cause drift at the position level. Post-stabilization is a correction applied at the position level in order to ensure the constraints remain satisfied. We follow the post-stabilization approach of Cline and Pai [23]. After updating X (Equation 5.15) we solve for a correction to the spatial coordinates using

\delta x = -M^{-1} J^T \left( J M^{-1} J^T \right)^{-1} g \quad (5.28)

where \delta x is the correction and g is a vector of volumes of intersection computed using Equation 5.17. \delta x can be applied using the update equation

X_{\text{stabilized}}(x) = X(x - \delta x). \quad (5.29)

5.4.4  Adaptive Time Step

The use of the first-order upwind Eulerian advection step imposes the strict stability requirement that

\Delta t \, \max\left( \frac{|v_x|}{\Delta x}, \frac{|v_y|}{\Delta y}, \frac{|v_z|}{\Delta z} \right) = \alpha \quad (5.30)

where \alpha < 1. v is determined by the forces acting on the deformable object, and so one must choose a sufficiently small \Delta t such that v is prevented from becoming catastrophically large no matter what forces may arise over the course of a simulation. In the best case this leads to an unreasonably small \Delta t for the entire simulation, and in the worst case it leads to instability and failure.
This problem is compounded for simulations involving contact, because arbitrarily large contact forces may arise unpredictably during any time step. However, it is possible to choose a time step such that the velocity field produced will always satisfy Equation 5.30. We begin from the optimality conditions of our QP (Equation 5.27):

\begin{pmatrix} M & J^T \\ J & 0 \end{pmatrix} \begin{pmatrix} v^{t+1} \\ \lambda \end{pmatrix} = \begin{pmatrix} f^* \\ b \end{pmatrix} \quad (5.31)

where \lambda is the vector of Lagrange multipliers. Recalling that f^* is a function of \Delta t and using the Schur complement of Equation 5.31, we can write

v^{t+1} = {^0 v} + \Delta t \, {^1 v} \quad (5.32)

where ^0 v and ^1 v are basis vectors for the optimal solution. These vectors can be determined by performing two QP solves, one with \Delta t = 0 and one with \Delta t = 1, and subtracting the results. Note that with our solver the second velocity solve is cheap, since the Schur complement is already factorized (see Section 5.5.1). Combining Equation 5.30 and Equation 5.32 we can express the stable \Delta t for each grid node as the non-negative root of the quadratic

\gamma_1 \Delta t^2 + \gamma_0 \Delta t - \alpha = 0 \quad (5.33)

where \gamma_i = \frac{|{^i v_x}|}{\Delta x} + \frac{|{^i v_y}|}{\Delta y} + \frac{|{^i v_z}|}{\Delta z}, and \alpha is chosen by the user. Choosing the minimum \Delta t of all those computed satisfies Equation 5.30. Osher and Fedkiw [76] recommend 0.5 \le \alpha < 0.9. In practice we've found values in this range to be unconditionally stable.
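Two of the small per-step computations above can be sketched directly: the post-stabilization correction of Equation 5.28 specialized to a single constraint row, and the per-node stable time step of Equation 5.33. These are illustrative helpers under simplifying assumptions (diagonal mass matrix, one constraint), not the thesis GPU code.

```python
import math

def post_stabilize(m, j, g):
    """delta_x = -M^{-1} j^T (j M^{-1} j^T)^{-1} g for one constraint row j
    against a diagonal mass matrix m; g is the measured intersection
    volume (Eq. 5.28 with a scalar Schur complement)."""
    s = sum(ji*ji/mi for ji, mi in zip(j, m))   # j M^{-1} j^T
    lam = g / s
    return [-(ji/mi)*lam for ji, mi in zip(j, m)]

def stable_dt(v0, v1, h, alpha=0.5):
    """Non-negative root of gamma1 dt^2 + gamma0 dt - alpha = 0 (Eq. 5.33)
    for one node, given the solution basis v^{t+1} = v0 + dt*v1 (Eq. 5.32).
    v0, v1 are 3-vectors; h = (dx, dy, dz)."""
    g0 = sum(abs(c)/s for c, s in zip(v0, h))
    g1 = sum(abs(c)/s for c, s in zip(v1, h))
    if g1 == 0.0:
        return alpha/g0 if g0 > 0.0 else float('inf')
    return (-g0 + math.sqrt(g0*g0 + 4.0*g1*alpha)) / (2.0*g1)
```

By construction the correction satisfies j · δx = −g, removing the measured intersection volume to first order; in the full method the per-node time steps are reduced with a minimum over the whole grid.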
5.5  GPU Implementation

Algorithm 3 Eulerian Solids Simulation Step
1:  for each object, i do
2:    Compute ^i M by subsampling ^i r_0
3:    ^i \bar{f} = ^i M D(^i v^t) ^i v^t
4:    ^i \bar{f} -= ^i \tau (computed using Equation 5.11)
5:    ^i \bar{f} -= ^i f
6:  end for
7:  // Only add entries corresponding to non-zero entries in ^i M
8:  Build M and \bar{f} from all ^i M and ^i \bar{f}
9:  Build collisionList using Le Grand [55]
10: for each C in collisionList do
11:   p = C.p, q = C.q, r = C.r
12:   Compute ^{pq}_r j using Equation 5.25 and add to J
13: end for
14: f^* = M v^t - \Delta t \, \bar{f}
15: Compute \Delta t using Equation 5.33
16: v^{t+1} = {^0 v} + \Delta t \, {^1 v}
17: X^{t+1} = X^t - \Delta t \, D(v^{t+1}) X^t
18: \delta x = -M^{-1} J^T (J M^{-1} J^T)^{-1} g
19: X^{t+1}_{\text{stabilized}}(x) = X^{t+1}(x - \delta x)

We have implemented our algorithm on the GPU using the CUDA API from Nvidia [75] (Algorithm 3). To simplify implementation on the GPU we simulate each deformable object on its own grid, ^i G, with ^i v and ^i X at the corners of each grid cell. We store a high-resolution reference volume, ^i r_0 (which contains a per-voxel map of the object id i, ^i M, the object density ρ and an index to the object's constitutive model) as a 3D texture. In order to compute ^i M we subsample ^i r_0, using ^i X to perform dependent texture lookups within the integration region for each cell node, and then compute the total mass at a node using a parallel prefix scan [44]. Due to object deformation, the volume of each subsample in the reference domain must be corrected in order to keep the mass of the object constant throughout the simulation. This is done by multiplying the spatial volume of the sample by det(F^{-1}), in parallel for all nodes. Separate kernels are used to compute E, σ and ^i \bar{f}. M and \bar{f} are computed by parallel reduction. A list of cells containing multiple objects (collisionList in Algorithm 3) is built using the method of Le Grand [55]. Each collision in the list is evaluated in parallel to compute Equation 5.25 numerically.
We approximate the surface integral in Equation 5.25 by subsampling ^p M and ^q M, where p and q are potentially colliding objects. Surface normals are approximated using centered differences. The Schur complement is factored once and the factors are used twice in order to determine the appropriate time step; finally, v^{t+1} is computed (Equation 5.32) and extrapolated to the rest of ^i G. Finally, Equation 5.15 is used to update ^i X. Post-stabilization (Equation 5.29) can be applied using a single texture lookup per object.

5.5.1  Quadratic Program Solver

We ameliorate the performance issues of solving large QPs by implementing a GPU-accelerated solver, based on a primal-dual interior point algorithm that uses Mehrotra's prediction-correction method [64] to speed up convergence. Following Nocedal and Wright [74], the slack s = b - Jv and dual variable \lambda are computed each iteration, as well as the maximum step sizes each can take while retaining positivity: \alpha_s(\Delta s) and \alpha_\lambda(\Delta\lambda), respectively. The maximum step size is then

\alpha = \min(\alpha_s, \alpha_\lambda). \quad (5.34)

The duality measure \mu := \frac{\langle s, \lambda \rangle}{m} is used as a stopping criterion. The pseudo-code is given below.

Algorithm 4 Primal-Dual Interior Point
1: \chi = (v, \lambda, s)
2: while \mu > tol do
3:   Compute the Newton step \Delta\chi = (\Delta v, \Delta\lambda, \Delta s)
4:   Compute \alpha using Equation 5.34
5:   Set the centering parameter \sigma := (\mu_{\text{aff}} / \mu)^3
6:   Use \Delta\chi and \sigma to compute the correction step \Delta\chi = (\Delta v, \Delta\lambda, \Delta s)
7:   Compute \alpha using Equation 5.34
8:   \chi = \chi + \alpha \, \Delta\chi
9: end while

The major computational effort of the algorithm resides in solving two indefinite linear systems of the form

\begin{pmatrix} M & J^T \\ J & -\Lambda \end{pmatrix} \begin{pmatrix} \Delta v \\ \Delta\lambda \end{pmatrix} = \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} \quad (5.35)

where \Lambda is a diagonal matrix with \Lambda_{ii} = s_i / \lambda_i. Due to the decoupled structure of the mass matrix M, the Schur complement J M^{-1} J^T is relatively inexpensive to compute once per solve; it is also used in the post-stabilization step.
The resulting augmented system is SPD:  (Λ + JM−1 JT )∆λ = r2 − JM−1 r1  (5.36)  From which ∆v can be easily computed. The Cholesky solver of CULA [46] was used to solve Equation 5.36. Cusp [12], a simple sparse package developed for CUDA, was employed to efficiently store the sparse matrix J and perform elementary matrix operations.  5.6  Results  All simulations were performed with an isotropic grid cell size of 2cm and an adaptive time stepping parameter of 0.5. A Windows XP workstation with an Intel Core i7 860 (2.80Ghz) CPU, 3.24GB of RAM and an Nvidia GTX 470 GPU was used to generate all results. When necessary for display, surface meshes were generated on the GPU using the Marching Cubes implementation available in the CUDA SDK [75].  70  5.6.1  Simulating a Single Object  Figure 5.2 shows two balls of differing material properties falling onto a rigid box. Both examples were simulated using the same simulator parameters; only the material properties were changed, with the green ball being highly viscous and the blue ball being softly elastic. This example shows that the simulator handles a variety of material properties robustly even in cases of extreme deformation (the blue ball has collapsed under its own weight).  Figure 5.2: Two balls with different material properties fall onto a rigid box under the influence of gravity. The blue ball is a soft elastic ball while the green ball has a high viscosity.  Figure 5.3 shows a bunny, instantaneously squashed flat by scaling its reference coordinates by 40 times in the z direction. The Eulerian simulator does not require any remeshing or additional processing to produce this result. Figure 5.4 shows the results of stretching a deformable dragon to twice its width by scaling the reference coordinates by 0.5 times in the x direction. Upon running the simulation both the volume and shape of the dragon return to their reference configuration. 
This highlights the benefit of explicitly tracking X, as no artificial plasticity has been introduced over the course of the simulation.

Figure 5.3: A bunny is instantaneously compressed flat and allowed to return to its original shape.

Figure 5.4: Deformed objects return to their reference shapes and volumes even after experiencing extreme deformations. Volume Ratio is the ratio of the object's deformed volume to its reference volume.

5.6.2  Simulating Multiple Objects

Figure 5.5 shows an impact between two highly deformable bunnies. Despite the absence of an explicit surface mesh, the constraint formulation accurately resolves contact between the objects. Note how the bunnies' ears intertwine and slide against each other but do not interpenetrate. With post-stabilization enabled (Figure 5.6A) contact between the bunnies' ears is resolved properly. Without stabilization (Figure 5.6B) the ears pass through each other due to constraint drift.

Figure 5.5: Two highly deformable bunnies undergo a violent collision and return to their original shapes.

Figure 5.7 shows the effect of the adaptive time stepping algorithm (Section 5.4.4). Initially the undeformed objects travel towards each other with constant velocities. This admits a large time step. At the instant of the collision the time step decreases sharply in response to the large contact forces generated. As the object deforms further, the time step decreases slightly but then increases again in response to the removal of the contact forces and the object dissipating its internal stresses. Note that the object never fully resolves its internal stresses and so the time step never fully returns to its original level. However, the mean ∆t computed by the algorithm is 9 times greater than the minimum time step, and this illustrates the two-fold advantage of the adaptive time stepping scheme. Firstly, the large variance in time
step sizes reinforces the difficulty in choosing the parameter manually. In complex simulations contacts and deformation can occur unpredictably, obscuring this minimum threshold. Choosing an inappropriate time step size for each iteration can lead to failure and/or a waste of computational resources. Secondly, the time step need only be small at specific times during the simulation. Fixing the time step at this level would lead to long simulation times.

Figure 5.6: The effect of post-stabilization on collision response. Frame (A) shows the bunny-bunny impact simulated with post-stabilization and (B) shows the same point in the impact with stabilization disabled.

Figure 5.8 shows two examples of deformable objects interacting with scripted rigid bodies. The left subfigure shows a dragon sliding down two stationary ramps. This example shows that our constraints correctly decouple the tangential velocities of the rigid and deformable objects. Additionally, because we use inequality constraints instead of equality constraints, the decoupling of normal velocities is handled automatically by the constraint solver. This allows the dragon to slide off the edge of the first ramp, impact the second ramp and then continue sliding out of the scene. The right subfigure of Figure 5.8 shows a deformable bunny struck by a high-speed spinning bar. The bar leaves a sharp indentation in the bunny which is visible directly after separation. Dynamic effects such as this lend visual richness to the simulations.

Figure 5.7: The size of the adaptive time steps over the course of the two bunny simulation. The horizontal black lines denote the mean and minimum time steps computed for this simulation. Frames from the simulation at specific times are also shown.

Figure 5.9 shows the results of direct simulation from volumetric data. Here we load a CT scan of a teddy bear [86].
The only preprocessing required is to select a threshold which defines active voxels. Material properties can be inferred from the values of the underlying voxel data or assigned by the user. With our method such volumetric data can be immediately and robustly simulated, with collision detection and contact resolution.

Figure 5.8: Left: A dragon slides down two frictionless ramps. Right: A bunny is struck by a quickly rotating rigid body.

Figure 5.9: This teddy bear is simulated directly from CT data. Notice that a section of the CT bed is included in the dataset.

5.6.3  GPU Implementation

To test the scaling of the GPU algorithm we constructed stacks of elastic slabs in constant contact. The height of the stack determines both the number of Degrees of Freedom (DOF) and the number of constraints in the QP. The scaling of the algorithm is promising (Figure 5.10). Though the runtimes preclude interactive usage, we feel that the overall robustness of the system compensates for this. Table 5.1 and Table 5.2 give memory usage and timings for all examples presented in this paper.

Figure 5.10: The average time taken to compute one time step when simulating a stack of elastic slabs of varying height.

Figure 5.11 shows the performance of the GPU QP solver for problems with 10^4, 10^5 and 10^6 DOFs and a varying number of constraints. The performance of a CPU implementation of the same algorithm is given for comparison. It is important to note that the performance of the GPU algorithm is essentially independent of the number of DOFs in the problem. This stems from the use of the Schur complement at the heart of the solver. The size of the Schur complement is solely dependent on the number of constraints. With respect to the number of constraints, the performance of the CPU algorithm deteriorates more quickly than that of the GPU algorithm.
For a 1000 constraint problem the GPU algorithm is 9 times faster (in terms of runtime). Utilizing the GPU allows the simulation of highly complex scenes involving numerous bodies undergoing large deformations. Figure 5.12 shows the result of a high speed collision between six elastic bunnies and three elastic dragons. Small features such as the dragons' horns and the bunnies' ears make this a challenging collision resolution problem. However, the Eulerian simulator handles the interaction between bunnies seamlessly, even capturing details such as the bunnies' ears being pinned under the horns and inside the mouths of the dragons. The simulator also handles the resulting large deformations, such as the blue bunny in the bottom right corner being squished flat. Again, this is all done without an explicit surface or volumetric mesh, on a simple regular grid.

Figure 5.11: Performance of the primal-dual interior point algorithm. Results for the GPU implementation of the algorithm are given for 10^4, 10^5 and 10^6 degrees-of-freedom. For comparison we show the performance of a CPU implementation of the algorithm for 10^3 variables.

Figure 5.12: Six deformable bunnies and three deformable dragons undergo a catastrophic collision. Top Row: Full frame view of the simulation; Bottom Row: Magnified view. No bunnies were injured in the creation of this simulation.

Example            Dim               Mem (MB)
Bunny              64 × 64 × 64      65.1
Dragon             64 × 32 × 64      50.5
CT Bear            128 × 128 × 128   270.6
2x Bunny (stiff)   128 × 64 × 64     176.5
2x Bunny (soft)    128 × 64 × 64     176.5
3x Bunny           128 × 64 × 64     256.4
Sliding Dragon     64 × 64 × 64      70.2
Rotating Block     64 × 128 × 64     94.5
Bunny Crunch       128 × 128 × 64    1182.39

Table 5.1: Memory usage for each example. The size of the simulation grid as well as the total amount of GPU memory used, in megabytes, is shown.
Example            Steps   f (1-8)   Collision (9)   J (10-13)   v (14-16)   X (17-19)
Bunny              1410    17.23     0.375           0.00        0.0626      1.28
Dragon             1294    9.33      0.474           0.00        1.30        1.27
CT Bear            1344    8.37      0.867           0.0970      16.3        1.84
2x Bunny (stiff)   3142    46.3      1.18            0.0703      20.1        3.62
2x Bunny (soft)    880     45.1      1.22            0.0715      19.3        3.63
3x Bunny           1226    64.9      1.84            3.17        26.8        4.72
Sliding Dragon     888     6.02      0.850           0.104       38.3        2.15
Rotating Block     906     6.75      0.828           0.0342      10.52       1.32
Bunny Crunch       2152    161.7     6.50            15.32       1134.0      19.2

Table 5.2: Performance of each example. The total number of time steps taken and the average time in milliseconds for each stage of the algorithm is shown. The numbers in brackets indicate the corresponding lines in Algorithm 3.

5.6.4  Discussion

The Eulerian approach is best suited to simulations during which objects undergo large deformations and/or complex collisions with either themselves or other objects. The approach is elegant since it provides implicit self-collision detection. Furthermore, collisions can be handled on the same grid structure as the simulation. Because the mesh is fixed, problems with mesh tangling are avoided, which can be a significant advantage over previous Lagrangian methods. In our present implementation, the fixed configuration of the mesh leads directly to an easily definable stability criterion, which allows us to derive a simple adaptive time stepping scheme that not only ensures stability but reduces total simulation time. The simplicity of the entire system is illustrated by the fact that it has only 5 user-defined parameters (including those for the QP solver): the time step parameter, the grid size and grid position, machine epsilon and a minimum threshold for the accuracy of the QP solver. Another feature of the algorithm is that volumetric objects can be directly simulated, provided that the object is defined as a volumetric function that can be sampled. The method also captures dynamic grid-scale detail, which increases the visual fidelity of the simulations.
This is evident in the impact examples shown in this paper. The simulator's ability to directly simulate volumetric data also makes it uniquely applicable to a wide range of medical and biomechanical problems which require subject-specific simulations. Bottlenecks in algorithm performance currently exist during the force computation and velocity update steps (Table 5.2). The force computation step performs subsampling of the simulation mesh and thus is restricted by the texture performance of the graphics hardware. Furthermore, objects are processed serially. Fast culling of empty grid cells and parallel treatment of objects would be worthwhile optimizations. The time taken for the velocity update step is most affected by the number of constraints in the QP. Simulations with more objects yield greater numbers of constraints, and the performance of the GPU QP algorithm behaves accordingly (Figure 5.11 and Figure 5.10). Both phases of the algorithm devoted to collision detection take negligible time. Memory usage for the algorithm is high; however, this is expected due to its volumetric nature. Explicit time stepping and the stability criterion of the Eulerian advection step restrict time step size. Overcoming this will be important for reducing simulation runtime.

5.6.5  Limitations and Future Work

This algorithm has several limitations. First, we make no attempt to deal with subcell boundaries except in the case of inter-object collision resolution. Second, because we have chosen to use an Eulerian advection step, the time steps taken by the simulator will be small compared to semi- or fully Lagrangian advection schemes. Third, the current implementation exhibits strong numerical damping. Fourth, because our collision constraints are based on volume of intersection, they (like those of Allard et al. [1]) are not suitable for infinitesimally thin objects such as cloth. More accurate processing of object self-collisions is also an open research problem.
Fifth, the current explicit integration scheme necessitates very small time steps for stiff materials. Sixth, the performance and memory usage of our current GPU implementation (Table 5.1) could be greatly improved using known code optimizations. Finally, display is more involved with Eulerian methods if one wishes to render polygonal surfaces. All of these limitations open avenues for promising future work. Embedded boundary techniques have been investigated in the fluids literature and these techniques could be incorporated into the Eulerian solids framework. In addition, utilizing a more advanced advection scheme and integrator could help reduce the damping present in the method. One particularly interesting direction of work would be to adopt the fluid simulation technique of Brochu et al. [18]. Because this type of simulator provides an explicit surface mesh, it would alleviate problems with boundary conditions and display. Coupling an Eulerian solids simulation with an Eulerian fluids simulation on the same grid would be a useful extension.

5.7  Conclusions

The presented Eulerian simulator handles highly deformed objects without the need for explicit remeshing (Figure 5.3). Our collision constraint formulation allows proper frictionless interaction between multiple deformable and rigid bodies. The constraint derivation correctly accounts for the region of intersection of two objects and admits a straightforward extension to frictional contact. The general nature of the derivation means these constraints can be used on grids of any element type by defining alternate interpolating functions. Finally, the algorithm accurately resolves extremely challenging collision scenarios involving multiple deformable objects enduring large deformations (Figure 5.12).
Chapter 6

Conclusions

In this thesis we have presented two algorithms which address different stages in a pipeline for generating subject-specific dynamic simulations of musculoskeletal models (Figure 1.2). We have focused on difficulties in the data acquisition and simulation phases of the pipeline.

6.1  Overview of Contributions

In terms of data acquisition we have made two important contributions to the current research literature. The first is the application of DTI to complicated musculoskeletal systems such as the human forearm. Previous work focused mainly on computing bulk fibre statistics for larger muscles such as those of the human calf. In contrast to this, we focus on micro-architectural reconstruction. Secondly, we have provided the first (and only, to the author's knowledge) denoising algorithm specifically designed for musculoskeletal DTI. The presented denoising algorithm for musculoskeletal DTI deviates sharply from previous methods. Current state-of-the-art methods focus on the problem of denoising arbitrary diffusion tensor or diffusion weighted images. Typically they are applied to neurological data, which has considerably different anatomical features than muscle. Neurological data also demonstrates different SNR properties. Instead, our focus is on defining a set of very specific constraints which tailor the algorithm to the task at hand, denoising musculoskeletal DTI. This approach yields quantitative results that exceed those of more general state-of-the-art algorithms. In terms of simulation, our work focused on removing the requirement for creating an object-specific discretization since, for complicated structures like muscles, this process can be difficult. Our method is unique in that it allows the direct simulation of three-dimensional volumetric data, such as MRI data, with proper dynamics, collision detection and resolution. This approach is heavily informed by our experience working with medical data.
Our simulator is designed to handle close-contacting volumetric objects in a straightforward manner. Though the current implementation is slow, we feel that the simplification produced in the imaging pipeline itself (that we no longer need to generate additional discretizations) is well worth this trade-off. The combination of these algorithms allows for a potential shift of the typical data-driven pipeline, because complicated geometric operations (mesh generation and maintenance) need no longer be performed. Instead, with the exception of the segmentation stage, they are replaced with simpler-to-implement, image-processing-type computations.

6.2  Future Work

In terms of future work we intend to move further toward the end goal of subject-specific musculoskeletal biomechanics. With regard to data acquisition there are several important steps that need to be taken. First, further validation of our DTI denoising algorithm needs to be undertaken. Acquisition of both anatomical and diffusion tensor data from both the forearm and other anatomical regions could be validated against gold standard measurement techniques such as ultrasound (as was done in Lansdown et al. [54]). Modifications to the acquisition protocol itself, such as increasing the field-of-view and decreasing scan time, would also increase the utility of musculoskeletal DTI as both a research and a clinical tool. One potential method of accomplishing both of these goals would be to reconstruct muscle fibre architecture from a sparse set of DTI images. Techniques from compressed sensing [30] could potentially be applied to our optimization based problem formulation. Our Eulerian solid simulator also offers many avenues of future work. Since our end goal is to simulate large mechanical systems, performance improvements to the simulator are paramount. In this regard, the small time steps the simulator must take are a source of consternation. These occur for two reasons.
The first is that our integrator is explicit, so the time-step size is limited by the magnitude of the forces acting on the simulated objects. An implicit time-stepping scheme would help in this regard. Second, certain Eulerian advection schemes impose a time-step restriction based on the resolution of the spatial discretization. Methods such as semi-Lagrangian advection [90] alleviate this problem but again would need to be extended to accurately handle embedded boundaries and collisions, and to avoid degenerate deformations. Variational integration approaches have also been shown to be useful in this regard, and applying these to the Eulerian solids problem would be fascinating [68]. Adding incompressible behaviour to the simulator is also important since most biological materials have a Poisson's ratio close to 0.5. In fluid simulators this is often accomplished by maintaining a divergence-free velocity field (see Bridson [17] for example). However, for solids it is also important to enforce the incompressibility constraint at the positional level. A numerical method that does so while working in concert with our previously described collision-handling method would be useful. Reduced formulations could be derived from volume-preserving parameterizations [3], which would alleviate the burden of requiring an explicit constraint satisfaction procedure. Finally, a combination of the purely Eulerian methods with Lagrangian methods, such as that of Faure et al. [33], could yield fast, robust simulators that require little, if any, preprocessing of medical imaging data.

6.3  Final Discussion

The objective of this thesis was to provide a new pipeline for musculoskeletal simulations, and two new algorithms were designed that improve the image-processing and simulation tools available for volumetric data.
Obviously a large amount of work remains to complete the pipeline and produce dynamic simulations of full musculoskeletal systems but, as noted above, it forms a promising research direction that evolves naturally from this thesis. The end goal is a method for producing personalized biomechanical simulations from subject-specific data in an efficient and non-invasive manner. Such a method has applications ranging from the medical domain to computer graphics and neuroscience. While many open problems remain, in this thesis we have developed two computational tools which bring us closer to this goal.

Bibliography

[1] J. Allard, F. Faure, H. Courtecuisse, F. Falipou, C. Duriez, and P. Kry. Volume contact constraints at arbitrary resolution. ACM Trans. Graph., 29(4):82:1–82:10, July 2010. → pages 16, 81
[2] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. LAPACK Users' Guide. Society for Industrial and Applied Mathematics, Philadelphia, PA, third edition, 1999. → pages 33
[3] A. Angelidis, M.-P. Cani, G. Wyvill, and S. King. Swirling-sweepers: Constant volume modeling. In Pacific Graphics, Korea, Oct. 2004. → pages 86
[4] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache. Log-Euclidean metrics for fast and simple calculus on diffusion tensors. Magnetic Resonance in Medicine, 56(2):411–421, 2006. → pages 10
[5] U. Ascher. Numerical Methods for Evolutionary Differential Equations. Society for Industrial and Applied Mathematics, 2008. → pages 11
[6] J. W. Banks, D. W. Schwendeman, A. K. Kapila, and W. D. Henshaw. A high-resolution Godunov method for compressible multi-material flow on overlapping grids. J. Comput. Phys., 223(1):262–297, 2007. → pages 15
[7] A. W. Bargteil, C. Wojtan, J. K. Hodgins, and G. Turk. A Finite Element Method for Animating Large Viscoplastic Flow. ACM Trans. Graph., 26(3):16:1–16:8, July 2007. → pages 14, 15
[8] P. T. Barton and D.
Drikakis. An Eulerian method for multi-component problems in non-linear elasticity with sliding interfaces. J. Comput. Phys., 229(15):5518–5540, 2010. → pages 15
[9] P. J. Basser, S. Pajevic, C. Pierpaoli, J. Duda, and A. Aldroubi. In vivo fiber tractography using DT-MRI data. Magnetic Resonance in Medicine, 44(4):625–632, 2000. → pages 23
[10] S. Basu, T. Fletcher, and R. Whitaker. Rician noise removal in diffusion tensor MRI. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006, pages 117–125, 2006. → pages 10, 23, 49, 50
[11] C. Batty, F. Bertails, and R. Bridson. A fast variational framework for accurate solid-fluid coupling. ACM Trans. Graph., 26(3):100:1–100:7, July 2007. → pages 16
[12] N. Bell and M. Garland. Cusp: Generic parallel algorithms for sparse matrix and graph computations, 2010. Version 0.1.0. http://cusp-library.googlecode.com. → pages 70
[13] O. Bergmann, A. Lundervold, and T. Steihaug. Generating a synthetic diffusion tensor dataset. In Computer-Based Medical Systems, 2005. Proceedings. 18th IEEE Symposium on, pages 277–281, 2005. → pages 28, 35
[14] D. Le Bihan, editor. Diffusion and Perfusion Magnetic Resonance Imaging: Applications to Functional MRI. Raven Press, 1992. → pages 106
[15] S. S. Blemker and S. L. Delp. Three-dimensional representation of complex muscle architectures and geometries. Annals of Biomedical Engineering, 33:661–673, 2005. → pages 13, 23
[16] P. W. Brand, R. B. Beach, and D. E. Thompson. Relative tension and potential excursion of muscles in the forearm and hand. Journal of Hand Surgery, 3(6):209–219, 1981. → pages 23
[17] R. Bridson. Fluid Simulation for Computer Graphics. A.K. Peters, Ltd., 2008. → pages 28, 85
[18] T. Brochu, C. Batty, and R. Bridson. Matching fluid simulation elements to surface geometry and topology. ACM Trans. Graph., 29(4):47:1–47:9, July 2010. → pages 81
[19] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization.
SIAM Journal on Scientific Computing, 16:1190–1208, 1995. → pages 32
[20] M. Carlson, P. J. Mucha, R. B. Van Horn, III, and G. Turk. Melting and flowing. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation, SCA '02, pages 167–174, 2002. → pages 14
[21] M. Carlson, P. J. Mucha, and G. Turk. Rigid fluid: animating the interplay between rigid bodies and fluid. ACM Trans. Graph., 23:377–384, August 2004. → pages 16
[22] C. Chefd'hotel, D. Tschumperlé, R. Deriche, and O. Faugeras. Constrained flows of matrix-valued functions: Application to diffusion tensor regularization. In A. Heyden, G. Sparr, M. Nielsen, and P. Johansen, editors, Computer Vision – ECCV 2002, volume 2350 of Lecture Notes in Computer Science, pages 251–265. Springer Berlin / Heidelberg, 2002. → pages 23
[23] M. Cline and D. Pai. Post-stabilization for rigid body simulation with contact and constraints. In Proc. ICRA'03, volume 3, pages 3744–3751, 2003. → pages 65
[24] B. M. Damon, Z. Ding, A. W. Anderson, A. S. Freyer, and J. C. Gore. Validation of diffusion tensor MRI-based muscle fiber tracking. Magn Reson Med, 48(1):97–104, 2002. → pages 23
[25] S. L. Delp and J. P. Loan. A graphics-based software system to develop and analyze models of musculoskeletal structures. Computers in Biology and Medicine, 25(1):21–34, 1995. → pages 12
[26] S. L. Delp and J. P. Loan. A computational framework for simulating and analyzing human and animal movement. Computing in Science and Engg., 2(5):46–55, 2000. → pages 12
[27] M. Desbrun and M. P. Gascuel. Smoothed particles: a new paradigm for animating highly deformable bodies. In Proceedings of the Eurographics workshop on Computer animation and simulation, pages 61–76, 1996. → pages 14
[28] F. Devernay. C/C++ Minpack. http://devernay.free.fr/hacks/cminpack/cminpack.html. → pages 33
[29] Z. Ding, J. C. Gore, and A. W. Anderson. Case study: reconstruction, visualization and quantification of neuronal fiber pathways.
In VIS '01: Proceedings of the conference on Visualization '01, pages 453–456, Washington, DC, USA, 2001. IEEE Computer Society. → pages 11, 23
[30] D. Donoho. Compressed sensing. Information Theory, IEEE Transactions on, 52(4):1289–1306, 2006. → pages 85
[31] D. Enright, R. Fedkiw, J. Ferziger, and I. Mitchell. A hybrid particle level set method for improved interface capturing. J. Comput. Phys., 183(1):83–116, 2002. ISSN 0021-9991. → pages 15
[32] F. Faure, J. Allard, and M. Nesme. Eulerian Contact for Versatile Collision Processing. Research Report RR-6203, INRIA, 2007. → pages 15
[33] F. Faure, B. Gilles, G. Bousquet, and D. K. Pai. Sparse meshless models of complex deformable solids. ACM Trans. Graph., 30:73:1–73:10, August 2011. ISSN 0730-0301. doi:http://doi.acm.org/10.1145/2010324.1964968. URL http://doi.acm.org/10.1145/2010324.1964968. → pages 86
[34] R. P. Fedkiw, T. Aslam, B. Merriman, and S. Osher. A non-oscillatory Eulerian approach to interfaces in multimaterial flows (the ghost fluid method). J. Comput. Phys., 152(2):457–492, 1999. ISSN 0021-9991. → pages 64
[35] P. Fillard, X. Pennec, V. Arsigny, and N. Ayache. Clinical DT-MRI estimation, smoothing, and fiber tracking with log-Euclidean metrics. Medical Imaging, IEEE Transactions on, 26(11):1472–1482, 2007. → pages 10, 23, 49
[36] FMRIB. FMRIB's Diffusion Toolbox – FDT v2.0. Webpage: http://www.fmrib.ox.ac.uk/fsl/fdt/index.html, 2006. → pages 20
[37] N. Foster and D. Metaxas. Realistic animation of liquids. Graph. Models Image Process., 58(5):471–483, 1996. ISSN 1077-3169. → pages 14
[38] M. Froeling, J. Oudeman, S. van den Berg, K. Nicolay, M. Maas, G. J. Strijkers, M. R. Drost, and A. J. Nederveen. Reproducibility of diffusion tensor imaging in human forearm muscles at 3.0 T in a clinical setting. Magnetic Resonance in Medicine, 64(4):1182–1190, 2010. → pages 10
[39] C. Gans. Fiber architecture and muscle function. Exercise and Sport Sciences Reviews, 10(1):160–207, 1982. → pages 6
[40] B.
Gilles and D. K. Pai. Fast musculoskeletal registration based on shape matching. International conference on medical image computing and computer assisted intervention (MICCAI'08), pages 822–829, 2008. → pages 37
[41] B. Gilles, G. Bousquet, F. Faure, and D. K. Pai. Frame-based elastic models. ACM Trans. Graph., 30(2):15:2–15:12, April 2011. → pages 14
[42] T. G. Goktekin, A. W. Bargteil, and J. F. O'Brien. A method for animating viscoelastic fluids. ACM Trans. Graph., 23:463–468, August 2004. → pages 14
[43] H. Gray. Anatomy of the human body. Lea and Febiger, Philadelphia, 1918. → pages 2
[44] M. Harris, S. Sengupta, and J. D. Owens. Parallel prefix sum (scan) with CUDA. In H. Nguyen, editor, GPU Gems 3. Addison Wesley, 2007. → pages 68
[45] C. Högfors, D. Karlsson, and B. Peterson. Structure and internal consistency of a shoulder model. Journal of Biomechanics, 28(7):767–777, 1995. → pages 12
[46] J. Humphrey, D. Price, K. Spagnoli, A. Paolini, and E. Kelmelis. CULA: hybrid GPU accelerated linear algebra routines. In Proc. SPIE, volume 7705, page 1, 2010. → pages 70
[47] G. Irving, J. Teran, and R. Fedkiw. Invertible finite elements for robust simulation of large deformation. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, SCA '04, pages 131–140, 2004. → pages 14
[48] J. J. Moré, B. S. Garbow, and K. E. Hillstrom. User Guide for MINPACK-1, Report ANL-80-74. Argonne, Illinois, USA, 1980. See http://www-fp.mcs.anl.gov/otc/Guide/softwareGuide/Blurbs/minpack.html. → pages 33, 36
[49] M. Jenkinson and S. M. Smith. A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5(2):143–156, 2001. → pages 20
[50] H. Jiang, P. C. van Zijl, J. Kim, G. D. Pearlson, and S. Mori. DtiStudio: Resource program for diffusion tensor computation and fiber bundle tracking. Computer Methods and Programs in Biomedicine, 81(2):106–116, 2006. → pages 11
[51] K. Kamrin and J.-C. Nave.
An Eulerian approach to the simulation of deformable solids: Application to finite-strain elasticity. Submitted, 2009. URL http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.3799v2.pdf. → pages 15, 56
[52] D. M. Kaufman, S. Sueda, D. L. James, and D. K. Pai. Staggered projections for frictional contact in multibody systems. ACM Trans. Graph., 27(5):164:1–164:11, December 2008. → pages 65
[53] F. Labelle and J. R. Shewchuk. Isosurface stuffing: fast tetrahedral meshes with good dihedral angles. ACM Trans. Graph., 26(3):57:1–57:10, July 2007. → pages 14
[54] D. A. Lansdown, Z. Ding, M. Wadington, J. L. Hornberger, and B. M. Damon. Quantitative diffusion tensor MRI-based fiber tracking of human skeletal muscle. Journal of Applied Physiology, 103(2):673–681, 2007. → pages 9, 23, 85, 108
[55] S. Le Grand. Broad-Phase Collision Detection with CUDA. In H. Nguyen, editor, GPU Gems 3. Addison-Wesley, 2007. → pages 67, 68
[56] B. Lévy and Y. Liu. Lp Centroidal Voronoi Tessellation and its applications. ACM Trans. Graph., 29(4):119:1–119:11, July 2010. ISSN 0730-0301. → pages 14
[57] R. Lieber and J. Fridén. Clinical significance of skeletal muscle architecture. Clinical orthopaedics and related research, 383:140, 2001. → pages 3
[58] R. L. Lieber and F. T. Blevins. Skeletal muscle architecture of the rabbit hindlimb: Functional implications of muscle design. Journal of Morphology, 199(1):93–101, 1989. → pages 7
[59] R. L. Lieber, M. D. Jacobson, B. M. Fazeli, R. A. Abrams, and M. J. Botte. Architecture of selected muscles of the arm and forearm: anatomy and implications for tendon transfer. The Journal of Hand Surgery, 17A:787–798, 1992. → pages 3, 7, 22, 51
[60] F. Losasso, T. Shinar, A. Selle, and R. Fedkiw. Multiple interacting liquids. ACM Trans. Graph., 25(3):812–819, July 2006. ISSN 0730-0301. → pages 14
[61] W. Maurel, D. Thalmann, P. Hoffmeyer, P. Beylot, P. Gingins, P. Kalra, and N. M.
Thalmann. A biomechanical musculoskeletal model of human upper limb for dynamic simulation. Proceedings of the Eurographics workshop on Computer animation and simulation '96, pages 121–136, 1996. → pages 12
[62] A. McAdams, A. Selle, K. Ward, E. Sifakis, and J. Teran. Detail preserving continuum simulation of straight hair. ACM Trans. Graph., 28(3):62:1–62:6, July 2009. → pages 15
[63] T. McGraw, B. Vemuri, E. Ozarslan, Y. Chen, and T. Mareci. Variational denoising of diffusion weighted MRI. Inverse Problems and Imaging, 3(4):625–648, 2009. → pages 23
[64] S. Mehrotra. On the Implementation of a Primal-Dual Interior Point Method. SIOPT, 2(4):575–601, 1992. → pages 69
[65] G. Miller and P. Colella. A High-Order Eulerian Godunov Method for Elastic-Plastic Flow in Solids. J. Comput. Phys., 167(1):131–176, 2001. ISSN 0021-9991. → pages 15
[66] J. J. Moreau. Quadratic programming in mechanics: One-sided constraints. Journal SIAM Control, 4(1):153–158, 1966. → pages 64
[67] S. Mori and P. van Zijl. Fiber tracking: principles and strategies – a technical review. NMR in Biomedicine, 15(7–8):468–480, 2002. → pages 11, 23
[68] P. Mullen, K. Crane, D. Pavlov, Y. Tong, and M. Desbrun. Energy-preserving integrators for fluid animation. ACM Trans. Graph., 28:38:1–38:8, July 2009. → pages 85
[69] M. Müller, R. Keiser, A. Nealen, M. Pauly, M. Gross, and M. Alexa. Point based animation of elastic, plastic and melting objects. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, SCA '04, pages 141–151, 2004. → pages 14
[70] M. Müller, M. Teschner, and M. Gross. Physically-based simulation of objects represented by surface meshes. In Proc. CGI 2004, pages 26–33, 2004. → pages 14
[71] R. Narain, A. Golas, and M. C. Lin. Free-flowing granular materials with two-way solid coupling. ACM Trans. Graph., 29:173:1–173:10, December 2010. → pages 15
[72] R. Neji, N. Azzabou, N. Paragios, and G. Fleury.
A convex semi-definite positive framework for DTI estimation and regularization. Advances in Visual Computing, pages 220–229, 2007. → pages 23
[73] V. Ng-Thow-Hing. Anatomically-based models for physical and geometric reconstruction of humans and other animals. PhD thesis, University of Toronto, Toronto, Ont., Canada, 2001. Adviser: Eugene Fiume. → pages 13, 14
[74] J. Nocedal and S. Wright. Numerical Optimization. Springer-Verlag, 1999. → pages 69
[75] NVIDIA Corporation. CUDA Programming Guide, 2010. URL http://developer.download.nvidia.com/compute/cuda/3_1/toolkit/docs/NVIDIA_CUDA_C_ProgrammingGuide_3.1.pdf. → pages 68, 70
[76] S. Osher and R. Fedkiw. Level set methods and dynamic implicit surfaces. Springer Verlag, 2003. ISBN 0387954821. → pages 67
[77] M. G. Pandy. Computer modeling and simulation of human movement. Annual Review of Biomedical Engineering, 3(1):245–273, 2001. → pages 12
[78] G. Parker. Tracing fiber tracts using fast marching. Proceedings of the 8th Annual Meeting of ISMRM, page 85, 2000. → pages 11
[79] L. Petrovic, M. Henne, and J. Anderson. Volumetric methods for simulation and rendering of hair. Technical report, Pixar Animation Studios, 2005. → pages 15
[80] S. J. Piazza and S. L. Delp. The influence of muscles on knee flexion during the swing phase of gait. Journal of Biomechanics, 29(6):723–733, 1996. → pages 12
[81] S. Pieper, B. Lorensen, W. Schroeder, and R. Kikinis. The NA-MIC Kit: ITK, VTK, pipelines, grids and 3D Slicer as an open platform for the medical image computing community. 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, 2006, pages 698–701, 2006. → pages 42
[82] W. Press, S. Teukolsky, W. Vetterling, and B. Flannery. Numerical Recipes in C. Cambridge University Press, Cambridge, UK, 2nd edition, 1992. → pages 35
[83] R. Raikova. A general approach for modelling and mathematical investigation of the human upper limb. J Biomech, 25(8):857–867, Aug 1992. → pages 12
[84] K.
Ravichandiran, M. Ravichandiran, M. L. Oliver, K. S. Singh, N. H. McKee, and A. M. R. Agur. Determining physiological cross-sectional area of extensor carpi radialis longus and brevis as a whole and by regions using 3D computer muscle models created from digitized fiber bundle data. Computer Methods and Programs in Biomedicine, 95(3):203–212, 2009. → pages 7, 48, 51
[85] A. Robinson-Mosher, R. E. English, and R. Fedkiw. Accurate tangential velocities for solid fluid coupling. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '09, pages 227–236, 2009. ISBN 978-1-60558-610-6. → pages 16, 60
[86] S. Roettger. The volume library. http://www9.informatik.uni-erlangen.de/External/vollib/, 2011. → pages 75
[87] U. Sinha and L. Yao. In vivo diffusion tensor imaging of human calf muscle. Journal of Magnetic Resonance Imaging, 15(1):87–95, 2002. → pages 8
[88] B. Solenthaler, J. Schläfli, and R. Pajarola. A unified particle model for fluid-solid interactions. Computer Animation and Virtual Worlds, 18(1):69–82, 2007. ISSN 1546-427X. → pages 14
[89] A. H. Squillacote. The Paraview Guide. Kitware Inc., 2008. → pages 20, 37
[90] J. Stam. Stable fluids. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, SIGGRAPH '99, pages 121–128, New York, NY, USA, 1999. ACM Press/Addison-Wesley Publishing Co. → pages 85
[91] E. O. Stejskal and J. E. Tanner. Spin diffusion measurements: Spin echoes in the presence of a time-dependent field gradient. The Journal of Chemical Physics, 42(1):288–292, 1965. → pages 106, 107
[92] S. Sueda, A. Kaufman, and D. K. Pai. Musculotendon simulation for hand animation. In SIGGRAPH '08: ACM SIGGRAPH 2008 papers, pages 1–8, New York, NY, USA, 2008. ACM. → pages 12
[93] S. Sueda, G. L. Jones, D. I. W. Levin, and D. K. Pai. Large-scale dynamic simulation of highly constrained strands. ACM Transactions on Graphics, 30(4):39:1–39:9, Aug 2011. → pages 12
[94] D. Sulsky, Z.
Chen, and H. Schreyer. A particle method for history-dependent materials. Computer Methods in Applied Mechanics and Engineering, 118(1-2):179–196, 1994. ISSN 0045-7825. → pages 15
[95] D. Sulsky, S.-J. Zhou, and H. L. Schreyer. Application of a particle-in-cell method to solid mechanics. Computer Physics Communications, 87(1-2):236–252, 1995. → pages 15
[96] J. Teran, S. Blemker, V. N. T. Hing, and R. Fedkiw. Finite volume methods for the simulation of skeletal muscle. In Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation, SCA '03, pages 68–74, 2003. → pages 13, 58
[97] D. G. Thelen, F. C. Anderson, and S. L. Delp. Generating dynamic simulations of movement using computed muscle control. Journal of Biomechanics, 36(3):321–328, 2003. → pages 12
[98] Y. Tong, S. Lombeyda, A. N. Hirani, and M. Desbrun. Discrete multiscale vector field decomposition. ACM Trans. Graph., 22(3):445–452, 2003. → pages 24
[99] G. J. Tortora and S. R. Grabowski. Principles of Anatomy and Physiology. John Wiley and Sons, Inc., 9th edition, 2000. → pages 48, 49, 108
[100] L. B. Tran and H. S. Udaykumar. A particle-level set-based sharp interface Cartesian grid method for impact, penetration, and void collapse. J. Comput. Phys., 193(2):469–510, 2004. ISSN 0021-9991. → pages 15
[101] J. Trangenstein. A second-order Godunov algorithm for two-dimensional solid mechanics. Computational Mechanics, 13(5):343–359, 1994. ISSN 0178-7675. → pages 15
[102] A. Tristán-Vega and S. Aja-Fernández. DWI filtering using joint information for DTI and HARDI. Medical Image Analysis, 14(2):205–218, 2010. → pages 23, 42, 47, 50
[103] D. Tschumperlé and R. Deriche. Variational frameworks for DT-MRI estimation, regularization and visualization. In Ninth IEEE International Conference on Computer Vision, 2003. Proceedings, pages 116–121, 2003. → pages 23
[104] C. C. van Donkelaar, L. J. G. Kretzers, P. H. M. Bovendeerd, I. M. A. Lataster, K. Nicolay, J. D. Janssen, and M. R.
Drost. Diffusion tensor imaging in biomechanical studies of skeletal muscle function. Journal of Anatomy, 194(1):79–88, 1999. → pages 8
[105] Z. Wang, B. Vemuri, Y. Chen, and T. Mareci. A constrained variational principle for direct estimation and smoothing of the diffusion tensor field from complex DWI. Medical Imaging, IEEE Transactions on, 23(8):930–939, Aug. 2004. → pages 23
[106] D. Weinstein, G. Kindlmann, and E. Lundberg. Tensorlines: advection-diffusion based propagation through diffusion tensor fields. In VIS '99: Proceedings of the conference on Visualization '99, pages 249–253, Los Alamitos, CA, USA, 1999. IEEE Computer Society Press. → pages 11, 23
[107] D. Weishaupt, V. D. Köchli, and B. Marincek. How does MRI work? Springer-Verlag Berlin Heidelberg New York, 2003. → pages 99, 100, 101, 103, 104
[108] C.-F. Westin, S. E. Maier, H. Mamata, A. Nabavi, F. A. Jolesz, and R. Kikinis. Processing and visualization of diffusion tensor MRI. Medical Image Analysis, 6(2):93–108, 2002. → pages 23
[109] M. Wicke, D. Ritchie, B. M. Klingner, S. Burke, J. R. Shewchuk, and J. F. O'Brien. Dynamic local remeshing for elastoplastic simulation. ACM Trans. Graph., 29:49:1–49:11, July 2010. → pages 14
[110] Wikipedia. Magnetic resonance imaging — Wikipedia, the free encyclopedia, 2004. URL http://en.wikipedia.org/wiki/Magnetic_resonance_imaging. [Online; accessed 18-January-2010]. → pages 105
[111] G. L. Wolf and C. Popp. NMR: A Primer of Medical Imaging. SLACK Incorporated, 1984. → pages 101, 102, 104, 105
[112] Y. Xu, J. Weaver, D. Healy, and J. Lu. Wavelet transform domain filters: a spatially selective noise filtration technique. Image Processing, IEEE Transactions on, 3(6):747–758, 1994. → pages 11
[113] S. Zhang and D. Laidlaw. Elucidating neural structure in diffusion tensor MRI volumes using streamtubes and streamsurfaces. In Proceedings of ISMRM, page 505, 2001. → pages 11
[114] X. Zhang, H. Ye, W. Chen, and W. Tian.
Denoising DTI images based on regularized filter and fiber tracking. Volume 922, pages 720–723. AIP, 2007. → pages 11, 23
[115] X. Zhou and J. Lu. NURBS-based Galerkin method and application to skeletal muscle modeling. In SPM '05: Proceedings of the 2005 ACM symposium on Solid and physical modeling, pages 71–78, New York, NY, USA, 2005. ACM. → pages 14
[116] Y. Zhu and R. Bridson. Animating sand as a fluid. ACM Trans. Graph., 24(3):965–972, July 2005. → pages 15, 59

Appendix A

Background

This appendix contains relevant background for readers who may be unfamiliar with medical imaging, specifically the Magnetic Resonance Imaging techniques used in parts of this thesis. We begin with a discussion of Magnetic Resonance Imaging in general, followed by a review of Diffusion Weighted and Diffusion Tensor Imaging. We close the appendix with a brief description of muscle structure.

A.1  Magnetic Resonance Imaging

Magnetic Resonance Imaging (MRI) is a non-invasive imaging technique which relies on fundamental properties of hydrogen nuclei in magnetic fields [107]. Hydrogen nuclei have a distinct magnetic moment and, when placed into a strong magnetic field (B0), their magnetic moments (henceforth termed spins, to follow the typical MRI terminology) adopt parallel or anti-parallel orientations with respect to the direction of the field. The spins then precess about the field direction at a specific frequency called the Larmor frequency, which is proportional to the strength of the main magnetic field (related by the gyromagnetic ratio) [107]. The anti-parallel alignment is a higher energy state than the parallel alignment. This causes more spins to be in the parallel state and gives a group of spins a net magnetization in the direction of B0.

Figure A.1: A spin in alignment with the B0 magnetic field and the precessing motion of a spin around B0.
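As a concrete illustration of the Larmor relationship just described, the precession frequency is linear in the field strength. The sketch below uses the standard published gyromagnetic constant for hydrogen; the particular field strengths are only illustrative scanner values, not ones used in this thesis.

```python
# Larmor relationship: precession frequency f0 = (gamma / 2*pi) * B0.
# GAMMA_BAR_H is the standard published value for the hydrogen nucleus;
# the field strengths below are illustrative.
GAMMA_BAR_H = 42.577e6  # Hz per tesla (gamma / 2*pi) for hydrogen

def larmor_frequency_hz(b0_tesla):
    """Precession frequency of hydrogen spins in a main field of b0_tesla."""
    return GAMMA_BAR_H * b0_tesla

for b0 in (1.5, 3.0, 7.0):
    print(f"B0 = {b0:.1f} T -> f0 = {larmor_frequency_hz(b0) / 1e6:.1f} MHz")
```

Doubling the field strength doubles the precession frequency, which is why the radio frequency pulses discussed below must be tuned to the scanner's field.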
Figure A.1 shows the alignment of the spins with B0 as well as their precessing motion. Because the spins are uniformly distributed around the axis B0, the transverse component of the net magnetization (the component perpendicular to B0) is zero. It is at this point that resonance comes into play. Resonance is the property that spins will acquire energy if exposed to a radio frequency (RF) pulse oscillating at the Larmor frequency. Let us assume the field B0 is in the z direction in a Cartesian space. We can then apply an RF pulse with its magnetic component B1 in the x − y plane. This leads to two changes in the net magnetization of the spins. First, the addition of energy to the system allows more spins to enter the anti-parallel state; thus the magnetization along the z axis, in the direction of B0, is decreased. Secondly, the x − y component of the spins aligns with the magnetic moment B1, which is rotating in the x − y plane. The net effect is that the magnetization tips towards the x − y plane. The amount of tipping, or flip angle, is determined by the length and power of the applied pulse. If a pulse is applied such that the number of spins in parallel and anti-parallel states is equal, the net magnetization will lie in the x − y plane and the flip angle will be 90 degrees [111] (Figure A.2). A receiving coil placed next to the lattice (sample) containing the spins will have a signal induced due to electromotive force.

Figure A.2: Applying a radio frequency pulse to a collection of spins. B0 is the direction of the main magnetic field and B1 is the direction of the magnetic field attributed to the RF pulse. In the initial diagram B1 and the spins are both rotating at the Larmor frequency. The second diagram uses a rotating reference frame at the Larmor frequency in which both B1 and the spins are stationary. Mxy is the result of tipping M onto the x − y plane.
The amplitude of this signal oscillates at the Larmor frequency due to the precession of the spins and decays with time due to relaxation of the spins [107] (Figure A.3). Relaxation is a critical aspect of MR imaging and there are two mechanisms by which it occurs. The first is that, as the spins lose energy, they return to their initial distribution of parallel and anti-parallel states and thus the net magnetization along the z axis returns. The rate constant with which it returns is termed T1 (Equation A.1). The second type of relaxation is caused by inhomogeneities in B0. Immediately after tipping the magnetization into the x − y plane, the x − y components of the spins are aligned. However, due to inhomogeneities in B0, the Larmor frequency of each spin is slightly different and this causes them to precess at different rates, thus dephasing (Figure A.4). This dephasing leads to a net loss of magnetization in the x − y plane. Interactions between spins cause a similar effect. These two types of dephasing cause a magnetization decay with a rate constant called T2* (Equation A.2). However, in MR imaging the goal is to measure the decay due only to spin interaction (and not magnetic field inhomogeneity). The rate constant of this decay is called T2 and is described by an equation of the same form as Equation A.2, with a different constant [111].

Mlong(t) = M0,long (1 − exp(−t/T1))   (A.1)

Mtrans(t) = M0,trans exp(−t/T2*)   (A.2)

Figure A.3: Typical MR signal induced by the decaying transverse magnetization of the sample.

Figure A.4: Dephasing spins leading to decreased net transverse magnetization.

MR images display measurements of these rate constants which have been spatially localized. However, magnetic resonance, as described above, does not produce an image; it produces a single signal for a sample. In order to localize the signal within a sample, gradient coils are used.
One gradient coil is needed for each dimension in which the signal will be localized, so in order to select a 3D point within a body, three gradient coils are required. The initial step is to select a slice of space to image. This is done by applying a gradient in the slicing direction prior to applying the radio frequency pulse. When the pulse is applied, only a portion of the sample will be precessing at the correct frequency and thus only spins in this region will experience resonance and tip into the x − y plane [107]. The second step of the sequence is phase encoding. Activating a gradient perpendicular to the slicing direction causes the spins to precess at spatially varying speeds. When this gradient is deactivated the spins will return to precessing at the same speed but will be out of phase. The phase difference in the received signals provides the second method for spatial localization [107]. The last step in localizing the MR signal is frequency encoding. The final orthogonal gradient is activated during signal readout. This gradient causes spins to precess at different frequencies based on position along the final gradient encoding direction. This sequence is repeated for each row in the image (provided rows are aligned with the frequency encoding direction). Slice selection is performed once for the image. Then, for each row, phase encoding, followed by frequency encoding and readout, are performed. Fourier transformation is then used to extract phase and frequency information from the received signal. Signal intensity is then assigned to the appropriate x, y, z pixel in the image [107]. Contrast in MR images is related to the T1 and T2 properties of the tissues being examined and these properties are tied to the tissue's molecular structure.
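The role of the Fourier transform in this encoding scheme can be illustrated with a toy example: phase and frequency encoding together sample the 2D spatial Fourier transform ("k-space") of the excited slice, so the image is recovered with an inverse 2D FFT. This is a simplified sketch with a synthetic image, not a pulse-sequence simulation.

```python
# Toy illustration: the acquired data fill k-space (one row per
# phase-encoding step, one column per readout sample), and the image is
# the inverse 2D Fourier transform of that data.
import numpy as np

# Synthetic 2D "slice": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Emulate a fully sampled acquisition with a forward 2D FFT.
k_space = np.fft.fft2(image)

# Reconstruction: inverse 2D Fourier transform of k-space.
reconstruction = np.abs(np.fft.ifft2(k_space))

print("max reconstruction error:", np.abs(reconstruction - image).max())
```

With fully sampled, noise-free k-space the original slice is recovered to machine precision; practical reconstructions must additionally contend with noise and partial sampling.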
Recall that T1 describes the time it takes for a tissue to regain its longitudinal magnetization after excitation by an RF pulse, and that longitudinal magnetization is related to the number of spins in low (parallel to B0) and high (anti-parallel to B0) energy states. The rate at which this magnetization returns is therefore proportional to the ability of the surrounding tissue to absorb energy from the spins [111]. Spins emit energy at the Larmor frequency and so molecules with similar natural frequencies absorb this energy more readily. Aqueous tissues, which contain molecules that vibrate freely, have natural frequencies that are too high to efficiently absorb energy from the spins and this causes slow T1 relaxation. In a T1-weighted image these tissues will be dark since the T1 signal will have a relatively small amplitude when signal readout begins. A similar effect occurs for protein molecules, which are large and have a low natural frequency. However, the molecules of fat tissue have a natural frequency near the Larmor frequency and, due to this, T1 relaxation occurs quickly. Therefore fat tissue appears as a bright region on T1-weighted images [111]. A tissue's T2 contrast is related to inhomogeneities in the local B0 magnetic field. These tend to be caused by hydrogen nuclei on larger molecules
Aqueous liquids appear as bright regions in T2 weighted images because they are free of these larger molecules which could disrupt the magnetic field. Figure A.5 shows a comparison of both types of images.

A.2  Diffusion Weighted Imaging

Diffusion Weighted Imaging (DWI) is a method for observing the diffusion of water molecules within tissues. It achieves this using gradient pulses to label spins, analogous to the way spatial encoding uses gradients to label spins according to their positions in space. Imagine a single spin in space. If this spin is exposed to a gradient field G for a short period of time ∆t, the net dephasing induced can be expressed as

    φ1 = γ ∫₀^∆t G z1 dt = γ G ∆t z1        (A.3)

where z1 is the spin position and γ is the gyromagnetic ratio [14]. In evaluating this integral we assume that ∆t is small and thus the spin position z1 is constant. We now wait for a small amount of time (t1), in order to let the spin “wander around” due to diffusion, and then expose it to an “unlabelling” gradient pulse of opposite magnitude. The dephasing induced by this gradient can be expressed as

    φ2 = γ ∫_{t1}^{t1+∆t} −G z2 dt = −γ G ∆t z2        (A.4)

where z2 is the spin position during the second gradient pulse [14]. We can now express the net dephasing as φ1 + φ2 = γ G ∆t (z1 − z2) [14]. If the spin has diffused to a different position in the time between gradient pulses, the net dephasing will be non-zero. Dephasing leads to a change in the T2 signal, which manifests as a change in image contrast. Thus, if we add diffusion gradients to an MR image sequence and observe the T2 signal, we produce an image in which the pixel intensity is directly related to the amount of diffusion occurring along the diffusion gradient direction.
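This labelling/unlabelling argument can be checked with a few lines of arithmetic (all values below are assumed, for illustration only):

```python
import numpy as np

# Numerical check of Eqs. A.3-A.4: a stationary spin is perfectly "unlabelled"
# by the second gradient pulse, while a spin that diffuses between pulses
# accumulates a net phase of gamma * G * dt * (z1 - z2).
gamma = 2.675e8     # gyromagnetic ratio (rad/s/T)
G = 0.03            # diffusion gradient strength (T/m), illustrative
dt = 1e-3           # gradient pulse duration (s)

def net_phase(z1, z2):
    """Net dephasing after the paired labelling/unlabelling pulses."""
    phi1 = gamma * G * dt * z1     # labelling pulse (Eq. A.3)
    phi2 = -gamma * G * dt * z2    # unlabelling pulse (Eq. A.4)
    return phi1 + phi2

print(net_phase(1e-3, 1e-3))      # stationary spin: 0.0
print(net_phase(1e-3, 1.01e-3))   # displaced spin: non-zero net phase
```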
Stejskal and Tanner developed the equation which relates signal strength (image intensity) to the diffusion tensor D, and it is this equation [91] which links Diffusion Weighted Imaging (DWI) to Diffusion Tensor Imaging (DTI). A DWI image is an image acquired with a single diffusion gradient applied; its pixel intensities show the amount of diffusion in a single direction. A DTI image is an image with a diffusion tensor at each pixel, thus characterizing diffusion in many directions. It should come as no surprise, then, that it takes multiple DWI images to reconstruct a single DTI image. The Stejskal-Tanner equation is

    S / S0 = exp(−b gk^T D gk)        (A.5)

[91] where b is the b-value, a collection of parameters (including gradient intensity and gyromagnetic ratio) supplied by the MR scanner, and gk is the 3 × 1 vector expressing the kth applied gradient direction. S is the signal intensity of a DWI image and S0 is the signal intensity at the same spatial location from a non-diffusion weighted image. In the case of DTI, the unknown is D, a symmetric 3 × 3 tensor. To build one DTI image we need (in the nominal case) k = 7 DWI images which, when combined with Equation A.5, yield 6 equations per image pixel which can be solved. In practice more than 7 DWI images are acquired and the system of equations is solved using least squares in order to obtain robustness to image noise.

The most important property of the diffusion tensor is that its primary eigenvector describes the most likely direction of diffusion at a spatial location (x, y, z). Fibrous biological tissues restrict diffusion perpendicular to the fibre direction, and this causes the primary eigenvector of the diffusion tensor to point along the axis of the fibres. By integrating through the eigenvector fields described by these DTI images we can reconstruct fibre paths within the tissue itself.

A.3  Muscle Structure

DTI is an ideal method for observing skeletal muscle fibre architecture.
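The least-squares tensor estimation described above (Equation A.5) can be sketched in a few lines. The b-value, gradient directions, and "ground-truth" tensor below are invented for illustration; a real implementation would fit the log-linearized Stejskal-Tanner relation to measured DWI intensities at every voxel.

```python
import numpy as np

# Taking logs of Eq. A.5 gives ln(S/S0) = -b g^T D g, which is linear in the
# six independent entries of the symmetric tensor D.
b = 1000.0  # b-value (s/mm^2), a typical diffusion weighting

# Seven gradient directions (unit vectors); extras add least-squares robustness.
g = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], float)
g /= np.linalg.norm(g, axis=1, keepdims=True)

# A synthetic, muscle-like tensor: strong diffusion along x (the fibre axis).
D_true = np.diag([1.8e-3, 0.4e-3, 0.4e-3])

# Simulated DWI intensities from the Stejskal-Tanner relation (noise-free).
S0 = 1.0
S = S0 * np.exp(-b * np.einsum('ki,ij,kj->k', g, D_true, g))

# Design matrix: -b * [gx^2, gy^2, gz^2, 2 gx gy, 2 gx gz, 2 gy gz].
A = -b * np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                          2 * g[:, 0] * g[:, 1],
                          2 * g[:, 0] * g[:, 2],
                          2 * g[:, 1] * g[:, 2]])
d, *_ = np.linalg.lstsq(A, np.log(S / S0), rcond=None)
D_fit = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
print(np.allclose(D_fit, D_true, atol=1e-9))  # True
```

With noisy data the same least-squares system simply becomes overdetermined rather than exact, which is why more than seven DWI images are acquired in practice.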
Skeletal muscle is composed of long, cylindrical, multinucleate cells which are linked together into muscle fibres. Parallel fibres are then combined into groupings called motor units (Figure A.6). The diffusion of water molecules is restricted by the walls of the cells, and therefore they diffuse along the muscle fibres much more than they do transversely to them. This anisotropic diffusion can be detected using DTI [54], and the diffusion tensors can be used to create vector fields which describe the fibre architecture (muscle fibre fields).

Figure A.6: A cross section of a skeletal muscle illustrating internal structure. Note how nearby muscle fibres are grouped together. Illustration © John Wiley and Sons, Inc. (Tortora and Grabowski [99]), by permission.
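Reconstructing fibre paths from such muscle fibre fields amounts to integrating through the primary-eigenvector field of the tensors. The following is a minimal, hypothetical fibre tracker: it assumes a callable returning a diffusion tensor at any point (here a constant, muscle-like tensor stands in for real, spatially varying DTI data) and Euler-integrates along the primary eigenvector.

```python
import numpy as np

def primary_eigenvector(D):
    """Unit eigenvector of the largest eigenvalue of a symmetric tensor."""
    w, v = np.linalg.eigh(D)        # eigh returns eigenvalues in ascending order
    return v[:, -1]

def track_fibre(tensor_at, seed, step=0.5, n_steps=20):
    """Euler-integrate a fibre path through a tensor field."""
    path = [np.asarray(seed, float)]
    direction = primary_eigenvector(tensor_at(path[0]))
    for _ in range(n_steps):
        e1 = primary_eigenvector(tensor_at(path[-1]))
        if np.dot(e1, direction) < 0:   # eigenvectors have arbitrary sign;
            e1 = -e1                    # keep a consistent orientation
        direction = e1
        path.append(path[-1] + step * e1)
    return np.array(path)

# Toy field: fibres everywhere aligned with the x axis.
tensor_at = lambda p: np.diag([1.8e-3, 0.4e-3, 0.4e-3])
path = track_fibre(tensor_at, seed=[0.0, 0.0, 0.0])
print(path[-1])  # the track runs straight along the x axis
```

A production tracker would additionally interpolate tensors between voxels and stop on anisotropy or curvature thresholds; those details are omitted here.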
