Overcoming Obstacles in Biomechanical Modelling:
Methods for Dealing with Discretization, Data Fusion, and Detail

by

C. Antonio Sánchez

B.A.Sc., Mathematics and Engineering, Queen's University, 2007
M.Math., Applied Mathematics, The University of Waterloo, 2010

A thesis submitted in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

in

The Faculty of Graduate and Postdoctoral Studies (Electrical and Computer Engineering)

The University of British Columbia (Vancouver)

December 2018

© C. Antonio Sánchez, 2018

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled:

Overcoming Obstacles in Biomechanical Modelling: Methods for Dealing with Discretization, Data Fusion, and Detail

submitted by C. Antonio Sánchez in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering.

Examining Committee:

Sidney Fels, Electrical and Computer Engineering (Supervisor)
Purang Abolmaesumi, Electrical and Computer Engineering (Co-supervisor)
Shahriar Mirabbasi, Electrical and Computer Engineering (University Examiner)
Michiel van de Panne, Computer Science (University Examiner)

Additional Supervisory Committee Members:

Robert Rohling, Electrical and Computer Engineering (Supervisory Committee Member)
Septimiu E. Salcudean, Electrical and Computer Engineering (Supervisory Committee Member)

Abstract

Biomechanical modelling has the potential to start the next revolution in medicine, just as imaging has done in decades past. Current technology can now capture extremely detailed information about the structure of the human body. The next step is to consider function. Unfortunately, though there have been recent advances in creating useful anatomical models, significant barriers still prevent their widespread use.

In this work, we aim to address some of the major challenges in biomechanical model construction.
We examine issues of discretization: methods for representing complex soft tissue structures; issues related to the consolidation of data: how to register information from multiple sources, particularly when some aspects are unreliable; and issues of detail: how to incorporate the information necessary for reproducing function while balancing computational efficiency.

To tackle discretization, we develop a novel hex-dominant meshing approach that allows for quality control. Our pattern-based tetrahedral recombination algorithm is extremely simple and has tight computational bounds. We also compare a set of non-traditional alternatives in the context of muscle simulation to determine when each might be appropriate for a given application.

For the fusion of data, we introduce a dynamics-driven registration technique that is robust to noise and unreliable information. It allows us to encode both physical and statistical priors, which we show can reduce error compared to existing methods. We apply this to image registration for prostate interventions, where only parts of the anatomy are visible in images, as well as to creating a subject-specific model of the arm, where we need to adjust for both changes in shape and in pose.

Finally, we examine the importance of, and methods to include, architectural details in a model, such as muscle fibre distribution, the stiffness of thin tendinous structures, and missing surface information. We examine the simulation of muscle contractions in the forearm, force transmission in the masseter, and dynamic motion in the upper airway to support swallowing and speech simulations.

By overcoming some of these obstacles in biomechanical modelling, we hope to make it more accessible and practical for both research and clinical use.

Lay Summary

Digital models of the human body can be extremely useful in planning and guiding surgeries and other medical treatments.
By combining medical data with our knowledge of how the body behaves and moves, we can use computer simulation to make informed predictions, such as the impact of disease or the response to different treatment options. This would allow doctors to create a personalized treatment plan for each patient. Unfortunately, there are still significant challenges in building detailed patient-specific models, which prevents them from being used on a large scale. In this work, we aim to address challenges in model construction, including: digitally representing complex soft tissue structures; combining medical data from multiple sources; and incorporating necessary detailed structural information in an efficient way. By overcoming the major obstacles in model construction, we hope to make modelling more accessible and practical for both research and clinical use.

Preface

Much of the work contained in this dissertation has been presented and published elsewhere. Content for the chapters, as well as many of the figures, has been reproduced with permission, as detailed below and at the end of each chapter.

Ethics approval for conducting research related to the prostate in Chapter 4 was provided by the UBC Clinical Research Ethics Board (H11-01789). For cadaveric studies involving dissection and digitization of muscle and tendon architectures in Chapter 5, ethics approval was received from the Research Ethics Boards of University of Toronto Health Sciences (P.R. #27210, #28530), Mount Sinai Hospital (12-0252-E), and the University of British Columbia (H12-00130).

The following is a list of publications resulting from the work described in this dissertation.

Journal Publications

[J1] S. Khallaghi, C. Antonio Sánchez, A. Rasoulian, Y. Sun, F. Imani, A. Khojaste, O. Goksel, C. Romagnoli, H. Abdi, S. Chang, P. Mousavi, A. Fenster, A. Ward, S. Fels, and P. Abolmaesumi. Biomechanically constrained surface registration: Application to MR-TRUS fusion for prostate interventions.
IEEE Transactions on Medical Imaging, 34(11):2404–2414, 2015.

[J2] S. Khallaghi, C. Antonio Sánchez, A. Rasoulian, S. Nouranian, C. Romagnoli, H. Abdi, S. Chang, P. Black, L. Goldenberg, W. Morris, I. Spadinger, A. Fenster, A. Ward, S. Fels, and P. Abolmaesumi. Statistical biomechanical surface registration: Application to MR-TRUS fusion for prostate interventions. IEEE Transactions on Medical Imaging, 34(12):2535–2549, 2015.

[J3] A. Fedorov, S. Khallaghi, C. Antonio Sánchez, A. Lasso, S. Fels, K. Tuncali, E. N. Sugar, T. Kapur, C. Zhang, W. Wells, P. L. Nguyen, P. Abolmaesumi, and C. Tempany. Open-source image registration for MRI–TRUS fusion-guided prostate interventions. International Journal of Computer Assisted Radiology and Surgery, 10(6):925–934, 2015.

[J4] S.-J. Ahn, L. Tsou, C. Antonio Sánchez, S. Fels, and H.-B. Kwon. Analyzing center of rotation during opening and closing movements of the mandible using computer simulations. Journal of Biomechanics, 48(4):666–671, Feb 2015.

[J5] C. Antonio Sánchez, J. E. Lloyd, S. Fels, and P. Abolmaesumi. Embedding digitized fibre fields in finite element models of muscles. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2(4):223–236, 2014.

Conference Publications

[C1] J. E. Lloyd, C. Antonio Sánchez, E. Widing, I. Stavness, and S. Fels. New techniques for combined FEM-multibody anatomical simulation. In Proceedings of the Computer Methods in Biomechanics and Biomedical Engineering Conference (CMBBE-2018), 2018. (Best Paper Award)

[C2] C. Antonio Sánchez, Z. Li, A. G. Hannam, P. Abolmaesumi, A. Agur, and S. Fels. Constructing detailed subject-specific models of the human masseter. In T. Arbel, M. J. Cardoso, S. Li, and J. M. R. Tavares, editors, Bio-Imaging and Visualization for Patient-Customized Simulations – MICCAI 2017 Satellite Workshop, volume 1, pages 1–8, Québec City, Québec, September 2017.

[C3] S. Khallaghi, C. Antonio Sánchez, S. Nouranian, S. Sojoudi, S. Chang, H. Abdi, L. Machan, A. Harris, P.
Black, M. Gleave, L. Goldenberg, S. Fels, and P. Abolmaesumi. A 2D-3D Registration Framework for Freehand TRUS-Guided Prostate Biopsy, pages 272–279. Springer International Publishing, 2015.

[C4] I. Stavness, C. Antonio Sánchez, J. E. Lloyd, A. Ho, J. Wang, S. Fels, and D. Huang. Unified skinning of rigid and deformable models for anatomical simulations. In SIGGRAPH Asia 2014 Technical Briefs, SA '14, pages 9:1–9:4, New York, NY, USA, 2014. ACM.

[C5] C. Antonio Sánchez and S. Fels. PolyMerge: A fast approach for hex-dominant mesh generation. In ACM SIGGRAPH 2014 Posters, SIGGRAPH '14, pages 40:1–40:1, New York, NY, USA, 2014. ACM. (ACM Student Research Competition, 1st Place)

Conference Abstracts/Presentations

[A1] L. Sharkey, C. Antonio Sánchez, T. Bhatnagar, S. Fels, and T. Oxland. Determining the relative mechanical properties of grey and white matter in the rat spinal cord. In 23rd Congress of the European Society of Biomechanics, volume 1, pages 1–1, Seville, Spain, July 2017.

[A2] C. Antonio Sánchez, J. E. Lloyd, Z. Li, and S. Fels. Subject-specific modelling of articulated anatomy using finite element models. In 13th International Symposium on Computer Methods in Biomechanics and Biomedical Engineering, volume 1, pages 1–1, Montreal, Québec, September 2015. (CMBBE Student Presentations, 3rd Place)

[A3] C. Antonio Sánchez, I. Stavness, J. E. Lloyd, A. G. Hannam, and S. Fels. Modelling mastication: the important role of tendon in force transmission. In 12th International Symposium, Computer Methods in Biomechanics and Biomedical Engineering, volume 1, pages 146–147, Amsterdam, Netherlands, October 2014.

[A4] C. Antonio Sánchez, I. Stavness, J. Lloyd, and S. Fels. Forward-dynamics tracking simulation of coupled multibody and finite element models: application to the tongue and jaw. In 11th International Symposium, Computer Methods in Biomechanics and Biomedical Engineering, volume 1, pages 412–413, Salt Lake City, Utah, April 2013.

Book Chapters

[B1] P. Anderson, S.
Fels, N. M. Harandi, A. Ho, S. Moisik, C. Antonio Sánchez, I. Stavness, and K. Tang. FRANK: a hybrid 3D biomechanical model of the head and neck. In Y. Payan and J. Ohayon, editors, Biomechanics of Living Organs, chapter 20, pages 413–447. Academic Press, Oxford, 2017.

Author's Contributions

In [J1] and [J2], Khallaghi and I acted as equally contributing authors. My main contribution was the development, mathematical derivation, and implementation of the dynamics-based surface registration techniques, GMM-FEM and SSM-FEM. Khallaghi helped to develop the approach in the context of prostate interventions, and was responsible for tasks related to data collection and processing, method evaluation, and analysis of results. We both contributed equally to the writing of the manuscripts. The remaining authors were involved in supervision, clinical guidance, and data collection. Modified versions of these works are included in Chapter 4 (© IEEE 2015).

In [J3], one of the two methods discussed was developed by Fedorov and his collaborators. The other method, GMM-FEM, was contributed by Khallaghi and myself, and includes further validation compared to [J2]. The remaining authors were involved in clinical evaluation and validation.

For [J4], my contribution was in the implementation of numerical constraints and of techniques for analyzing jaw motion in the hybrid multibody-FEM model. I also consulted in model construction. Co-authors Ahn, Tsou, and Kwon built, ran, and analyzed the model, and Fels acted in a supervisory role.

In [J5], [C2], and [A2], I developed the methods for incorporating digitized fibre and tendon information into finite element models of the muscles in the forearm and the masseter, as well as the template registration technique. Data collection of the fibre and tendon architectures from a cadaveric specimen was performed by Li, but published elsewhere [140].
Hannam manually constructed muscle and tendon surfaces for one of the models of the masseter, and provided useful guidance. The other authors supervised or were consulted in the development of the techniques. The fibre-template registration approach in [A2] placed third in the student research presentation competition at CMBBE 2015. This work is described in detail in Chapter 5 (© Taylor & Francis 2014, © Springer 2017).

For [C1], my contribution was the development of the mathematical framework and implementation for two of the methods discussed in the paper, the unified geometric skinning approach and finite-element embedding, as well as their application to the masseter. Lloyd was responsible for the remaining techniques and implementations, and for leading the work. Lloyd, Widing, and Stavness developed the reduced modelling techniques and applications, and Fels acted in a supervisory role. This work received the best paper award at CMBBE 2018, and parts can be found in Chapter 4.

For [C3], my contribution was in the mathematical derivations and implementation details related to the biomechanical modelling aspects of the slice-to-volume ultrasound registration method. Khallaghi developed and implemented the image-based aspects, and was responsible for data collection, application, and evaluation. The remaining authors contributed to data collection, software integration with supporting tools, supervision, and clinical guidance.

In [C4], my contribution was the mathematical development and implementation of the unified skinning approach. Stavness led the work related to the applications, aided by Ho, Wang, and Huang for the swallowing and speech simulations. Lloyd aided with implementation, and Fels provided supervision and guidance. This work is described in Chapter 5 (© ACM 2014).

The development of the PolyMerge volumetric meshing approach in [C5] was entirely my own, with supervision from Fels.
This work received first place in the ACM Student Research Competition at SIGGRAPH, and can be found in Chapter 3 (© ACM 2014).

For [A1], Sharkey led and contributed most of the work related to simulation of grey and white matter in the rat spinal cord. I acted mainly as a modelling consultant, and aided with the underlying implementation details.

For [A4], my main contribution was the application of the inverse modelling techniques to tongue-tracking. The work was based on previous contributions by Stavness and Lloyd, and I was guided by Stavness and Fels.

In the book chapter [B1], my contribution was the section related to modelling of the masseter muscle, a description of the mathematical modelling and simulation framework in general, and the development of the unified skinning approach used to represent the airway mesh in the model. The other authors each contributed to the book chapter equally, discussing their own modelling focus related to the other components in the upper-airway model.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
Dedication

1 Introduction
1.1 Motivation
1.1.1 Why Physics-Based Modelling?
1.1.2 Biomechanical Modelling in Practice
1.1.3 Discretization: Volumetric Meshing and Alternatives
1.1.4 Data: Dynamics-Driven Registration
1.1.5 Detail: Structural Intricacies and Missing Information
1.2 Contributions
1.3 Outline

2 Background
2.1 Components of a Biomechanical System
2.1.1 Bones
2.1.2 Soft-Tissue
2.1.3 Muscles
2.2 Data Acquisition
2.2.1 Imaging
2.2.2 Segmentation
2.2.3 Dissection and Digitization
2.3 Registration
2.4 Simulation Platforms
2.5 Chapter Summary

3 Discretization: Volumetric Meshing and Alternatives
3.1 Volumetric Meshing
3.1.1 Considerations
3.1.2 Existing Meshing Techniques
3.1.3 PolyMerge
3.2 Alternative Discretizations
3.2.1 Quadratic Elements
3.2.2 Nodal Pressure Formulations
3.2.3 The Embedded FEM Technique
3.2.4 The Element-Free Galerkin Technique
3.2.5 Comparisons
3.2.6 Section Summary
3.3 Chapter Summary

4 Data Fusion: Dynamics-Driven Registration
4.1 Registration Energy Minimization
4.1.1 Similarity Metrics
4.1.2 Transformation
4.1.3 Regularization
4.2 Dynamic Model-Based Registration
4.2.1 Correspondences
4.2.2 Dynamic Modelling
4.2.3 Optimization
4.2.4 Registration Examples
4.3 GMM-FEM: Application to MR-TRUS Fusion
4.3.1 GMM-FEM
4.3.2 Related Registration Methods
4.3.3 Experiments and Results
4.3.4 Section Summary
4.4 Dynamic Statistical Registration
4.4.1 Statistical Shape Models
4.4.2 Dynamic SSM Registration
4.5 SSM-FEM: Partial Surface Registration
4.5.1 Clinical Workflow
4.5.2 Partial Surface to Partial Surface Registration
4.5.3 SSM-FEM
4.5.4 Registration Methods for Comparison
4.5.5 Experiments and Results
4.5.6 Section Summary
4.6 Deformable Registration of Articulated Anatomy
4.6.1 Articulated Registration
4.6.2 Template Model and Digitized Target Data
4.6.3 Articulated Finite Element Model
4.6.4 Registration
4.6.5 Improving Computational Efficiency
4.6.6 Section Summary
4.7 Chapter Summary

5 Detail: Structural Intricacies and Missing Information
5.1 Detailed Muscle Fibre Architecture in FEMs
5.1.1 Finite Element Muscles
5.1.2 Model Construction
5.1.3 Forearm Muscle Simulations
5.1.4 Section Summary
5.2 Detailed Models of the Masseter
5.2.1 Models for Mastication
5.2.2 Data Collection
5.2.3 Finite Element Model Construction
5.2.4 Constitutive Laws
5.2.5 Simulation, Registration, and Results
5.2.6 Section Summary
5.3 Unified Skinning
5.3.1 Geometric versus Dynamic Skinning
5.3.2 Applications to Upper Airway Modelling
5.3.3 Swallowing Simulation
5.3.4 Speech Acoustics Simulation
5.3.5 Limitations and Future Directions
5.3.6 Section Summary
5.4 Chapter Summary

6 Conclusions
6.1 Summary of Contributions
6.2 Future Directions
6.3 Concluding Remarks

References

Appendices

A Mathematical Preliminaries
A.1 Tensors
A.2 Quaternions
A.3 Dual Quaternions

B Dynamics: Hybrid Simulation
B.1 Multibody Mechanics
B.2 Continuum Mechanics
B.2.1 Stresses and Strains
B.2.2 Equations of Motion
B.2.3 Constitutive Equations
B.2.4 Discretization
B.2.5 Discretizing Newton's Laws for a Continuum
B.3 Combined Multibody and Continuum Mechanics
B.3.1 Force Effectors
B.3.2 Damping
B.3.3 Constraints
B.3.4 Energy Minimization
B.3.5 Quasi-Static State
B.3.6 Dynamic Solution
B.3.7 Quasi-Static Solution

C Continuum Mechanics Derivations
C.1 Strain Tensors
C.2 Stress Tensors
C.3 Balance Equations
C.4 Discretization of the Balance Equations
C.4.1 Momentum
C.4.2 Stress
C.4.3 Traction and Body Forces
C.4.4 Linearized and Discretized System for a Continuum

D Identities and Proofs
D.1 Quaternion Derivative
D.2 Derivative of Rotation Matrix
D.3 Gauss's Divergence Theorem
D.4 Strain Energy Density Function
D.5 Jacobian Derivative
D.6 Incompressible Strain Energy Density

List of Tables

Table 2.1 Density of body tissues
Table 3.1 Finite element types
Table 3.2 PolyMerge summary statistics
Table 4.1 TRE results for GMM-FEM
Table 4.2 Statistical significance of GMM-FEM results
Table 4.3 Computation time of GMM-FEM
Table 4.4 TRE results for SSM-FEM comparisons
Table 4.5 TRE results for SSM-FEM by region
Table 4.6 Statistical significance of SSM-FEM results
Table 4.7 Computation time of SSM-FEM
Table 5.1 Material constants for simulations

List of Figures

Figure 1.1 FRANK: A hybrid model of the head and neck
Figure 1.2 Sample volumetric meshes
Figure 1.3 Digitized dissection of human muscles
Figure 2.1 Rigid bones
Figure 2.2 Point-to-point models
Figure 2.3 Muscle fibre pennation
Figure 2.4 Dynamic characteristics of muscle
Figure 2.5 MRI and CT of the head and neck
Figure 2.6 DTI of the brain and tongue
Figure 2.7 MRI and US of the prostate
Figure 2.8 Segmentation of CT and MRI
Figure 2.9 Segmentation surface post-processing
Figure 2.10 Dissection and digitization of the masseter
Figure 3.1 Example finite element meshes
Figure 3.2 Spatial partitioning of hexes and tets
Figure 3.3 PolyMerge: overview
Figure 3.4 Polycube penalty terms
Figure 3.5 Varying the polycube complexity parameter
Figure 3.6 PolyMerge node distribution
Figure 3.7 Tetrahedra-merging patterns
Figure 3.8 Finding opposite tets and nodes
Figure 3.9 Hexahedral face conformity
Figure 3.10 PolyMerge hex-dominant mesh results
Figure 3.11 Hexahedral trimming patterns
Figure 3.12 Cylindrical beam
Figure 3.13 Convergence comparisons
Figure 3.14 Bipennate muscle
Figure 3.15 Bipennate muscle forces
Figure 3.16 Embedded bipennate models
Figure 3.17 Coupled bipennate model
Figure 3.18 Original tongue model
Figure 3.19 Tongue models
Figure 3.20 Tongue protrusion results
Figure 3.21 Sparsity structure comparison
Figure 3.22 Quadratic element inversion
Figure 4.1 Dynamic registration: FEM sphere
Figure 4.2 Dynamic registration: articulated arm
Figure 4.3 Overview of GMM-FEM
Figure 4.4 Prostatectomy data collection protocol
Figure 4.5 Prostate biopsy segmentation
Figure 4.6 Example GMM-FEM registration to full data
Figure 4.7 Example GMM-FEM results for prostatectomy data
Figure 4.8 GMM-FEM results with missing data
Figure 4.9 Example GMM-FEM results for prostate biopsy data
Figure 4.10 Example corresponding fiducial pairs in MR and TRUS
Figure 4.11 Overview of SSM-FEM
Figure 4.12 Prostate SSM modes of variation
Figure 4.13 Ambiguity of prostate boundary in MR and TRUS
Figure 4.14 Example SSM-FEM registration
Figure 4.15 Example of SSM-FEM registration with missing data
Figure 4.16 Example corresponding fiducial pairs in MR and TRUS
Figure 4.17 Articulated model and digitization of the arm
Figure 4.18 Encapsulating fibre surfaces
Figure 4.19 Articulated finite element arm
Figure 4.20 Rigid vs. FEM articulated registration
Figure 4.21 Articulated muscle fibre registration
Figure 4.22 Forearm registration with varying parameters
Figure 5.1 Fibre registration
Figure 5.2 Computation of average fibre direction
Figure 5.3 FDS results
Figure 5.4 Displacement field for FDS
Figure 5.5 ECRL results
Figure 5.6 Displacement field for ECRL
Figure 5.7 Impact of fibre angle on displacement field
Figure 5.8 Finite-element models of the masseter
Figure 5.9 Digitized masseter volumes
Figure 5.10 Force production in the masseter
Figure 5.11 Unified skinning approach
Figure 5.12 Two-way force coupling in a skinned mesh
Figure 5.13 Model of the upper airway
Figure 5.14 Swallowing simulation results
Figure 5.15 Acoustic simulation results
. . . . . . . . . . . . . . . . . . . . . . . 163A Mathematical Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192B Dynamics: Hybrid Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198Figure B.1 Point-force applied to a rigid body . . . . . . . . . . . . . . . . . . . . . . . . . . 199Figure B.2 Lagrangian versus Eulerian coordinates . . . . . . . . . . . . . . . . . . . . . . . 202Figure B.3 Stress on a surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204Figure B.4 Hill musculotendon model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219Figure B.5 Attachments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222Figure B.6 Joint constraint frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224Figure B.7 Collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228C Continuum Mechanics Derivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242Figure C.1 Strain of a rod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242D Identities and Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
List of Abbreviations

AABB   Axis-Aligned Bounding Box
CAD    Computer-Aided Design
CPD    Coherent Point Drift
CT     Computed Tomography
DLB    Dual-Quaternion Linear Blending
DOF    Degree Of Freedom
DTI    Diffusion Tensor Imaging
DWI    Diffusion Weighted Image
ECRL   Extensor Carpi Radialis Longus
EM     Expectation Maximization
FDS    Flexor Digitorum Superficialis
FE     Finite Element
FEM    Finite Element Method (or Model)
FFD    Free-Form Deformation
FLE    Fiducial Localization Error
GMM    Gaussian Mixture Model
ICP    Iterative Closest Point
MFree  Mesh-Free
MI     Mutual Information
MLS    Moving Least Squares
MR     Magnetic Resonance
NMI    Normalized Mutual Information
NURBS  Non-Uniform Rational B-Spline
OBB    Oriented Bounding Box
PCA    Principal Component Analysis
PCSA   Physiological Cross-Sectional Area
PD     Principal Diffusion
RAS    Right-Anterior-Superior
SDM    Statistical Deformation Model
SPH    Smoothed-Particle Hydrodynamics
SSD    Sum of Squared Differences
SSM    Statistical Shape Model
TMJ    Temporomandibular Joint
TPS    Thin Plate Spline
TRE    Target Registration Error
TRUS   Transrectal Ultrasound
UBC    University of British Columbia
US     Ultrasound

Acknowledgements

Much of this work has been supported by the Graphics, Animation and New Media (GRAND) Network Center of Excellence, the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Canadian Institutes of Health Research (CIHR).

In Chapter 3, many of the geometries have been provided freely by outside sources, and used with permission.
The fertility and kitten models were provided courtesy of Utrecht University, and the Cow1 and Rocker-arm models courtesy of INRIA, via the AIM@SHAPE-VISIONAIR Shape Repository. The Stanford Bunny and Dragon models were provided courtesy of the Stanford Computer Graphics Laboratory.

For work in Chapter 5, we graciously thank Benjamin Gilles (INRIA) for helping with the surface-to-surface registrations; James Li, Dongwoon Lee, and Anne Agur (University of Toronto) for the digitized muscle fibre data and wrapping surfaces; and Poul Nielson (University of Auckland) for providing template meshes. We also gratefully acknowledge the Parametric Human group for their input and support.

This work would not have been possible without the invaluable support and guidance from my research supervisors: Prof. Sidney Fels and Prof. Purang Abolmaesumi. Thank you for allowing me the freedom to take these projects in my own directions, and for steering me back when I drifted too far.

To my loving wife, Sarah Elizabeth Costa,
who thought she knew what we were getting into when we met,
and has stuck strongly by my side ever since.
We can now begin the next chapter, together.

CHAPTER 1
Introduction

Biomechanical modelling and computer simulation are becoming invaluable tools in biomedical research. By combining detailed representations of structural anatomy with our knowledge of physics and mechanics, we can make predictions about how the body will move, react, and respond to given stimuli. A biomechanical model tries to capture all this information: it consists of a geometric representation of the anatomy, a mathematical description of how the components move and interact, and physiologically-relevant properties of the biological materials involved. We can use these models to estimate quantities traditionally difficult to measure – such as stresses that develop inside a muscle as it contracts, or the stiffness of deep internal tissues.
Dynamic muscle-driven models are being applied in physiological research to help us understand complex motor-control problems, and in the study of differences between normal and pathological function. We are even starting to see modelling enter clinical practice, in treatment planning and in surgical design or training.

While there have been many advances over the last decade in creating accurate and highly detailed biomechanical models, there are still significant barriers to their widespread use. For one, model creation is an extremely arduous process. It often relies heavily on manual design, and requires substantial expert knowledge – not only regarding the anatomy of interest, but also regarding issues of physics, geometry, numeric stability, and computational efficiency. There are also practicality concerns: applications often impose limitations on time and resources, requiring these models to strike a difficult balance between capturing the desired behaviour and keeping within reasonable constraints. There are even challenges in collecting and consolidating the necessary data. This could be due to technological limitations, such as an inability to see detailed structural information using current standard imaging, or challenges associated with combining data from multiple sources into a common coordinate space.

In this dissertation, we try to remove some of these barriers to biomechanical modelling. We examine issues of geometric discretization: methods for constructing suitable volumetric representations of complex anatomical shapes. We discuss issues of consolidating data: physics-driven methods for mapping information between sources, either within-subject, when data is collected at different times and in different positions, or between-subjects, when we have no practical way to observe the required characteristics.
We also address issues of detail and complexity: methods for efficiently incorporating fine structural information necessary for function, or for filling artificial model gaps due to lack of data or model simplifications. The overall goal of this work is to help make biomechanical modelling more accessible for both research and clinical applications.

Figure 1.1: FRANK: A hybrid biomechanical model of the head and neck [5], used to estimate tissue deformation during a laryngoscopy based on a CT image. The model contains many interconnected components including rigid bones, deformable soft tissues, spring-like muscles and ligaments, and various constraints such as joints and contact.

1.1 Motivation

Dynamic modelling and computer simulation aim to connect measured data to our knowledge of physical processes, in order to further our understanding of the function of complex biomechanical systems. Unfortunately, constructing biomechanical models from measured data is a non-trivial process: we need to extract geometric and structural information, convert them into appropriate numerical representations, determine and incorporate relevant mechanical properties, identify and define constraints and modes of interaction between components, et cetera. Each of these steps presents a set of hurdles that must be overcome in order to create a fully functional and predictive model. In this work we aim to tackle some of these hurdles, presenting applications along the way that best demonstrate these challenges, and our approaches for solving them.

1.1.1 Why Physics-Based Modelling?

Advances in medical imaging have been paramount in pushing forward our understanding of the human body, its structure, and its function [118]. With developments in ultrasound (US) we can safely and cheaply peek beneath the skin surface to examine inner tissue workings in real-time.
With computed tomography (CT) we can capture the entirety of the human body in three dimensions, with very high resolution and strong contrast between bone and soft tissue. Magnetic resonance imaging (MRI) provides us with radiation-free 3D imaging that we can tune to highlight different tissue types [192]. Diffusion-tensor MRI (DTI) allows us to accentuate fibrous structures in the body, such as muscle fascicles in the tongue [78] or axons in the brain [9]. More recent dynamic imaging, such as cine MRI, phase-contrast MRI, ultrasound strain-rate imaging, videofluoroscopy, and dynamic CT, allows us to study biomechanics in action, in vivo. These have been employed to analyze the fast and intricate interactions taking place during swallowing [105] and speech [268]. But while this structural and kinematic data is invaluable to our understanding of the human body, it is only one side of the coin. The other consists of the mechanisms and underlying properties that directly impact function. This is where physics-based modelling will play a crucial role.

By adding our prior knowledge of the mechanical behaviour of bones and soft tissues, we can gain insights that we could not otherwise have garnered from medical data alone. For example, the field of elastography combines dynamic imaging with knowledge of wave propagation in order to estimate a map of a tissue’s mechanical stiffness properties [210, 213]. These properties are not only important in characterizing tissue function, but strong property variations can also be indicative of damage or disease, such as in the breast [222], liver [255], or prostate [39]. Other mechanical variables such as forces and internal stresses also play a major role in pathology. Bruxism, an involuntary grinding of teeth, is known to be a leading cause of craniofacial pain due to the overloading of musculoskeletal tissue [233].
Prolonged periods of physical stress have also been shown to cause tissue damage leading to pressure ulcers in patients with limited mobility [155]. Though forces cannot be imaged directly, they can be estimated by combining mechanical tissue constitutive models with strain or displacement imaging.

Once we have estimates of tissue properties and of the impact of mechanical variables on morphology or function, we can begin to use biomechanical models to make new predictions. Tissue obeys certain physical laws. With reasonable representations of the anatomy, the interactions between components, and estimates of the mechanical driving factors such as muscle contractions or applied external forces, we can use these laws to ‘simulate’ the system forwards in time. By comparing a model’s projections with either follow-up measurements or known consequences, we can formulate and test functional hypotheses. For example, Gick et al. [77] applied a model of the human upper airway to investigate constrictions of the oropharyngeal isthmus involved in speech. The precise mechanisms behind the airway constrictions are poorly understood and difficult to measure, but their model points to two distinct muscle contraction patterns coordinated between the tongue and soft palate. These patterns, and the resulting constrictions, are also consistent with previously unexplained clinical observations. In this way, a simulated model can be used to derive plausible explanations for observed phenomena.

Another key advantage of biomechanical models is that we can digitally alter them to test the consequences of physical modifications, such as tissue stiffening caused by disease and treatment, or surgical alterations. Models of the jaw have been used to analyze the biomechanical consequences of bone resections and reconstructions [88, 227] and of partial joint replacements on chewing [144].
Digital modelling has also been used to predict muscle compensation patterns in order to restore function in the jaw [227] and tongue [201] after surgery. These types of models have the potential to guide treatment planning, shifting the focus from structural restorations to maximizing post-treatment function.

Digital models and computer simulation can have a strong impact on medical research and clinical application. They rely on – and supplement – our current established medical imaging knowledge, and provide predictive power for testing hypotheses and consequences. The challenge, then, is to create an adequate model for a given application.

1.1.2 Biomechanical Modelling in Practice

There is no perfect model that can represent all the intricacies of the human body. Models, by definition, are simplifications of the real world, based on reasonable assumptions, in order to estimate some unknown quantities. The more confidence we have in the assumptions, the more precisely we can rely on the predictions. As an example, Handsfield et al. [85] created a highly detailed model of the Achilles tendon, complete with twisted structural architecture and sliding intra-tendinous interactions. They used the model to study small non-uniform displacement patterns, on the order of millimeters, that were observed via ultrasound. The narrow modelling focus and attention to functionally-related details allowed them to make accurate predictions at that precise scale. On a more macroscopic scale, John et al. [112] used simplified biomechanical models of the entire human musculoskeletal system to study gait patterns in walking, trying to determine the drivers for lateral stabilization. These models consisted only of lumped-mass bones and simplified joints, and treated muscles as point-to-point lines exerting forces between their attachment sites. John et al.
were not seeking to explain the phenomenon with high precision, but were instead looking for insights into motor control strategies.

To address a specific question, the model need only be detailed “enough” to provide reliable insights, or to estimate a quantity within some acceptable bounds. The challenge is to create a model that balances simplicity with the necessary accuracy. Efficiency, in practice, is a major factor to consider when modelling. While a highly detailed finite-element model of the entire head and neck region may technically allow for studying speech production, it is unlikely to be used, since it would be far too computationally demanding. Instead, it may be better to treat bones as rigid structures, each with only 6 degrees of freedom, and some of the muscles as simplified point-to-point segments. Efficiency is especially important when trying to bring modelling to the clinic, where a high throughput is often demanded. On the other hand, the model does need to capture all the relevant features and behaviours. If the fine structural details of a muscle and its interaction with surrounding tendinous tissue are important for determining adequate force production, then that behaviour must somehow be included. Models must be tailored to the application, and so every model will have a different set of requirements.

But despite varying modelling requirements, the construction of biomechanical models follows a very similar process. We will need to:

Discretize the geometry: This could be as simple as defining an orientation and center-of-mass for each bone, or constructing detailed volumetric meshes that conform to the boundaries of soft tissue structures.
Geometry is typically derived from medical imaging and segmentation. For deformable components, the volumes need to be further discretized internally to support numerical integration.

Merge relevant structural and functional data: It is rare that complex models are derived from a single data source. Different imaging modalities highlight different properties of tissue: computed tomography images are excellent for extracting bone geometries, magnetic resonance imaging is better for discerning soft tissues, and ultrasound is best for real-time intra-procedural imaging. Other potential sources of information include physical measurements, laser scans, or manual digitization. To leverage the combined information when constructing a model, we need to transform the data into a common coordinate space. This will inevitably require some form of registration between geometries from multiple sources.

Incorporate necessary detail: Bones and soft tissue have mechanical properties that affect their behaviour. Determining which information is important for the application and how best to include it is a crucial step in modelling. Masses and densities need to be measured or approximated. Deformable components additionally require a full description of their constitutive behaviours, such as the directional fibre field in muscle, which defines its directions of contraction, or the stiffness properties of internal tendinous sheets, which are necessary for transmitting forces.

Each of these steps presents common challenges when creating dynamic biomechanical models of complex anatomical systems.
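To make the discretization step above a little more concrete, element shape can be checked numerically. The following sketch (illustrative Python only, not code from this thesis) computes the signed volume of a single tetrahedral element and a simple scale-invariant quality score, where a regular tetrahedron scores 1 and a flattened "sliver" element scores near 0:

```python
# Illustrative sketch only (not code from this thesis): basic shape checks
# for a single tetrahedral element, of the kind a meshing pipeline might run.
import math

def tet_volume(p0, p1, p2, p3):
    """Signed volume of a tetrahedron; a negative value indicates an
    inverted element (inconsistent node ordering)."""
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p0[i] for i in range(3)]
    c = [p3[i] - p0[i] for i in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det / 6.0

def tet_quality(p0, p1, p2, p3):
    """Scale-invariant quality: actual volume relative to the volume of a
    regular tetrahedron with the same RMS edge length (regular tet -> 1)."""
    pts = (p0, p1, p2, p3)
    edges = [(pts[i], pts[j]) for i in range(4) for j in range(i + 1, 4)]
    rms = math.sqrt(sum(sum((q[k] - p[k]) ** 2 for k in range(3))
                        for p, q in edges) / 6.0)
    # A regular tetrahedron with edge length L has volume L**3 / (6 * sqrt(2)).
    ideal = rms ** 3 / (6.0 * math.sqrt(2.0))
    return abs(tet_volume(p0, p1, p2, p3)) / ideal

# A regular tetrahedron scores ~1; a flattened "sliver" scores near 0.
reg = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
sliver = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 0.01)]
print(round(tet_quality(*reg), 3))   # -> 1.0
print(tet_quality(*sliver) < 0.1)    # -> True
```

Meshing pipelines typically reject or repair elements whose score falls below some threshold, since slivers and inverted elements are precisely the oddly shaped elements that destabilize a simulation.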
In this dissertation, we will address:

• Challenges in constructing suitable volumetric discretizations for irregular and highly deformable soft tissue structures.
• Challenges in combining data from multiple sources, particularly when one or more boundaries of the anatomy are difficult to see in imaging.
• Challenges in morphing systems with constraints to fit new data for creating subject-specific models.
• Challenges associated with incorporating missing essential details, such as structural intricacies that directly impact function, or filling model gaps when water- or air-tightness is required.

These challenges often arise regardless of the application. The methods developed in this work to address them will be applied in several contexts to demonstrate their practical use.

1.1.3 Discretization: Volumetric Meshing and Alternatives

To numerically solve for the physics of a complex three-dimensional structure, we need to discretize the occupied space. Deformable volumes need to be broken down into smaller sub-domains, typically called elements, where we can simplify the equations of motion. Examples of these volumetric meshes are shown in Figure 1.2. The size, shape, and connectivity of the elements have a direct impact on the numerics of the system. Oddly shaped elements can lead to numeric instabilities. Large elements can lead to inaccurate approximations. Highly connected elements can lead to reduced numerical efficiency, and certain element shapes can introduce other artifacts such as artificial stiffening. There are many considerations when constructing a volumetric mesh, and there is not always a straightforward approach for addressing them.

Figure 1.2: Sample volumetric meshes.
A volume is subdivided into small primitive-shaped sub-domains called elements, over which material descriptions are discretized.

Volumetric meshes for biomechanical systems can be onerous to construct because muscles and bones tend to have irregular shapes with twisting, bulging, and diverging topologies. Muscles are particularly demanding because they can undergo large and convoluted deformations, which the mesh must be able to support. To illustrate the challenge in volumetric meshing and our novel solution, we examine a set of irregular volumes typically found in the volumetric meshing literature. We show how we can apply physical principles and efficient numeric algorithms to generate volumetric meshes that conform to the geometry. With our approach, we have tighter control over element quality compared to existing state-of-the-art methods, and we provide a systematic approach for generating the final mesh with explicit computational bounds.

There are alternatives to the traditional volumetric discretization approach. These include higher-order element types or modified integration schemes to reduce numerical artifacts; non-conforming meshes, where the volume of interest is ‘embedded’ in a simpler bounding volume; and mesh-free techniques that instead use point-cloud representations rather than disjoint elements. We will examine the use of these alternative discretization strategies in the context of simulating incompressible tissue, and present biomechanically-inspired tasks to evaluate them. We discuss the advantages and drawbacks of the techniques, and when they may be suitable for a given application.

1.1.4 Data: Dynamics-Driven Registration

Registration is the process of transforming sets of data into a common coordinate space.
There are two main applications of registration in biomechanical modelling: data fusion, combining information from multiple sources such as two different imaging modalities; and subject-specific modelling, where an existing model is adapted to fit completely new data. We will discuss both of these applications.

Surface-based data fusion

Most of the data required for modelling will come from medical imaging. Each modality has its advantages and limitations, and is often best suited for capturing a specific type of information. CT is ideal for extracting bone geometries, MRI for soft tissues, and US for real-time measurement of motion or deformations. A model will therefore likely rely on information acquired from different sources, at different times, and in different subject positions or poses. To combine this information into a single model, the data needs to be mapped, or ‘fused’, together into a common space. Not only will registration need to account for a basic change of coordinates between images (rigid registration), but also for any deformations of the soft tissues (non-rigid registration).

A common approach for registering data between two sources is to trace the boundaries of the anatomy of interest, a process known as segmentation, and move or deform the volume to bring these boundaries together. However, because the various imaging techniques highlight different aspects of the tissue, it is often difficult to precisely see full and consistent boundaries. Thus the registration method must be sufficiently robust to account for inconsistencies and for unreliable or missing information. Here is where biomechanics can help. Apart from the image and boundary information collected, we often have some other ‘outside’ knowledge of the system. For example, if it is the same physical structure in two images, then its motion or deformation is governed by laws of mechanics.
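As a concrete reference point for the rigid part of this mapping, the classical closed-form alignment of two point sets with known correspondences (the Kabsch/Procrustes solution) can be sketched as follows. This is a minimal illustration in Python with NumPy, not the deformable, dynamics-driven method developed in this thesis:

```python
# Minimal sketch of the *rigid* component of registration: the closed-form
# Kabsch/Procrustes alignment of two point clouds with known correspondences.
# Illustration only; the registration methods in this thesis are deformable
# and dynamics-driven, and go well beyond this step.
import numpy as np

def rigid_register(source, target):
    """Return (R, t) such that R @ p + t best aligns source points p to target."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: rotate and translate a point set, then recover the transform.
rng = np.random.default_rng(0)
pts = rng.standard_normal((20, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true
R, t = rigid_register(pts, moved)
print(np.allclose(R, R_true) and np.allclose(t, t_true))   # -> True
```

Non-rigid registration generalizes this idea by allowing the transformation to deform the geometry, which is exactly where the physical and statistical priors discussed here become valuable.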
We may also have population-derived data, such as statistical representations of the anatomy, that can be incorporated. We will show that by leveraging these types of information, we can improve the accuracy of the data fusion process.

An interesting application for segmentation-based data fusion is in modelling applied to prostate cancer interventions. Prior to an intervention such as brachytherapy, biopsy, or a prostatectomy, the prostate is often imaged for planning purposes. This data could be an MRI or high-resolution ultrasound. During an intervention, however, the procedure can only be guided using a transrectal ultrasound (TRUS) probe. As the probe is inserted, it pushes on the prostate, causing it to move and deform. The prostate boundary in TRUS is often noisy and has poor contrast, making its full boundary difficult to discern. The challenge is then to map the intra-procedural data to the original 3D plan. We will apply our novel surface-based registration techniques to map data from multiple sources together, and demonstrate a reduction in registration error for landmarks located within the prostate volume.

Physically-constrained registration

Once we have a biomechanical model complete with multiple interacting components and constraints, we may need to transform it to fit new information. This could either be data from the same subject in a different position or pose, or from a completely new subject, where we also need to account for variations in shape. Adjusting an existing model removes the burden of having to construct an entirely new one from scratch for every application. During this process, it is often beneficial to consider the model-imposed constraints while performing the registration.

We investigate the application of registering a complete model of the arm to data collected from a dissection and digitization study (Figure 1.3) [140].
This application is particularly interesting because the arm, wrist, and hand have many points of articulation, and the data is from a completely different subject. Therefore, we need to account both for differences in shape of the individual muscles and bones and for differences in pose. If we were to register the model’s components one-by-one, we would need to redefine all the connections, constraints, and interactions between them; by registering a complete constrained model, these are automatically transferred. Furthermore, since we already have a model tuned to behave realistically, we can exploit its physics: an elbow should not need to bend backwards to fit another elbow, and a person’s fingers should not swap positions to fit another hand. We show that by taking advantage of the physical constraints in the original model, we can more robustly perform the model registration.

1.1.5 Detail: Structural Intricacies and Missing Information

When assembling a biomechanical model, it is important to include all the necessary details that significantly impact the function being examined. Sometimes, however, essential structural or functional details cannot be observed. As an example, consider the masseter muscle depicted in Figure 1.3. The masseter is the largest muscle involved in chewing (i.e. mastication) and is the strongest muscle per unit volume in the human body [62].

Figure 1.3: Digitized representations of human muscles, from cadaveric dissection studies [139, 140]. (a) Forearm. (b) Masseter. The arm contains many bones, joints, and muscles, which makes registration to other poses or subjects a challenge. The masseter contains several compartments of muscle tissue, separated by interleaving aponeurotic sheets. The various colours represent distinct muscles (arm) or muscle compartments (masseter) inserting onto different tendon structures.
Internally, it has a complex interleaving of muscle fascicles and sheets of aponeuroses, which act to increase the muscle’s mechanical advantage and are likely responsible for its strength. Unfortunately, the details of the muscle fascicles are far too small, and the aponeuroses far too thin, to appear in conventional MRI or ultrasound. How can we incorporate this missing information into a model?

We will examine methods for mapping structural details from other data sources, such as cadaveric studies, to fill in crucial information necessary for simulating function. We then show how to include this in the model in such a way as to balance functionality with simplicity and computational efficiency. We apply these approaches in the context of the forearm, where we have a large number of distinct intertwined muscles, and in the masseter, where we demonstrate the importance of these details for dynamics.

Another important aspect of absent or incomplete information is in dealing with artificial gaps between model components. For example, consider a model of the upper airway to be used in the simulation of swallowing [5]. The model may contain all the major components involved, such as the jaw, tongue, palate, pharynx, larynx, et cetera, but to support a simulation studying the swallowing of fluids, the geometry needs to be water-tight. Any missing feature, such as the tonsils or mucosa layers, though perhaps not functionally relevant to the application, may in its absence leave holes or discontinuities in the model. We discuss a method for handling these gaps by providing a ‘wrapping’ layer that conforms to the geometry that is present, and smoothly covers any false openings otherwise.
The intermediate layer is able to transmit pressures and forces between the existing model components and the airway, allowing for a fully-coupled simulation despite the incomplete information.

1.2 Contributions

The contributions contained within this dissertation involve developing state-of-the-art methods for constructing, registering, and simulating biomechanical models. Each method is focused on addressing a particular challenge faced during the modelling process.

Discretization: volumetric meshing and alternatives

• Developed a novel method for hex-dominant meshing. We designed and implemented a method for automatically generating hex-dominant volumetric meshes of complex geometries for use in finite-element simulations. This includes a novel approach to finding a low-distortion approximate polycube deformation map for estimating regular structure within the volume, and an extremely efficient tet-to-hex recombination scheme for constructing a final hex-dominant mesh.

• Adapted and compared alternative discretization strategies for incompressible soft tissues. In the context of simulation of soft tissue, and muscle in particular, we derive and implement alternative discretization strategies and compare them to the standard finite element approach: nodal incompressibility for tetrahedral models; quadratic tetrahedral elements; the non-conforming 'embedded' finite element method; and the mesh-free Galerkin technique. We show that all these techniques are able to converge to the same solution at sufficiently high resolutions, and analyze their convergence rates. We then discuss some of the advantages and disadvantages of each approach, suggesting when each may be most appropriate for a given application.

Data fusion: dynamics-driven registration

• Developed a model-based method for surface-to-surface registration.
We designed a robust method for deformably registering two anatomical surfaces through the use of a dynamic model, where one surface may contain unreliable or missing sections. We apply the algorithm to the prostate, where it is used to merge information from two data sources: high-resolution magnetic resonance imaging, and lower-resolution transrectal ultrasound. We validate the method using data obtained from patients undergoing either a prostate biopsy, a prostatectomy, or brachytherapy.

• Extended our dynamic registration to include statistical information. We extend our dynamics-driven registration algorithm to register multiple 'partial' observations, using a statistical shape model to estimate a shared subject-specific reference geometry. The ability to register multiple partial surfaces reduces the effort required in segmentation, allowing the approach to be applied more readily in practice. This method is also applied in the context of image fusion for prostate interventions.

• Developed a deformable registration approach for articulated anatomy. Extending our work on registering single anatomical structures, we develop a technique to handle more complex models of anatomy, involving articulation and allowing for differences in both shape and pose. We apply this method in registering a complete model of the arm to new detailed muscle architectural data obtained from a series of dissection studies. This not only provides us with a subject-specific model instance, but also allows us to augment the model with the new muscle fibre information.

Detail: structural intricacies and missing information

• Developed a workflow for incorporating realistic muscle architecture into volumetric models. We developed a framework for mapping detailed muscle fibre fields, digitized from a human cadaver, to subject-specific muscle geometries, and to use them to define the internal muscle structure in models.
We show that using these detailed fields, as opposed to simplified fibre distributions, can lead to significant differences in predicted force and contraction geometries.

• Demonstrated the importance of muscle and tendon architecture for force transmission in mastication. We examine the structure of the human masseter, and show the importance of capturing both the complex muscle fibre architecture as well as the interleaving sheets of aponeuroses when modelling force transmission. We also demonstrate a simulation technique for efficiently incorporating the thin tendinous structures into a model using separate mechanically-coupled muscle and tendon representations.

• Developed a method for covering model gaps to create a mechanically-coupled enclosed surface. We developed a method for covering gaps in a model with a geometric 'skin', providing a water-tight seal around components. This skin is capable of transmitting forces to and from its driving components, permitting the simulation of fully-coupled air or fluid flow within the model.

In addition to presenting and publishing our work in conferences and journals, we also place an emphasis on disseminating knowledge and research through open-source development. Through the course of this work, we have made large contributions to the free simulation toolkit ArtiSynth (https://www.artisynth.org) [68, 148] to support adoption and continuation of the research methods and tools. All of the methods developed in this work are included in ArtiSynth, which is released under a BSD license for both academic and commercial use. Some of the work involved in prostate interventions is also released as a stand-alone Matlab® tool [67].

1.3 Outline

The remainder of this dissertation is structured as follows.
In Chapter 2, we provide some background information on modelling, the types of components involved, required data, and some useful tools for model building and simulation.

The subsequent chapters each address one of the three aforementioned topics in biomechanical model construction. In Chapter 3 we examine volumetric discretization, introducing our novel hex-dominant meshing algorithm, as well as comparing a set of discretization alternatives in the context of the simulation of incompressible muscle tissue. In Chapter 4 we develop our dynamic registration approach to address issues relating to data, either for fusion of information from multiple image sources, or for adapting a model to information from a new subject. We apply our approach in the context of image fusion for prostate interventions, and to model registration of the articulated arm to digitized muscle and bone geometries. In Chapter 5 we tackle issues related to model detail: the need for high-fidelity muscle fibre and tendon information, how to map this data to subject-specific geometries, and how to efficiently incorporate it into biomechanical models. We also discuss how to cover model gaps to create air-tight dynamics-driven geometries that permit fully-coupled fluid simulations.

Finally, in Chapter 6 we summarize our contributions, briefly describe important directions of future work, and provide concluding remarks.

The appendices are used for supplementary content, including mathematical notations and definitions (Appendix A), complete details of our hybrid multi-body and continuum dynamic simulation framework (Appendix B), a derivation of the equations governing continuum mechanics consistent with this work (Appendix C), and proofs of certain useful identities (Appendix D).

CHAPTER 2
Background

Creating a model of a biomechanical system involves identifying the relevant components, collecting appropriate data, and using tools to design, construct, and simulate the model.
In this chapter, we provide an overview of these steps and review related material. We begin by breaking down the components of a typical biomechanical system and their typical numerical representations. We then discuss how the required data can be collected. We end the chapter by discussing available simulation platforms and tools.

2.1 Components of a Biomechanical System

A typical biomechanical model consists of bones, muscles, ligaments, and other soft tissues. These are coupled together using constraints – such as joints and attachments – to create a complex dynamic system.

2.1.1 Bones

Bones support the body and protect our internal organs. They are considered a form of 'hard tissue'. While bones can bend or deform slightly in practice, they are often treated simply as solid structures in dynamic simulation: the amount of deformation is usually negligible when compared to their overall motion and to the deformation of the surrounding soft tissues. Bones provide a structural frame for other organs to attach to. By adding motion constraints between bones in the form of joints, we can restrict large-scale motions of the anatomy.

In biomechanical simulation, a bone is typically characterized by its geometry, pose, mass, and moment of inertia.

Geometry

Bone geometry is usually represented by a three-dimensional mesh composed of a set of connected polygons – such as triangles or quadrilaterals – that define the bone's surface. These polygons are connected in such a way as to define a closed volume, with water-tight connections between them. The connected corners of the polygons are referred to as vertices, and the polygons themselves as faces. Most polygonal surface meshes are extracted via image segmentation (Section 2.2.2), though in some cases they are manually constructed based on anatomical knowledge and references.

Figure 2.1: Rigid bones. The surface geometry of a bone is described by a set of connected polygons called a polygonal mesh. The vertices of this mesh are described in body coordinates with respect to the bone's pose. A point pB in body coordinates can be expressed as pW in world coordinates through the pose transform TWB.

The bone surface geometry not only allows for visual rendering, but can also be used as part of the physics simulation: for computing moments of inertia by integrating a density function within the enclosed volume, in computing points of attachment with muscle and tendon tissues, and for detecting collisions with other structures.

Pose

The bone's geometry is described with respect to some initial coordinate frame, typically centered at the bone's center of mass. Coordinates described relative to this frame are referred to as body coordinates. Since a bone is treated as rigid, the body coordinates of its geometry, as well as anything physically attached to it, remain fixed (i.e. constant).

The pose of a bone describes the three-dimensional location and orientation of the body's reference frame with respect to a fixed world reference frame. As a bone moves through space, its pose changes to reflect the new state. The pose can be used to transform the bone's geometry, as well as any attachments, into world coordinates (Figure 2.1).

Mass and Inertia

The mass of a bone can be computed from the bone's geometry and its density. There are two types of bone tissue, with differing densities: compact (or cortical) bone, which is very dense and forms a protective shell; and cancellous (or trabecular) bone, which is sponge-like and contains the marrow. Compact bone has been measured to have a density of approximately 1900 kg/m3 on average [30], and cancellous bone to be in the range of 100 kg/m3 to 700 kg/m3 depending on the site [184]. For simplicity, bone is often considered to have a uniform density of approximately 1500
The total mass of the bone can therefore be approximated by the produce of its volumeand this constant density.The moment of inertia, a rotational analogue of mass, is a function of mass distribution, so canbe computed based on the bone’s geometry and density. Since a bone is considered rigid, we cancompute a fixed moment of inertia in body coordinates.2.1.2 Soft-TissueSoft tissue includes organs, tendons, ligaments, fat, muscles, and anything that can substantiallydeform. Like bones, soft tissues require a geometric representation, as well as a description of thephysical behaviour of the tissue.GeometryIf the particulars of the geometry are much less important than the overall net forces soft tissuesapply, then they may be represented using simplified point-to-point force effectors. For example,the action of a tendon or ligament may be adequately reproduced by a single one-dimensionalspring acting between two points of attachment. However, if the geometric details are importantfor the application, or if the behaviour cannot be accurately represented by such a simplified lumpedmodel, then volumetric approaches such as the finite element method are required.Point-to-Point ModelsIn point-to-point models, the action of the soft tissue is represented by spring-like forces actingbetween pairs of points (Figure 2.2). To allow for more complex contraction and force distributionpatterns, such as those exhibited by the broad muscles in the back [161], the tissue can be definedby a collection of segments which can be activated differentially. For longer tissues that wraparound other structures, such as the triceps brachii in the arm or pectorals in the chest [214],the muscle can be divided into a multi-point segmented path. Smoothness can be accomplishedby interpolating a set of splines, leading to more accurate wrapping of muscles around bones andjoints. 
This approach has been applied in modelling the forearm and hand to demonstrate force transmission and the sliding action of musculo-tendon structures over bone surfaces [231].

The benefit of these simplified models is ease of construction and computational efficiency. They act mainly as force effectors, adding a minimal number of degrees of freedom to be solved during the course of a simulation. They are typically assumed to be attached to bones at the centroids of their attachment sites, from origin to insertion. The main drawbacks relate to the accuracy of the representation, and to the limited amount of internal information they provide. For example, the tongue cannot be accurately represented using any collection of line segments, since its motion is mainly driven by volumetric effects [226]. Force distribution from the masseter muscle also strongly depends on volume-dependent characteristics, as well as the interactions between internal muscle and tendon tissue, which are not well described by point-to-point representations [205]. These simplified models are also not able to predict deformation patterns, or stresses within a tissue which might be of importance to the application, such as the development of pressure ulcers in the buttocks [155]. For these, we need volumetric representations of the tissue.

Figure 2.2: Point-to-point models. Muscles, ligaments and tendons can be represented by simplified force effectors. Examples include models of the jaw [86], back [161], and shoulder [214].

Volumetric Models

For a more detailed representation of a soft tissue, a volumetric description can be incorporated into a model.
Much like with bones, the geometry can be represented using a mesh-like structure. However, a polygonal mesh alone is usually insufficient: we are also interested in how the tissue interior behaves, since soft tissues can bend and deform.

The most common approach to representing a volume is to divide it into a set of simplified non-overlapping sub-volumes, such as polyhedra, forming a polyhedral mesh, or more generally, a volumetric mesh (Figure 1.2). In the case of the finite element method, the sub-volumes are called elements and the corners between them called nodes. Quantities throughout the volume are then interpolated based on values at the nodes. The outer-most faces – those not adjacent to other elements – define the surface of the tissue. The location of the tissue in space is thus described by the collection of positions of all the nodes, and their connectivity.

To construct a volumetric mesh, we first need a surface description, usually in the form of a polygonal mesh obtained via segmentation from imaging. Based on this surface, the volume is then divided into elements using one of many algorithms or tools, such as TetGen [220], CUBIT [263], ANSYS Meshing [7], Mimics [165], or Synopsys Simpleware [234], to name a few. The process of volumetric meshing will be discussed in more detail in Chapter 3, but remains one of the major hurdles in the widespread adoption of volumetric simulation techniques.

With volumetric representations, we can now estimate quantities throughout a volume, as well as define spatially-varying properties that affect the behaviour of the tissue. However, this comes at a high cost of computational complexity. Each volumetric node adds three degrees of freedom to the model that needs to be solved, and single tissue volumes may each consist of thousands of nodes.
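To make the nodal interpolation concrete, the sketch below evaluates a field inside a single linear tetrahedral element using barycentric coordinates. This is a minimal illustration of the interpolation idea, not any particular simulation library's implementation; the function names are our own.

```python
def tet_volume(a, b, c, d):
    """Signed volume of the tetrahedron with vertices a, b, c, d."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return det / 6.0

def interpolate_in_tet(p, nodes, values):
    """Linearly interpolate nodal values at a point p inside a linear
    tetrahedral element, using barycentric weights computed as ratios
    of signed sub-tetrahedron volumes."""
    a, b, c, d = nodes
    vol = tet_volume(a, b, c, d)
    weights = [tet_volume(p, b, c, d) / vol,
               tet_volume(a, p, c, d) / vol,
               tet_volume(a, b, p, d) / vol,
               tet_volume(a, b, c, p) / vol]
    return sum(w * v for w, v in zip(weights, values))
```

The same weights serve for any nodal quantity – positions, displacements, or material properties – which is what makes the nodal values the element's only degrees of freedom.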
Values such as densities and strains also need to be numerically integrated over the volume to compute the dynamics of the system.

Dynamics

Like bones, soft tissue motion is affected by mass and inertia. In addition, soft tissues exert internal forces as they stretch and deform. The deformation-force relationship is described by a constitutive law for the material, which will involve a set of stiffness parameters.

Point-to-point models

In the case of point-to-point models, the tissue's mass is typically lumped into the mass of the attached bones (e.g. [276]). This helps account for any momentum and inertial effects. As the tissue stretches, a tension develops within it that resists further stretching. This tension results in a force applied at its points of contact with the rest of the model, including attachments and any wrapping points around bones for multi-point representations. The amount of tension/force is governed by material-dependent stiffness parameters, which are often found experimentally by measuring the tension exerted by the tissue under varying loads (see [60]).

Volumetric models

For volumetric models of soft tissue, the mass distribution within the volume can be defined by a density function. For most tissues, due to their high water content, that density is typically around 1000 kg/m3. Fat contains a little less water, so is usually less dense, at around 900 kg/m3 [64]. See Table 2.1 for more common density values. The density function can either be integrated as part of the numerical simulation to determine the total time-dependent mass (consistent mass matrix), or the mass can be assumed to concentrate at the nodes of the representation (diagonally-lumped mass matrix).

Table 2.1: Density of body tissues

  Type     Avg. Density    Source
  Water    1000 kg/m3
  Blood    1050 kg/m3      [94]
  Bone     1500 kg/m3      [30, 184]
  Skin     1090 kg/m3      [106]
  Fat      910 kg/m3       [64]
  Muscle   1050 kg/m3      [260]

For computing elastic forces, the continuous analogue of tension is stress, and the continuous analogue of stretch is strain. The constitutive law of a soft tissue describes its stress-strain relationship (Section B.2.3). We can determine net forces within a volume by numerically integrating the stress throughout the material. These forces affect the positions of the nodes within the model, as well as cause the tissue to exert forces externally between the surface and tissue attachments.

2.1.3 Muscles

Muscle is a soft tissue with a special composition and structure giving it the ability to contract when activated. In vertebrates, muscle can be classified into one of three distinct categories: cardiac, smooth, and skeletal. Cardiac and smooth muscles make up the walls of the heart, organs, and blood vessels. They are involuntarily controlled by the autonomic nervous system, and are responsible for many life-critical processes of the human body, such as controlling the digestive, respiratory and circulatory systems. Skeletal muscles, on the other hand, are controlled by the somatic nervous system. As the name suggests, they are attached to bones (often through tendons) and are responsible for skeletal posture and locomotion, our main interest in this work. In what follows, we will focus on skeletal muscle.

Structure

Skeletal muscle is a striated soft tissue. At the largest scale, the bulk of the muscle is known as the muscle belly. This is made up of cylindrical bundles called fascicles, which are each composed of many individual muscle fibres, called myofibers, which in turn are composed of parallel protein structures called myofibrils, each of which is made up of serially connected contractile units called sarcomeres.
The sarcomeres are responsible for force generation when chemically activated [89, 104]. All sarcomeres along a fibre are theorized to contract at the same time, so that a uniform force is transmitted along the entire length of the fibre.

Figure 2.3: Muscle fibre pennation: (a) fusiform; (b) unipennate; (c) bipennate; (d) multipennate.

The striated structure is important to consider when modelling the behaviour of muscle tissue. Since the sarcomeres are arranged along the direction of muscle fibres, forces are also generated along these lines. Thus, the fibre architecture plays an important role in the muscle's behaviour. Different fibre arrangement patterns can lead to different force-generating capabilities. It has been shown that fibre arrangement can affect muscle speed and force output by as much as 10–20 times [141]. The acute angle between the muscle fascicles and their tendon attachment is known as the pennation angle. This angle modifies the force-transmission characteristics of the tissue. Muscles composed of fibres with non-zero pennation angles are known as pennate muscles. Examples of various pennation patterns are illustrated in Figure 2.3. For a given muscle size, a higher pennation angle results in a higher maximum output force, but a slower speed of contraction and a reduced range of motion.

Dynamics

The properties of muscle are often studied under two separate conditions: isometric contraction, where the steady-state muscle force is measured for a given fixed muscle length and level of activation; and isotonic contraction, where an activated muscle is subjected to a constant tension and the muscle shortening (or lengthening) velocity is measured. These two experiments form the basis of the Hill-type model [93].
Most existing muscle models used in dynamic simulation are based on Hill-type descriptions.

By stretching a muscle and measuring the net tension, a force-length (F-L) curve can be constructed to represent the steady-state behaviour. This experiment can be repeated for both completely passive and fully activated muscle tissue. The difference between the two resulting F-L curves is called the active muscle force. The length at which the active muscle force peaks is the optimal muscle fibre length, ℓM0, and the net force at this optimal length is the peak isometric active force, FM0. An example of normalized force-length curves is presented in Figure 2.4a. When a collection of muscle fibres is less than fully activated, the active muscle force is often assumed to be linearly scaled by the activation fraction [264, 275]. However, several experiments have shown that the scaling of the active F-L curve is often accompanied by an activation-dependent shifting [103, 147, 196].

Figure 2.4: Dynamic characteristics of muscle: (a) force-length curve, showing the passive, active, and total components; (b) force-velocity curve, showing the shortening and lengthening branches. The combination of the force-length and force-velocity curves makes up the normalized Zajac model [275]. To create a muscle-specific model, these curves are scaled by the muscle's maximum isometric force (FM0), the optimal fibre length (ℓM0), and the maximum shortening velocity (vMmax).

When activated muscle is subjected to a constant tension (isotonic contraction), it will quickly shorten or lengthen until it reaches a steady-state that balances the applied forces. By estimating the shortening velocity for a number of different tensions, a force-velocity (F-V) curve can be constructed at a particular fibre length (Figure 2.4b). At optimal fibre length, a maximum shortening velocity, vMmax, can be estimated.
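These normalized curves are straightforward to evaluate numerically. The sketch below uses one common set of approximations – a Gaussian-style active force-length curve, as in Thelen-type models, and a Hill hyperbola for the shortening branch – with illustrative constants; published models differ in their exact curve shapes and parameters.

```python
import math

def active_force_length(l_norm, gamma=0.45):
    """Normalized active force-length curve; peaks at the optimal
    fibre length (l_norm = 1). Gaussian-style approximation."""
    return math.exp(-((l_norm - 1.0) ** 2) / gamma)

def force_velocity(v_norm, k=0.25, f_len=1.4):
    """Normalized force-velocity curve, with v_norm = 1 corresponding to
    the maximum shortening velocity vMmax. Shortening (v_norm > 0)
    follows a Hill hyperbola; lengthening saturates toward f_len."""
    if v_norm >= 0.0:  # shortening
        return max(0.0, (1.0 - v_norm) / (1.0 + v_norm / k))
    return f_len - (f_len - 1.0) / (1.0 - v_norm / k)  # lengthening

def fibre_force(f_max, activation, l_norm, v_norm):
    """Active fibre force, linearly scaled by the activation fraction."""
    return f_max * activation * active_force_length(l_norm) * force_velocity(v_norm)
```

At the optimal length and the maximum shortening velocity vMmax (l_norm = 1, v_norm = 1), the active force drops to zero, while at zero velocity the model reduces to the isometric force-length curve.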
This is the velocity at which the muscle fibre can sustain no force. The F-V relationship is often assumed to be independent of muscle fibre length [166], and to be scaled by the level of muscle activation [275].

Measuring the contraction force of a specific muscle is a challenge, since muscles rarely act independently or in isolation. Instead, the maximum isometric contraction force is most often estimated using the muscle's physiological cross-sectional area (PCSA) [27]. PCSA is the area of a cross-section that is perpendicular to the muscle's fibres. Since fibres contract in parallel, the net force transmitted is approximately this PCSA times the density of muscle fibres times the tension per fibre. The fibre density times tension-per-fibre is referred to as the specific tension of the muscle (σM0), and has been shown to be consistent between subjects and across ages for an individual muscle type [183]. This leads to the approximate maximum force equation

    FM0 ≈ PCSA × σM0 ≈ vol(M) × σM0 / ℓM0.

We can estimate this by measuring the volume of the muscle and finding a reference for the specific tension value for the muscle of interest.

There have been several modifications to this standard model. Typical ones include: adding a series tendon component [275]; linearizing the force-length curve [188, 275]; and assuming contraction is "slow enough" that the force-velocity curve can be ignored [86, 188, 231]. Thelen et al. [249] modified Zajac's normalized model to include age-specific parameters. To account for observed non-linearities in the force-activation profile, Lloyd et al. [147] introduced an activation-dependent shifting of the active force-length curve.

For volumetric models, the same force-length and force-velocity relationships are typically used, except they are applied to normalized strains along the fibre direction rather than to length.
This is simply the continuous approximation to the large-scale behaviour.

2.2 Data Acquisition

To construct biomechanical models, we usually need to start with data on which to base them. A typical workflow will begin with imaging to gather structural and geometrical descriptions. From the images, we identify and delineate the relevant components through a process known as segmentation. We may also need to gather other relevant information not captured by imaging, such as physical measurements.

2.2.1 Imaging

Computed Tomography

In CT, a series of X-rays is sent through the body, acquired at multiple angles as the scanner rotates around the body. The X-rays are reflected and absorbed by matter in bones and tissue, so each signal received at the detector represents a form of integration of an "attenuation" function. By acquiring measurements at many angles, a three-dimensional map of the attenuation can be reconstructed, measured in Hounsfield Units (HU).

The benefit of CT is that it provides strong contrast and high resolution images. Bone tissue has a particularly high X-ray absorption rate, making this type of imaging ideal for extracting bone geometries. However, CT is rarely used on healthy subjects due to the large amount of radiation exposure. A single CT image of the head or chest exposes a subject to the equivalent of six months to two years of natural radiation exposure.

Figure 2.5: MRI and CT of the head and neck. Data courtesy of Dr. Anne Agur, Division of Anatomy, Department of Surgery, University of Toronto.

Magnetic Resonance Imaging

MRI measures the magnetic decay rates of hydrogen atoms after they have been aligned with a strong background magnetic field. In essence, this measures water density throughout the body. Since the soft tissues of the body have a very high water content, MRI is well-suited for highlighting their geometry.
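As a brief aside on the Hounsfield scale used in the CT subsection above: by definition, water maps to 0 HU and air to -1000 HU, so a reconstructed linear attenuation coefficient converts to HU as follows (the water attenuation value in this sketch is illustrative, and depends on the scanner's effective energy).

```python
def hounsfield(mu, mu_water=0.19, mu_air=0.0):
    """Convert a linear attenuation coefficient (1/cm) to Hounsfield
    Units. Water maps to 0 HU and air to -1000 HU by definition;
    mu_water = 0.19 here is only an illustrative value."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)
```

On this scale, soft tissues cluster near 0 HU while cortical bone reaches several hundred HU and above, which is what makes simple intensity thresholds effective for bone.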
Notice in Figure 2.5 that the MRI image shows many of the details of the brain, neck and tongue, but that areas of bone, such as the outline of the skull, or any cavities, such as the space between the tongue and hard palate, appear black. The CT image, on the other hand, highlights the outline of the skull, jaw, and spine, but the soft tissues appear with a uniform texture.

MRI allows for high resolution images and is radiation-free. The parameters of the scan can also be tuned to highlight different tissue types (i.e. by changing the timings to highlight T1-decay versus T2-decay). The main drawbacks are the lack of observable skeletal information, and the long image-acquisition times, which make dynamic imaging a challenge.

Diffusion Tensor Imaging

Diffusion tensor imaging (DTI) is a form of MRI that measures the diffusion of hydrogen atoms. Its purpose is to highlight fibrous structures within soft tissue, such as muscle fibres, blood vessels, nerves, and neurons, along which we should observe higher rates of diffusion. DTI works by applying magnetic gradients to encode the motion, and acquiring multiple images to detect diffusion along each of a set of axes. The result is a symmetric diffusion tensor at each voxel that can be visualized using ellipsoids, with the major axis defining the principal direction of flow.

Figure 2.6: DTI of the brain and tongue. The left column depicts the diffusion-weighted images (DWI), the middle the principal diffusion (PD) images, and the right the diffusion tensors. Colours represent the axis of principal diffusion (red: medio-lateral, green: antero-posterior, blue: longitudinal) multiplied by the fractional anisotropy. Data courtesy of Dr. Jonghye Woo, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School.

As seen in Figure 2.6, DTI can effectively highlight the fibrous structure of the brain, and somewhat the muscle structure within the tongue.
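The per-voxel diffusion tensor can be summarized numerically. The sketch below computes the fractional anisotropy directly from tensor invariants, and the principal diffusion axis (e.g. the local fibre direction) by power iteration; this is a plain-Python illustration, whereas DTI toolchains use optimized linear algebra.

```python
import math

def fractional_anisotropy(D):
    """FA of a 3x3 symmetric diffusion tensor, from tensor invariants:
    FA = sqrt(3/2) * ||D - MD*I||_F / ||D||_F, where MD is the mean
    diffusivity trace(D)/3."""
    md = (D[0][0] + D[1][1] + D[2][2]) / 3.0
    dev = sum((D[i][j] - (md if i == j else 0.0)) ** 2
              for i in range(3) for j in range(3))
    norm = sum(D[i][j] ** 2 for i in range(3) for j in range(3))
    return math.sqrt(1.5 * dev / norm) if norm > 0 else 0.0

def principal_direction(D, iters=100):
    """Dominant eigenvector of D via power iteration -- the axis of
    principal diffusion."""
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(D[i][j] * v[j] for j in range(3)) for i in range(3)]
        n = math.sqrt(sum(x * x for x in w))
        v = [x / n for x in w]
    return v
```

An isotropic tensor has FA = 0, while strongly fibrous tissue approaches FA = 1; this is exactly the quantity modulating the colours in Figure 2.6.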
There have been several studies that attempt to reconstruct muscle fibre architectures using DTI [71, 90, 133]. Images are typically much lower resolution compared to regular MRI, however, and can be quite noisy.

Ultrasound

Ultrasound is a real-time radiation-free imaging technique that measures the reflectance of sound waves sent through the tissue of interest. Boundaries between tissues, bones, or air pockets cause strong reflections which are clearly visible. Resolution, however, is typically quite low, images can be noisy, and the imaging depth is highly limited. US is often used for dynamic imaging of soft-tissue structures, such as motion of the tongue, or of the prostate during medical interventions.

Figure 2.7: MRI and US of the prostate. Internal details of the prostate are much more visible in MRI, but US can be acquired much faster, and can be used intra-operatively.

2.2.2 Segmentation

Extracting geometries from an image is accomplished through a process known as segmentation, whereby individual structures within an image are labelled and outlined. We may have a complete labelling of a volume, in which case we can generate a closed surface that encapsulates the region of interest. Otherwise, we may have an incomplete outlining of a target region, such as in cases where the boundary of an organ is not completely visible in the image.

Figure 2.8: Segmentation of CT and MRI: (a) segmented CT of the head and neck; (b) segmented MRI of the arm. The CT segmentation was performed using thresholding on the image, retaining values greater than 220 HU. The MRI segmentation was performed manually using Amira.
In such cases, we may only generate partial surfaces, with holes in regions of uncertainty.

For images where a structure has strong contrast with its surroundings, such as bone tissue in CT, segmentation can often be accomplished by thresholding: any pixels with an intensity within a certain range are considered to be part of the target, and anything outside that range is excluded. For example, in Figure 2.8a, the bone geometry is segmented by thresholding above 220 HU. For soft tissues, such as the muscles in the arm in Figure 2.8b, segmentation is often performed manually or semi-automatically by a trained anatomist. There are various tools useful for this purpose, such as Amira [73], Mimics [165], 3D Slicer [66], and TurtleSeg [250]. The output of thresholding or of manual/semi-automated segmentation is a label map, either in the form of a volume, or a surface outline of a region of interest.

Figure 2.9: Segmentation surface post-processing. The skull surface resulting from Marching Tetrahedra (left) has block-like artifacts, and very high resolution (700k faces). The mesh is post-processed to reduce the number of faces (to 150k) using Quadric Edge Collapse, and is smoothed using Taubin Smoothing to generate the final segmented surface (right).

Figure 2.10: Dissection and digitization of the masseter [139]. Muscle fascicles and tendon outlines on the surface of the tissue are traced with a Microscribe™ MX Digitizer (left). The fibres are then excised to expose deeper layers of tissue, and the process repeated. The result is a digital representation of the muscle fascicles and tendinous sheets of aponeuroses (right).

To turn a volumetric segmentation into a geometry useful for simulation, we need to encapsulate the labelled regions into volumes defined by a polygonal mesh surface. This is typically accomplished using one of two methods: Marching Cubes [153] or Marching Tetrahedra [57].
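The thresholding step reduces to a per-voxel test producing the label map that is later handed to Marching Cubes or Marching Tetrahedra. A minimal sketch over a nested-list volume (a stand-in for real image arrays; the function name is ours):

```python
def threshold_label_map(volume, lo=220, hi=None):
    """Binary label map from an intensity volume (nested lists of HU
    values): voxels with lo <= intensity (and <= hi, if given) are
    labelled 1, everything else 0."""
    return [[[1 if (v >= lo and (hi is None or v <= hi)) else 0
              for v in row] for row in sl] for sl in volume]

# Toy 1x2x3 'CT' volume; with the default lo=220, only the 300 HU and
# 250 HU voxels are labelled as bone.
vol = [[[-1000, 40, 300], [250, 100, 219]]]
labels = threshold_label_map(vol)
```

Adding an upper bound is how, for instance, soft tissue could be isolated from both air and bone in the same image.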
Both Marching Cubes and Marching Tetrahedra traverse the label map, filling the space with polyhedra until the label is contained. The surface of the polyhedral geometry is then exported as the final surface mesh. Depending on the resolution of the image, this process often leads to unnecessarily high-resolution meshes, and can produce odd block-like artifacts aligned with the voxels of the image. These can be reduced in post-processing by applying a mesh-reduction technique such as Quadric Edge Collapse [96], and a smoothing operator such as Taubin Smoothing [239] (Figure 2.9). Useful tools for this type of mesh post-processing include MeshLab [36] and Blender [23].

2.2.3 Dissection and Digitization

Some anatomical features are not easily visible using conventional imaging techniques. For example, the sheets of aponeuroses in the masseter are too thin to be discernible in MRI [195]. Tissue architectures can also be a challenge to extract from imaging. Though there has been work on using DTI to extract muscle fibre directions [71, 90, 133], the resolution is limited and the techniques often rely on heavy filtering and post-processing of the data. For a more detailed architectural description, we rely on the more traditional dissection process, combined with modern digitization techniques.

Agur et al. [1] first applied dissection and digitization to extract the fibrous architecture of the human soleus. They carefully dissected a cadaveric specimen to expose the surface of the muscle tissue, and landmarked the muscle fascicles using pins. The locations of the pins were computed using a set of spatially calibrated cameras, resulting in a 3D digital representation of the fibre bundles. The top layer of tissue was then serially excised to reveal deeper layers, and the process repeated until a full architectural description of the muscle was obtained.
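Stepping back to the mesh post-processing mentioned above, the Taubin smoothing step is straightforward to sketch. This is a minimal illustration with uniform Laplacian weights (the published operator admits more general weightings), applied to a toy tetrahedron; `taubin_smooth` is our own hypothetical helper:

```python
import numpy as np

def taubin_smooth(verts, faces, lam=0.5, mu=-0.53, iters=10):
    """Taubin smoothing with uniform weights: alternate a shrinking step
    (lam > 0) and an inflating step (mu < -lam) to smooth the surface
    without the net shrinkage of plain Laplacian smoothing."""
    n = len(verts)
    nbrs = [set() for _ in range(n)]          # vertex adjacency from triangles
    for a, b, c in faces:
        nbrs[a].update((b, c)); nbrs[b].update((a, c)); nbrs[c].update((a, b))
    v = np.asarray(verts, dtype=float).copy()
    for _ in range(iters):
        for factor in (lam, mu):
            lap = np.array([v[list(nbrs[i])].mean(axis=0) - v[i] for i in range(n)])
            v = v + factor * lap
    return v

# toy mesh: a regular tetrahedron centred at the origin
verts = np.array([[1., 1., 1.], [1., -1., -1.], [-1., 1., -1.], [-1., -1., 1.]])
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
smoothed = taubin_smooth(verts, faces)
print(smoothed.shape)   # (4, 3)
```

The default parameters (λ = 0.5, µ = −0.53) are the commonly cited choices from Taubin's work; a production mesh tool such as MeshLab exposes these as user settings.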
This painstaking dissection process was later refined to use a 3D coordinate digitizer rather than pins and cameras to extract the architectures. With the Microscribe™ Digitizer, the fascicles and outlines of tendons and attachment areas can simply be traced and the coordinates recorded at short intervals (e.g. every 3–5 mm), resulting in a very high resolution digital reconstruction of the tissue (Figure 2.10). This process has been applied to study the structure of the human masseter [132], supraspinatus [122], lumbar multifidus [206], all the muscles of the forearm [129, 202], and the complete orofacial and hyoid musculature [139].

2.3 Registration

Registration is the process of merging data from two or more sources into a common reference space. This mapping, or fusion, of the data brings together complementary information, allowing for better-informed medical analysis or model construction. The goal of registration is to find an 'optimal' transform that best maps the data to the target space. Optimality typically involves maximizing a similarity measure between the source data and the target, while minimizing or limiting distortion in some way based on desired qualities or prior knowledge of the system.

There are three general classes of registration in the literature:

Image registration: the inputs to the registration process are images, with intensities defined on a regular grid.

Feature registration: the inputs are a set of features, such as points, lines, or polygonal faces.

Model-to-image registration: the inputs are a model, which may contain a set of features, and a target image.

There is an entire field devoted to medical image registration, where the data consist of two or more medical images. These may be multiple images of a single subject with different modalities (e.g. MRI and CT), different viewpoints of the anatomy of interest, or acquired at different times.
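To make the notion of an 'optimal' transform concrete, consider the simplest feature-registration setting: two 3D point sets with known correspondences, a rigid transform, and a sum-of-squared-distances similarity with no regularization. This special case has a closed-form solution, the Kabsch algorithm. The sketch below is illustrative only, not the dynamics-driven registration method developed later in this thesis:

```python
import numpy as np

def rigid_register(source, target):
    """Closed-form least-squares rigid transform (R, t) mapping source
    points onto target points with known correspondences (Kabsch)."""
    sc, tc = source.mean(axis=0), target.mean(axis=0)
    H = (source - sc).T @ (target - tc)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tc - R @ sc
    return R, t

# recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
pts = rng.random((20, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_register(pts, moved)
print(np.allclose(R, R_true))   # True
```

Deformable registration replaces the six rigid parameters with a much larger transform space, which is why regularization becomes essential there.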
The images may also come from multiple subjects, for example when computing population statistics or labelling a new image based on an atlas. Image-based methods rely on image intensity information, or on intensity statistics, to measure the similarity between source and target images. Feature-based methods, on the other hand, rely on extrinsic features extracted from the image. One important such feature is boundary information extracted by segmentation. This leads to surface-based registration techniques, in which either multiple segmented surfaces are registered together (feature registration), or a surface from one source is registered to the image information in a target (model-to-image registration).

The goal of registration is to move or deform the source information – also called the moving image – in a way that increases its similarity with the target data – also called the fixed image – with certain restrictions applied. A common approach is to define a transformation function with parameters that describe the space of possible mappings (e.g. rigid, affine, deformable), a similarity metric that computes the 'goodness-of-fit' between the transformed source and the target, and a regularization that limits the transformation parameters to introduce desired properties, such as smoothness. The registration process is then converted to an optimization task: find the transform parameters that maximize similarity while simultaneously minimizing a regularization residual energy.

We will discuss registration in more detail in Chapter 4. For a review of medical image registration, see [185]. For surface-based methods in medical imaging, see [10].

2.4 Simulation Platforms

After registering data to a common reference space, segmenting appropriate structures, and defining and discretizing the various components such as bones and muscles, we can put together a digital model that represents a snapshot of our anatomy of interest.
To study the behaviour of this model in response to physical stimuli – including external forces like gravity, and internal forces such as muscle contractions – we need simulation. In biomechanical simulation, the model is augmented with a mathematical description of how its objects and materials behave physically, allowing it to evolve over time and make predictions. The mathematics behind this is the topic of Appendix B. There are, however, various existing platforms and tools that will automatically handle the simulation details given a sufficient model description.

For multibody musculoskeletal models (i.e. those consisting only of bones and line-based muscles), SIMM and its open-source counterpart OpenSim [55] have become extremely popular within the biomedical research community. Many user-contributed models are also freely provided for research use (https://simtk.org/). AnyBody [8, 46] is a commercial tool that provides multibody simulation of the human body. Though specifically focused on ergonomic studies, it also has functionality for the design of implants and surgical tools. For general multibody simulations, several other commercial tools exist, such as Adams [173] and SolidWorks Simulation [48].

On the other end of the simulation spectrum are finite-element platforms. FEBio is a research-focused toolkit with specific support for soft tissue modelling, and allows for some handling of rigid structures with contact [158]. Similarly, CMISS [20] and its open-source variant OpenCMISS [26] are developed for the study of soft tissue mechanics, and have an open model repository for research use (https://models.physiomeproject.org). Commercial FEM packages such as ANSYS [6], ABAQUS [47], and COMSOL Multiphysics [41] are more geared towards problems in structural mechanics, but are still often used in biomedical research.

Few platforms allow both multibody and finite-element simulations within a single model.
SOFA [4] is an open-source framework developed for physical simulation. It does allow multiple interacting sub-models, such as multibody and FEM, to exist within a single simulation. However, it does not have the capability to represent a fully-coupled hybrid system. That is the major benefit of ArtiSynth [68, 148], the platform we use for the majority of our work. ArtiSynth is specifically designed to integrate both hard and soft materials together. This mixing of FEM and multibody dynamics allows the creation of computationally efficient models, where higher-resolution continuum models need only be used where finer detail or accuracy is desired, allowing us to tailor models to specific applications.

2.5 Chapter Summary

In this chapter, we reviewed many of the basics of model building: the components and their representations, the data required to construct these components, some methods of processing that data, and the available tools, both research-focused and commercial, useful for biomechanical simulation. In the next chapters, we develop our methods for addressing some of the challenges in model building, and present applications that demonstrate their utility.

CHAPTER 3
Discretization: Volumetric Meshing and Alternatives

The first major challenge in creating deformable anatomical models, such as a muscle or organ, is in designing an appropriate spatial discretization. The discretization itself has a strong impact on both function and accuracy: its degrees of freedom directly limit the types of reproducible motion, the size of the discrete elements affects numerical integration accuracy, oddly shaped elements can lead to numeric instabilities, and patterns of structural connectivity can introduce numerical artifacts.
Unfortunately, there is not always a straightforward approach for addressing these concerns.

In the traditional finite-element method, the volume is partitioned into small non-overlapping subdomains, called elements, which are based on template shapes such as simple polyhedra. The parameters of the elements are called nodes, usually corresponding to element corners, and the collection of nodes and elements is referred to as a volumetric mesh. The task of volumetric meshing is to split a prescribed target volume into these small template-based elements while respecting regularity, element size, and element quality requirements. Certain types of volumetric elements can lead to numerical artifacts, particularly when modelling highly deformable incompressible materials such as soft tissues. Methods have been developed to address these artifacts, restoring less error-prone behaviour, but they often introduce additional costs.

To make the meshing process easier, an alternative is to use a non-conforming mesh, where the volume of interest is embedded in a larger, simpler bounding volume. The bounding volume may be discretized into elements as before, or represented as a set of distributed points in an unstructured fashion. This introduces other potential issues related to simulation: the discretized volume now contains empty space, which should not severely affect the behaviour of the true material. This leads to sharp discontinuities in properties within elements, which must be accounted for by the numerical integration.

In this chapter, we discuss the challenges and solutions related to spatial discretizations. We introduce a novel technique, PolyMerge, for generating hexahedral-dominant conforming volumetric meshes. We demonstrate the algorithm on a set of complex geometric shapes often used in the volumetric meshing literature.
We also examine the use of alternative discretization strategies, comparing techniques for accuracy, and discussing the advantages and drawbacks of each approach. We apply the alternatives in the context of incompressible tissues, with a focus on muscle and biomechanical applications, to gain some insight into when each may be appropriate for a given application.

3.1 Volumetric Meshing

The process of meshing requires partitioning the domain into discrete elements of known shape. There are commonly four basic element types: the tetrahedron, pyramid, wedge (or prism), and hexahedron (Table 3.1). These are the building blocks for constructing full volumetric models, such as the ones shown in Figure 3.1. Shape functions are defined within each element for interpolating values throughout the volume based on values defined at the nodes. For example, positions x of the material throughout the element can be written as

x(X) = \sum_{i=1}^{N} \phi_i(X)\, n_i, \quad \forall X \in \Omega_0,   (3.1)

where X is a position within the element's 'rest' state (i.e. Lagrangian coordinates), φi is the shape function associated with node i, ni is the position of the ith node, and Ω0 is the volume occupied by the element at rest. As the nodes move around, so does the material inside the element, according to Equation (3.1). The shape functions are usually written in terms of three natural coordinates, {ξ, η, µ}, which relate the volume to a natural 'reference' configuration.

With these basic element types and shape function definitions, we can derive equations of motion in relation to the natural coordinates within an element and the element's initial rest shape. For instance, the deformation gradient – a measure of local deformation – is given by

F = \frac{\partial x}{\partial X}
  = \frac{\partial x}{\partial(\xi, \eta, \mu)} \frac{\partial(\xi, \eta, \mu)}{\partial X}
  = \left( \sum_{i=1}^{N} n_i \left[ \frac{\partial \phi_i}{\partial(\xi, \eta, \mu)} \right]^T \right)
    \left( \sum_{i=1}^{N} n_{i0} \left[ \frac{\partial \phi_i}{\partial(\xi, \eta, \mu)} \right]^T \right)^{-1}
  = \tilde{F} \tilde{F}_0^{-1}   (3.2)

where n_{i0} is the original (rest) position of the ith node.
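Equations (3.1) and (3.2) can be evaluated directly for a trilinear hexahedron. The following is a minimal sketch using the node ordering of Table 3.1; the helper names are our own:

```python
import numpy as np

# Natural-coordinate signs of the 8 hex nodes, in the ordering of Table 3.1
SIGNS = np.array([[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
                  [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]], dtype=float)

def shape_grads(xi, eta, mu):
    """Gradients of the trilinear hex shape functions w.r.t. (xi, eta, mu)."""
    g = np.empty((8, 3))
    for i, (sx, sy, sz) in enumerate(SIGNS):
        g[i] = [sx * (1 + sy * eta) * (1 + sz * mu),
                sy * (1 + sx * xi) * (1 + sz * mu),
                sz * (1 + sx * xi) * (1 + sy * eta)]
    return g / 8.0

def deformation_gradient(nodes, rest_nodes, xi=0.0, eta=0.0, mu=0.0):
    """F = Ftilde @ inv(Ftilde0), following Equation (3.2)."""
    G = shape_grads(xi, eta, mu)              # 8 x 3 matrix of gradients
    return (nodes.T @ G) @ np.linalg.inv(rest_nodes.T @ G)

rest = (SIGNS + 1.0) / 2.0                    # unit cube with corners in {0,1}^3
stretched = rest * np.array([2.0, 1.0, 1.0])  # stretch 2x along x
F = deformation_gradient(stretched, rest)
print(np.allclose(F, np.diag([2.0, 1.0, 1.0])))   # True
```

A uniform 2× stretch along x recovers F = diag(2, 1, 1) at the element centre, as expected for a homogeneous deformation.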
Since we have an explicit form for the shape functions with respect to the natural coordinates, the gradients F̃ and F̃0 can be readily evaluated. The rest Jacobian F̃0 gives us a sense of how deformed an initial element is, and will play a role when assessing the quality of the mesh in terms of how much deformation it can support.

Table 3.1: Finite element types

Tetrahedron ({ξ, η, µ} ∈ [0, 1], 0 ≤ 1 − ξ − η − µ ≤ 1):
φ0 = 1 − ξ − η − µ,  φ1 = ξ,  φ2 = η,  φ3 = µ

Pyramid ({ξ, η, µ} ∈ [−1, 1]):
φ0 = (1/8)(1 − ξ)(1 − η)(1 − µ),  φ1 = (1/8)(1 + ξ)(1 − η)(1 − µ),
φ2 = (1/8)(1 + ξ)(1 + η)(1 − µ),  φ3 = (1/8)(1 − ξ)(1 + η)(1 − µ),
φ4 = (1/2)(1 + µ)

Wedge ({ξ, η} ∈ [0, 1], µ ∈ [−1, 1]):
φ0 = (1/2)(1 − ξ − η)(1 − µ),  φ3 = (1/2)(1 − ξ − η)(1 + µ),
φ1 = (1/2)ξ(1 − µ),  φ4 = (1/2)ξ(1 + µ),
φ2 = (1/2)η(1 − µ),  φ5 = (1/2)η(1 + µ)

Hexahedron ({ξ, η, µ} ∈ [−1, 1]):
φ0 = (1/8)(1 − ξ)(1 − η)(1 − µ),  φ4 = (1/8)(1 − ξ)(1 − η)(1 + µ),
φ1 = (1/8)(1 + ξ)(1 − η)(1 − µ),  φ5 = (1/8)(1 + ξ)(1 − η)(1 + µ),
φ2 = (1/8)(1 + ξ)(1 + η)(1 − µ),  φ6 = (1/8)(1 + ξ)(1 + η)(1 + µ),
φ3 = (1/8)(1 − ξ)(1 + η)(1 − µ),  φ7 = (1/8)(1 − ξ)(1 + η)(1 + µ)

Quadratic Tetrahedron (ζ = 1 − ξ − η − µ, {ξ, η, µ} ∈ [0, 1], 0 ≤ ζ ≤ 1):
φ0 = ζ(2ζ − 1),  φ5 = 4ξη,
φ1 = ξ(2ξ − 1),  φ6 = 4ηζ,
φ2 = η(2η − 1),  φ7 = 4ζµ,
φ3 = µ(2µ − 1),  φ8 = 4ξµ,
φ4 = 4ζξ,  φ9 = 4ηµ

(a) Tongue (b) Vertebra (c) Brain

Figure 3.1: Example finite element meshes.
The tongue model is constructed using only hex elements, the vertebra using only tet elements, and the brain using a mix of element types.

3.1.1 Considerations

There are three main considerations affecting simulation accuracy when constructing a volumetric mesh for finite element analysis: resolution, element quality, and susceptibility to artifacts.

Resolution

The higher the resolution of the volumetric mesh, the finer the details and variations we are able to resolve, which leads to an expected reduction in interpolation error. The trade-off is that higher resolutions lead to higher computational demand. Since finite elements have limited connectivity, the matrices involved in the numerical solution of a dynamic finite element system are generally quite sparse. With typical usage, the numeric complexity for solving the system grows approximately as O(n^1.7) with the number of nodes [226], so by doubling the resolution in each dimension – increasing the node count 8× – we can expect the simulation to take approximately 34× longer. It is therefore important to keep model resolution low while still adequately representing the volume.

Once a 'suitable' volumetric mesh at a given resolution is generated, it is common practice to refine the mesh and repeat the simulation until numerical convergence is achieved. In a straightforward element-by-element refinement, hex elements are split into 8 hexes, tets into 8 tets, wedges into 8 wedges, and pyramids into 6 pyramids and 4 tets. The new elements usually maintain similar quality, connectivity patterns, and susceptibilities to artifacts.
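The cost estimate above can be checked directly from the O(n^1.7) empirical scaling:

```python
# Solve cost grows roughly as O(n^1.7) in node count n [226]; doubling the
# resolution in each dimension multiplies n by 8, so the expected slowdown is
slowdown = 8 ** 1.7
print(round(slowdown))   # 34
```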
Thus, it is often useful to generate as good a volumetric mesh as possible at the lowest resolution that can accurately represent the domain, and then refine as necessary.

Quality

Element quality refers to how far the initial (rest) shape differs from the "ideal" or "natural" shape. As nodes move, the deformation gradient inside the elements changes according to Equation (3.2). If ever this deformation gradient has a zero or negative determinant, there is a non-physical condition: we have compressed part of the material into a zero or negative volume. When this occurs, we say the element has inverted. Inverted elements can appear flipped inside-out, highly twisted with intersecting edges, or as having a concavity at a corner. Element inversions often also lead to numerical instabilities, since a zero Jacobian determinant implies an infinite internal stress in most material representations. Thus, we would like to design a mesh that provides the greatest potential for node movement before any element is at risk of becoming inverted. This means we prefer polyhedral elements to be as regularly shaped as possible.

One measure of an element's quality is simply the determinant of the rest Jacobian F̃0 within the element. On highly twisted or elongated elements, the determinant will be near zero, indicating a susceptibility to inversion in response to small perturbations. However, there are three main issues with this measure:

1. It is scale-dependent: doubling the volume of an element doubles the Jacobian.
2. It is non-symmetric: in the rest shape of an ideal tet, the first corner has a solid angle of 2.57 srad, while the other three have an angle of 0.34 srad.
3. It misrepresents pyramid quality: the isoparametric shape functions for the pyramid collapse at the top node, leading to a singularity regardless of the element's shape.

To combat these issues, Knupp [125] defines a new metric called the scaled Jacobian.
It provides an estimate of the element's quality at a corner:

J_{Se}(n_0) = \frac{(n_1 - n_0) \cdot (n_2 - n_0) \times (n_3 - n_0)}
                   {\|n_1 - n_0\| \|n_2 - n_0\| \|n_3 - n_0\|},   (3.3)

where n0 is the node at the corner of interest, and {n1, n2, n3} are three connected nodes, ordered as in a tetrahedral element. As defined, the metric applies well to tets, wedges, and hexes, for which each corner is connected to only three others. For pyramids, the top node is connected to all four nodes on the base, giving four possible scaled Jacobian values to consider. The element's overall quality is then defined as the minimum of all scaled Jacobians over the element. This metric, the minimum scaled Jacobian, has become the standard for computing finite element quality in the hex and hex-dominant meshing literature. The value ranges from −1 to 0.707 for tets and pyramids, −1 to 0.866 for wedges, and −1 to 1 for hex elements. The metric is scale-invariant, symmetric (treating all nodes in the same way, apart from the pyramid), and is maximized when a polyhedron is most regular (i.e. all edge lengths are equal). When constructing a mesh, we therefore wish to maximize the minimum scaled Jacobian of each of the elements.

(a) Hex Tessellation (b) Tet Non-Tessellation (c) Tet Partition

Figure 3.2: Spatial partitioning of hexes and tets. Regular hexahedra tessellate space with each node adjacent to 8 elements. Tetrahedra do not tessellate space: regular tets leave a 0.13 radian gap around an edge, and a total of 1.54 steradians around a node. If we adjust the tets so they are slightly non-regular, we can generate a spatial partition with each node adjacent to 20 elements.

Artifacts

The element composition of a mesh can also introduce numerical artifacts when it comes to simulating a material with constraints. In particular, for incompressible or nearly incompressible materials, tetrahedral-dominant meshes are susceptible to volumetric locking – making the model behave artificially stiff.
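Looking back at the quality metric of Equation (3.3), it can be implemented in a few lines. This is an illustrative helper of our own, evaluated at the corner of a cube and at the corner of a regular tetrahedron to recover the upper bounds quoted above:

```python
import numpy as np

def scaled_jacobian(n0, n1, n2, n3):
    """Scaled Jacobian at corner n0, per Equation (3.3): the triple product
    of the three emanating edges over the product of their lengths."""
    e1, e2, e3 = n1 - n0, n2 - n0, n3 - n0
    num = np.dot(e1, np.cross(e2, e3))
    return num / (np.linalg.norm(e1) * np.linalg.norm(e2) * np.linalg.norm(e3))

# corner of a cube (orthogonal unit edges) vs corner of a regular tetrahedron
c = scaled_jacobian(np.zeros(3), np.array([1.0, 0, 0]),
                    np.array([0, 1.0, 0]), np.array([0, 0, 1.0]))
t = scaled_jacobian(np.zeros(3), np.array([1.0, 0, 0]),
                    np.array([0.5, np.sqrt(3) / 2, 0]),
                    np.array([0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)]))
print(round(c, 3), round(t, 3))   # 1.0 0.707
```

The cube corner attains the hex maximum of 1, while the regular tet corner attains the tet maximum of 1/√2 ≈ 0.707.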
Incompressibility imposes restrictions on the way elements are allowed to deform within the model: the net volume ratio must remain constant,

\frac{\text{vol}(\Omega_e)}{\text{vol}(\Omega_{e0})}
  = \frac{1}{\text{vol}(\Omega_{e0})} \int_{\Omega_{e0}} |F| \, d\Omega_{e0} = 1,   (3.4)

where Ωe is the deformed element space, and Ωe0 is the space occupied by the element at rest. In tetrahedral elements, the deformation gradient is constant across the element, so we must have |F| = 1. Enforcing incompressibility therefore adds a single constraint for every tet. A similar argument can be made for other element types if using reduced integration techniques such as the mean-dilation method (see Appendix B.3.3.5) [102]. Thus, for each element, a near-incompressibility constraint removes approximately one degree of freedom (DOF).

In an infinite hexahedral grid, the ratio of elements to nodes approaches 1 : 1. Each node has three spatial DOFs, and each element contributes one constraint, so 2/3 of the original DOFs remain free. However, in a near-regular infinite tetrahedral partitioning of space (Figure 3.2), each node is adjacent to 20 elements. Each tet consists of four nodes, so the ratio of elements to nodes in a large tetrahedral model is approximately 5 : 1. This means that for N nodes we have approximately 5N constraints but only 3N degrees of freedom. An incompressible, mostly-tetrahedral model is therefore overly constrained, causing the volume to 'lock'.

The avoidance of volumetric locking is one of the main reasons that hex-dominant meshes are highly preferred in biomechanical applications. Biological tissue is mainly composed of water, making it nearly incompressible.
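The counting argument above can be made concrete for a finite grid (the element counts here follow the asymptotic ratios quoted above, and are illustrative only):

```python
# Constraint counting for an n x n x n grid of near-incompressible elements:
n = 50
nodes = (n + 1) ** 3
dofs = 3 * nodes                  # three spatial DOFs per node

hex_elems = n ** 3                # hex grid: ~1 element (constraint) per node
tet_elems = 5 * nodes             # near-regular tet partition: ~5 tets per node

print(round(hex_elems / dofs, 2))   # ~0.31: about 1/3 of DOFs consumed
print(round(tet_elems / dofs, 2))   # 1.67: 5N constraints vs 3N DOFs -> locking
```

With more constraints than degrees of freedom, the tetrahedral system cannot satisfy all incompressibility conditions without suppressing legitimate motion, which is exactly the locking behaviour.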
While reducing interpolation errors or improving element quality can often be addressed by increasing mesh resolution – though with considerable performance penalties – this will not help prevent locking.

The goal of volumetric meshing for physical applications is therefore to partition the volume of interest into these primitive finite element polyhedra while balancing the previous considerations: the resolution must be high enough to limit interpolation errors but low enough to be numerically tractable; the element shapes need to be as regular as possible, as measured by quality metrics such as the minimum scaled Jacobian, to support large deformations without risk of element inversions; and the element composition should consist mostly of hexahedra in order to avoid volumetric locking in incompressible or nearly incompressible materials.

3.1.2 Existing Meshing Techniques

The most common volumetric mesh type for arbitrarily shaped volumes consists solely of tetrahedra. Tetrahedral meshing is the three-dimensional analogue of triangulation. Most modern automated techniques are based on Delaunay triangulation, with methods for introducing interior nodes in an unstructured fashion to help satisfy element quality requirements. TetGen [220] remains one of the most popular, though other excellent solutions exist [e.g. 212, 270]. These tetrahedral meshing methods offer many advantages: automatic construction for arbitrary domains, bounds on element quality, and the ability to support local mesh refinement. However, due to the volumetric locking artifacts for nearly incompressible materials, tetrahedral meshes are poorly suited to biomechanical modelling of soft tissue.

Hexahedral meshes are often preferred in numerical simulations because they lead to faster convergence rates, result in better sparsity structures when constructing stiffness matrices, and avoid volumetric locking artifacts [24].
Unfortunately, automatic construction of high-quality hex meshes is still an open problem. The challenge is that it requires estimating a smoothly-varying regular structure inside the target shape. Ideally, this structure would follow surface curvature and topological branches, and account for any sharp features. The more deformed the estimated grid-like structure, the lower the quality of the resulting elements. This has a direct impact on model stability.

In many practical situations, we can aim for hex-dominance rather than hex-exclusivity. As long as the volume is composed mostly of hex elements, their better convergence and sparsity properties can be harnessed, and any locking artifacts mitigated. Sharp features or transition regions can then be filled with other element types, ensuring a high overall mesh quality.

There have been many techniques proposed for constructing all-hex or hex-dominant meshes. Blacker [19] provides a nice overview and lists the advantages and disadvantages of some of the early attempts. One class of popular algorithms relies on octree-based subdivisions to fill the interior of the domain with hex elements, then fills any gaps between the interior and the surface with other element types [108, 162, 216]. Lobos and González [150] introduce a mixed-element approach based on octree subdivisions that also allows for interior transitions between coarse and fine regions. This adds flexibility, allowing for varying levels of refinement. The drawback of these octree approaches is that hex elements generally follow a single orientation, and element quality is poorest along the surface, which is typically where interactions occur in simulation.

Other hex-dominant meshing techniques attempt to parameterize curvature within the domain by using cross-frame fields [99, 181] or singularity-restricted fields [138]. Hex elements are then constructed between isosurfaces of the parameterization.
These methods have the advantage that elements follow the curvature of the surface. However, it is challenging to control hex quality, since the isosurfaces are entirely governed by the distortion of the field.

Another interesting approach for estimating regular structure within a domain is to find a low-distortion mapping between a volumetric shape and its nearest polycube form [83, 100]. A polycube, consisting of a union of axis-aligned rectangular prisms, naturally allows for hexahedral mesh construction. Polycube maps can be generated with the help of user guidance [238, 259, 266], from a set of basic templates [143], or by iteratively deforming a shape to align its normals with the local axes [83, 100]. With the deformation approach, however, producing a valid polycube that is free of artifacts and singularities often requires a significant amount of post-processing [146]. Once a hex mesh is created in the space of the polycube, it can be transformed back to the original shape using the distortion map. The drawback of this procedure is that the quality of the final elements depends directly on the amount of distortion in the polycube mapping. While the goal is to find a map that minimizes distortion, there is no guaranteed minimum bound, and therefore no way to directly control the final element quality.

Instead of generating a hex or hex-dominant mesh directly, another approach is to first generate an intermediate high-quality tetrahedral mesh, then search for groups of tets that can be merged into well-conditioned hexahedral elements [14, 169, 242, 271]. This process is sometimes called tetrahedral recombination. Automatic tetrahedral mesh generation with control on quality is a more tractable problem, allowing these methods to build on much of that existing work.
Given a tetrahedral mesh, finding patterns of tets that form valid hexes becomes a graph-search exercise. Once identified, these groups of tets are used to transform the original volumetric mesh into a hex-dominant one.

For finding groups of tets that can be merged to form valid hex elements, Meshkat and Talmor [169] represent the mesh with an element-connectivity graph, and search for sub-graphs that represent valid hex subdivisions. Finding these patterns in the mesh involves searching the combinatorial space starting from each element. To improve efficiency, they propose a greedy strategy for interior tets: an unmerged tetrahedron is selected, its possible merges identified, and only the best one selected. This reduces the search space for future elements. A similar graph-search algorithm was presented by Yamakawa and Shimada [271]. Instead of element connectivity, they look for patterns of node connectivity along edges. This again leads to a combinatorial search space. To improve efficiency, they ignore certain branches: for a given tetrahedron, they only allow the node with the largest scaled Jacobian (most corner-like) to form the corner of a hex. This effectively eliminates three-quarters of the possible search trees. Once a pattern of eight nodes is detected that forms a valid hex, it is added to a priority queue sorted by hex element quality. Groups of tets are then merged in priority order as long as quality and conformity constraints are observed.

Later works applying the tet-merging strategy differ mainly in the way nodes are distributed throughout the volume in order to encourage the formation of more hexahedral elements. Lévy and Liu [134] use an Lp-based Voronoi tessellation algorithm to distribute nodes in a grid-like pattern. They then apply [169] to generate the final hex-dominant mesh. Baudouin et al.
[14] use cross-frame fields and a frontal approach for initial tetrahedral mesh generation, then apply [271] to create the final hex-dominant mesh.

The tetrahedra-merging technique allows for tighter control on quality, since merges are only performed if certain criteria are met. However, the number of hex elements in the final mesh depends highly on the connectivity and structure of the input tetrahedral mesh, so the challenge is to distribute nodes in such a way as to encourage well-conditioned recombinations. It can also be computationally demanding to search the mesh connectivity graph to find valid tet-to-hex patterns. In our work, we address both of these challenges.

3.1.3 PolyMerge

(Panels: Initial, Polycube, Nodes, Tet-Mesh, Hex-Dominant)

Figure 3.3: PolyMerge: overview. An initial surface mesh is deformed to fit its nearest polycube using a finite-element framework. The polycube is used to distribute surface and interior nodes, which are tetrahedralized to form an intermediate tet-mesh. This is converted to a hex-dominant mesh by merging groups of tetrahedra. ("Stanford Bunny", Stanford Computer Graphics Laboratory)

PolyMerge is a novel method we developed for constructing hex-dominant meshes of arbitrary volumes. We begin by computing a map that estimates a regular internal structure, using an approximate polycube deformation technique. This allows us to distribute nodes in a regular pattern throughout the volume. We rely on existing algorithms to generate a high-quality tetrahedral mesh from these nodes, then detect and merge patterns of tets that form valid and well-conditioned hexahedral elements. Our tet-to-hex detection algorithm is both extremely simple and efficient, performing the pattern search in linear time and the merging in linearithmic time. Unlike previous approaches, we do not ignore any branches in the search tree, so we are guaranteed to find all patterns of 8 nodes that can create a valid hex.
We demonstrate our algorithm on a variety of inputs, generating meshes consisting of more than 80% hexes by volume in a matter of minutes.

An overview of our approach is as follows:
1. The volume is deformed to a polycube-like form using a finite element framework.
2. New nodes are distributed inside and on the surface of the mesh in a grid-like pattern.
3. The volume and nodes are mapped back to the original space.
4. An intermediate tetrahedral mesh is generated, with control on quality.
5. Groups of tets are identified and merged to form hex elements using a greedy approach.

This process is depicted in Figure 3.3.

There are two main contributions in this volumetric meshing work. First, we introduce a finite-element-based technique for finding a minimum-distortion map between a volume and an approximate polycube form. The finite element method provides a convenient metric for measuring distortion in the form of volumetric strain energy, which relates directly to the typical application of these models. To determine this map, implicit forces are applied to the surface of the model, driving the deformation towards a more cube-like form. Since we are aiming for hex-dominance, the final shape need not be a true polycube. This eliminates the need for the significant post-processing required by other techniques. This method is described in Section 3.1.3.1. The polycube is

(a) Vertex Penalty (b) Rotation Penalty

Figure 3.4: Polycube penalty terms. The penalty corresponding to vertex vi on face f is the distance between its location and where it would be if the face were rotated about its centre (red arrow). The target rotation is computed by aligning the face's normal nf to a weighted combination of the nearest axis âf and the surrounding face normals {ng}.
Weights are based on the angle between faces and face areas.

used to tetrahedralize the volume in a way that is more conducive to tetrahedral merging.

The second contribution is an efficient algorithm for identifying groups of tets that can be merged to form hex elements during the recombination stage. The method is extremely simple and, unlike previous techniques, does not involve searching a combinatorial space. Instead, we identify a fixed set of patterns that make up all possible hex-to-tet subdivisions. The graph search reduces to a linear search through the mesh, seeking these patterns. This algorithm is described in Section 3.1.3.2. Tet-to-hex merges are then performed in a greedy manner based on a set of constraints. This gives us fine control over mesh quality.

3.1.3.1 Approximate Polycube Mapping

A polycube is defined to have all surface normals pointing in one of the six coordinate directions: $\pm\hat{x}$, $\pm\hat{y}$, $\pm\hat{z}$. As Gregson et al. [83] suggest, one way to accomplish this is to simply rotate each face such that its normal points in the direction of its nearest axis. However, without any type of regularization, this would destroy the structure of the mesh. Ignoring this for the moment, we can define a quadratic error function as follows:

$$W_l = \frac{1}{2}\sum_f A_f \sum_{v_i \in f} \left\| (v'_i - c'_f) - R_f (v_i - c_f) \right\|^2, \quad \text{where} \quad c_f = \frac{1}{N_f}\sum_{v_j \in f} v_j. \qquad (3.5)$$

The first sum is over all faces $f$, and the second is over all vertices $\{v_i\}$ belonging to that face. A prime ($'$) indicates the new desired location for the point. The face's centroid, $c_f$, is the average of all its vertices.

Figure 3.5: Varying the complexity parameter (panels: Surface, β = 0.2, β = 0.8). A larger value of β maintains the planar structure of the dragon's mouth. ("Dragon", Stanford Computer Graphics Laboratory)

The rotation matrix, $R_f$, represents the required rotation to transform the face's normal to some desired orientation, such as the nearest coordinate axis.
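For concreteness, the smallest rotation taking a face normal to a target direction (such as its nearest coordinate axis) can be computed in closed form with Rodrigues' formula. This is an illustrative sketch only, not the thesis implementation; the helper names `smallest_rotation` and `nearest_axis` are our own:

```python
import numpy as np

def smallest_rotation(a, b):
    """Rotation matrix with the smallest angle taking unit vector a to unit vector b."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v = np.cross(a, b)            # rotation axis (unnormalized), |v| = sin(theta)
    c = np.dot(a, b)              # cos(theta)
    if np.isclose(c, -1.0):       # antiparallel: any perpendicular axis works
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)   # 180-degree rotation
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    # Rodrigues: R = I + K + K^2 / (1 + cos(theta))
    return np.eye(3) + K + (K @ K) / (1.0 + c)

def nearest_axis(n):
    """Nearest signed coordinate axis to normal n."""
    i = np.argmax(np.abs(n))
    a = np.zeros(3)
    a[i] = np.sign(n[i])
    return a
```

In the polycube setting, the target direction would be the blended vector $m_f$ rather than the raw nearest axis, but the rotation construction is the same.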
For every face not aligned with a target orientation, this contributes $N_f$ error terms, one for each vertex belonging to that face. The error for a single vertex is depicted in Figure 3.4a. This error represents the $\ell_2$-distance of that vertex to where it would be if the face were rotated. To account for differences in face size, each term is weighted by the fractional area of the corresponding face, $A_f = \mathrm{area}(f)/\mathrm{area}(\text{surface})$.

Assuming triangular faces, if we differentiate Equation (3.5) with respect to the target vertex location $v'_i$, we arrive at

$$\frac{\partial W_l}{\partial v'_i} = \sum_{f \in F(i)} \frac{A_f}{3}\left[ 2v'_i - v'_{j_0} - v'_{j_1} - R_f\left(2v_i - v_{j_0} - v_{j_1}\right) \right], \qquad (3.6)$$

where the summation is over all faces to which $v_i$ belongs. The vertices $\{v_{j_0}, v_{j_1}\}$ are the two other vertices belonging to the face. This defines a Laplacian-type system,

$$L v' = b, \qquad (3.7)$$

where $L$ is a sparse symmetric matrix that depends on face areas, $v'$ is a concatenated vector of desired vertex positions, and $b$ depends on the current vertex locations as well as the rotation matrices $\{R_f\}$.

Controlling Polycube Complexity

Rotating each face toward its nearest coordinate axis can lead to a large number of stair-like patterns in the resulting polycube; there is nothing to keep coherence between adjacent faces. We would like to include an interdependency so that surface patches which are flat in the original mesh remain flat. We introduce a control parameter, $\beta \in [0, 1)$, that trades off between rotating towards the nearest axis and maintaining consistency with surrounding faces (Figure 3.4b). If there is a sharp angle between a given face and its neighbour, however, that edge should be free to bend. We therefore define the desired rotation, $R_f$ in Equation (3.5), as the one which minimizes

$$W_f = (1-\beta)\,\|\hat{a}_f - R_f n_f\|^2 + \beta \sum_g w_{fg}\,\|n_g - R_f n_f\|^2, \qquad (3.8)$$
$$w_{fg} = \frac{A_g \max(n_f \cdot n_g,\, 0)}{\sum_k A_k \max(n_f \cdot n_k,\, 0)}.$$

The first term considers the distance between the face's normal, $n_f$, and the nearest coordinate axis, $\hat{a}_f$.
The sum index, $g$, runs over all faces in a patch around and including face $f$. Currently, we define a patch to consist of $f$ and its three immediate neighbours. Errors are weighted by the dot product of the face normals, as well as by face area, $A_g$. The $\max(\cdot)$ function ensures that a neighbouring face at an angle of more than 90° is not considered.

The minimization of Equation (3.8) is the well-known Wahba's problem [257], the solution of which satisfies

$$R_f = \arg\max_R \; \mathrm{tr}\!\left(R\, n_f m_f^T\right), \quad \text{s.t.} \quad R^T R = I, \; |R| = 1, \qquad (3.9)$$
$$m_f = (1-\beta)\,\hat{a}_f + \beta \sum_g w_{fg}\, n_g.$$

This is typically solved using a singular value decomposition. However, we note that $n_f m_f^T$ has rank one, so the solution is not unique: there is a two-parameter family of solutions [163]. We are interested in the one corresponding to the smallest rotation angle. To isolate this, note that we would also arrive at Equation (3.9) by minimizing $\|n_f - R\, m_f\|^2$. Therefore, the minimizing rotation is the one which rotates the normal $n_f$ directly towards $m_f$.

The free parameter, $\beta$, can be viewed as a control on the polycube complexity. If $\beta = 0$, the minimizing rotation is the one that directly rotates the face to the nearest axis. Otherwise, we try to keep the normal consistent with surrounding faces. An example of results for different values of $\beta$ is shown in Figure 3.5. For each face, we compute the desired target direction, $m_f$, which uniquely determines the face rotation $R_f$.

Minimizing Distortion

To minimize distortion in the polycube, we include an energy term that resists deformations. For this, we use a finite element framework, where deformation of the model is limited by minimizing the strain energy. This requires generating an initial tetrahedral mesh to represent the volume.
To generate the initial mesh, we use the freely available TetGen library [219].

For linear materials, where the internal stress of a model is linearly proportional to displacements, the net strain energy is given by

$$W_k(x) = \frac{1}{2}(x - x_0)^T K (x - x_0). \qquad (3.10)$$

The vector $x$ contains the concatenated list of finite element node positions, and has an initial value $x_0$. The stiffness matrix $K$ is sparse, symmetric, constant, and can be systematically constructed based on mesh connectivity and the material's stiffness parameters (Appendix C.4.2). Linear materials have two parameters: the Young's modulus $E$, and the Poisson ratio $\nu \in [-1, 0.5)$. The Young's modulus controls the stiffness of the material, and Poisson's ratio controls Poisson effects: when a material is compressed in one direction, it will tend to expand perpendicularly. As $\nu \to 0.5$, the material becomes incompressible, maintaining its volume.

Note that we typically have more finite-element nodes than surface vertices, since the volumetric mesh will contain additional nodes interior to the volume. To account for this, we arrange the node list so that the $i$th node corresponds to the $i$th vertex. This makes $x$ longer than $v$:

$$x = \begin{bmatrix} v \\ \tilde{x} \end{bmatrix}, \qquad (3.11)$$

where $\tilde{x}$ are the appended interior nodes. By default, this is also the order in which TetGen returns nodes when generating the volumetric mesh.

Polycube Minimization

To generate the overall polycube, we wish to simultaneously minimize the polycube surface error and volumetric distortion:

$$W_T(x) = W_l(x) + W_k(x). \qquad (3.12)$$

Differentiating with respect to node positions yields

$$\frac{\partial W_T}{\partial x} = \begin{bmatrix} L & 0 \\ 0 & 0 \end{bmatrix} x - \begin{bmatrix} b \\ 0 \end{bmatrix} + Kx - Kx_0 = (L + K)\,x - (b + Kx_0), \qquad (3.13)$$

where $L$ and $b$ are extended with zeroes to account for the interior nodes. Thus, for a linear model, the node positions are governed by the linear equation

$$[L + K]\,x = b + Kx_0. \qquad (3.14)$$

This system is sparse and symmetric, so it can be easily solved using any sparse linear solver.
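For illustration, the sparse symmetric solve in Equation (3.14) can be sketched with SciPy. The small matrices `K` and `Lpad` below are hypothetical stand-ins for the real FEM stiffness matrix and the zero-padded surface Laplacian; in practice both come from the mesh:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 6
# Hypothetical stand-ins: an SPD tridiagonal "stiffness" K and an identity
# playing the role of the zero-padded Laplacian term from Equation (3.7).
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
Lpad = sp.identity(n, format="csc")
x0 = np.linspace(0.0, 1.0, n)   # rest node positions
b = np.ones(n)                  # rotation-driven right-hand side

A = (Lpad + K).tocsc()          # (L + K): sparse and symmetric
x = spsolve(A, b + K @ x0)      # solve (L + K) x = b + K x0, Equation (3.14)
```

In the full method this solve is repeated inside the iteration that updates $b$ as the face rotations $R_f$ change.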
As a boundary condition, we keep one node at a fixed location to resolve translational invariance. As nodes move, the resulting shape becomes more polycube-like. In doing so, however, some faces might rotate such that their nearest coordinate axis changes. This affects $R_f$, which in turn modifies $b$. We therefore solve the system iteratively, updating $b$ until it converges.

Clearly, surface normals will only align with coordinate axes if the finite element model is sufficiently flexible (i.e. a small Young's modulus $E$). Conversely, if $E$ is too small, there will be little resistance to distortion. Similar to [100], we resolve this by adopting a weighting schedule. We begin with a value of $E = 1000/\sqrt[3]{V}$, where $V$ is the volume of the mesh. This treats the model as essentially rigid to start. Once the system converges, we reduce $E$ by half and repeat. We continue iterating and reducing $E$ until the overall polycube error no longer changes significantly. For approximating the polycube error, we evaluate only the surface error, $W_l$, since the distortion error depends strongly on the value of the Young's modulus.

Global Orientation

The "nearest axis" to a face is highly dependent on the initial orientation of the mesh. To remove this dependency, we should define a starting orientation based on surface-intrinsic properties. Livesu et al. [146] suggest using a principal component analysis (PCA) on mesh normals. However, we have found this can produce unnatural starting orientations when the singular values of the normal covariance matrix are not well separated. For example, a regular cube naturally defines an oriented coordinate system, but since all its singular values are identical, PCA on its mesh normals can produce any arbitrary set of axes. Thus, we adopt a simple iterative technique.
The starting orientation is defined as the one which minimizes the distance between all surface normals and their current nearest axes:

$$R_{\text{global}} = \arg\min_R \; \frac{1}{2}\sum_f A_f \left\| \hat{a}_f - R\,n_f \right\|^2. \qquad (3.15)$$

This is again Wahba's problem, and can be solved using a singular value decomposition. Once rotated, the nearest coordinate axis may have changed for some of the faces, so again we iterate until convergence is achieved. We have found that this typically requires only up to 5 iterations.

Corotated and Non-Linear Materials

The previous derivation relied on the material having a purely linear constitutive law. The advantage of this is that the system matrix is constant and can be prefactored. However, linear materials also exhibit artifacts under rotation, resulting in unnatural distortions.

Alternatively, we can use either a corotated linear or a non-linear material model. In the general case, the strain energy is given by

$$W_k(x) = \int_0^t \left(\frac{dx}{dt}\right)^{\!T} d \;\, dt,$$

where $d(x)$ is the non-linear internal force due to strain. Comparing to the dynamic system developed in Appendix B, Equation (B.41), and its quasi-static linearization in Equation (B.48), we note that our system is equivalent to adding an external force

$$f_l = b, \qquad \frac{\partial f_l}{\partial u} = -L.$$

We solve using the quasi-static solution method described in Appendix B.3.7. In the current work, however, we only present results for the linear material case.

Summary of Polycube Algorithm

A summary of the polycube deformation algorithm for linear materials is as follows:

1. Compute the global rotation, $R_{\text{global}}$.
2. Create an initial tetrahedral mesh.
3. Set the initial $E = 1000/\sqrt[3]{V}$.
4. Compute the matrices $L$ and $K$.
5. For each face, compute the optimal normal direction, $m_f$, and the corresponding rotation, $R_f$.
6. Solve for $x_{k+1}$, and compute $W_l(x_{k+1})$.

Figure 3.6: PolyMerge node distribution. The surface/volume of the polycube are intersected with a regular grid to distribute new nodes.
New surface nodes are shown in gold, and interior nodes in blue. These nodes are then mapped back to the original shape before tetrahedralization. ("Fertility", Utrecht University)

7. If $\|W_l(x_{k+1}) - W_l(x_k)\| / W_l(x_k) > 0.01$, go to step 5.
8. If $W_l(x_k) > W_l(x_0)/100$, set $E := E/2$ and go to step 4.

There are two free parameters in the system. The first is the fraction that determines the complexity of the polycube, $\beta \in [0, 1)$. In all examples in this chapter, we set $\beta = 0.5$ for a roughly equal balance between axis-alignment and normal consistency with neighbours. The second parameter is Poisson's ratio, $\nu \in [0, 0.5)$. For a trade-off between volume preservation and flexibility, we choose $\nu = 0.4$. Once we have an approximate polycube map for the shape, we can use it to map a regular grid of nodes to the original volume.

Node Distribution

To distribute nodes through the volume, we intersect the polycube with a regular grid of points according to the desired resolution of the final hex-dominant mesh (Figure 3.6). These points are then mapped back to the original space based on the inverse of the polycube map. Since we do not prevent self-collisions during polycube deformation, the distortion map may not be one-to-one: if two tetrahedra overlap in the deformed state, then the overlapping volume is mapped back to two different locations. In such a case, we duplicate the contained grid points. To accelerate detecting which elements contain a given point, we use an axis-aligned bounding box tree [253]. All new nodes are mapped back to the original space based on their barycentric coordinates within the polycube's initial tetrahedral volume.

We must also remesh the surface so that it, too, reflects the regular structure. Otherwise, the organization of all tets along the surface will depend solely on the triangulation of the original mesh.
Remeshing the surface involves two stages: adding new vertices aligned with the grid, followed by a vertex removal scheme to remove vertices that are not grid-aligned. Adding new vertices is accomplished by intersecting the polycube surface with each grid line and placing a new vertex on the mesh at the intersection points. When inserting a vertex on an existing triangular face, we naïvely split that face at the intersection point. To speed up the intersection procedure, we again use an axis-aligned bounding box hierarchy. In the second stage, vertices are removed, ideally leaving behind only the grid-aligned ones. This should create a surface mesh with a quad-like structure. Removing all original vertices, however, may result in a loss of features. Therefore, we transform the surface mesh back to the original space, and perform the vertex removal procedure only if the underlying surface geometry is not substantially affected. For each original vertex, we compute the Gaussian curvature and the surface area of the attached umbrella of faces. If both are less than a supplied threshold, the vertex is removed using the method of Schroeder et al. [217].

The modified surface mesh and new grid-mapped interior nodes are used to create a new tetrahedral mesh, again using TetGen. This new volumetric mesh, however, should have a more regular internal structure that follows the curvature of the original volume. It is to this intermediate mesh that we apply our novel recombination procedure to form a final hex-dominant mesh.

3.1.3.2 Tetrahedral Recombination

Both the method of Meshkat and Talmor [169] and that of Yamakawa and Shimada [271] involve starting at a particular location in a graph, and branching out to detect patterns that can be formed into valid well-conditioned hexahedra.
Meshkat and Talmor's algorithm starts at a tet element and branches toward neighbouring tets that share a common face to find one of six specific sub-graphs. Yamakawa and Shimada's algorithm starts at a node and branches out along edges until a certain connectivity criterion among eight nodes is met. Unfortunately, both of these methods can lead to redundant searches: in Meshkat and Talmor's algorithm, each valid sub-graph will be detected multiple times, once for each tet in that sub-graph; and in Yamakawa and Shimada's algorithm, the same group of eight nodes making up a hex will be found from multiple starting nodes. Both algorithms also involve undesired searches: many of the search paths will fail to find a valid hex. The authors present simplifying strategies to combat this, at the cost of missing potential recombination patterns. To improve upon these existing methods, we address the questions:

- How can we reduce or eliminate redundant search paths?
- How can we maximize the likelihood of finding a valid hex (i.e. avoid searching fruitless paths in the connectivity graph)?

The key to answering these is to examine possible tet-to-hex recombination patterns to identify unique features.

Figure 3.7: Tetrahedra-merging patterns: (a) 3-diag. (×2), (b) 4-diag. (×8), (c) 4-diag. (×4), (d) 5-diag. (×10), (e) 6-diag. (×2), (f) central (×1), (g) slivered (×12). The seven patterns for splitting a cube into tetrahedra. The multiplier indicates the number of unique rotated or flipped versions of the pattern that exist.

We present a new algorithm, heavily inspired by the two originals. It is not only extremely computationally efficient, but also easy to implement and visualize. This new algorithm allows us to search through meshes with hundreds of thousands of tetrahedral elements in seconds, identifying all potential groups of tets that can be merged, as identified by the original algorithms.
Unlike the previous works, we make no simplifying assumptions to reduce the number of search paths.

Hex Decompositions

There are a limited number of ways a hexahedron can be decomposed into tetrahedra. These have been previously enumerated by Bigdeli [18], and Meshkat and Talmor [169] provide a proof that there are 6 distinct patterns consisting of either 5 or 6 tetrahedra. If we allow for tetrahedral slivers, which are flat and hence poorly conditioned (see Figure 3.7g), then a hexahedron can be decomposed into up to 13 tets. To eliminate redundant and unnecessary searches, we examine these decompositions. To recombine a group of tets back into a single hex, the group must follow one of the same patterns.

Consider a single hexahedron with corners labelled A–H, as in Figure 3.7. We will refer to a line that cuts diagonally across the element, from A to G, as the through-diagonal. The key observation is the following.

Theorem 3.1. The through-diagonal in a hex-to-tet subdivision must be shared by either 3, 4, 5 or 6 tets within the hex.

Proof. There cannot be only one tet attached to the through-diagonal: if there were, that edge would have to lie on the surface of the cube, but we know that the diagonal cuts through the centre, so it is not on the surface.

If there are only two tets connected to the edge, then those two tets must share two faces, with the edge between them. This implies the two tets have four common nodes (the two along the shared edge, and one each to complete the two shared faces). Tets only have four nodes, which means they are the same tet.

Similarly, there cannot be seven tets that share the through-diagonal: there are only eight unique nodes in a hex, two are used by the through-diagonal, and neighbouring tets can share at most 3 nodes total (i.e. a face) before being considered the same tet.
Maximally packing tets in a cycle around one edge using the six remaining nodes, each tet sharing a face with its neighbour, leads to a maximum of six tets (Figure 3.7e). Thus, for any valid hex-to-tet subdivision, there can only be three, four, five or six tets that share the through-diagonal.

Assuming there is an edge passing through the diagonal of a hexahedron, we can construct the hex by first adding the 3, 4, 5, or 6 tets around the diagonal in a cycle, then searching for remaining tets that share exactly one face with the existing group. Each added tet will contribute one new node, so if the through-diagonal cycle consists of

- 3 tets (i.e. 5 nodes), we add 3 more tets,
- 4 tets, we add 2 more,
- 5 tets, we add 1 more,
- 6 tets, we already have a hex.

There are a finite number of ways tets can be added to the existing structure to complete the hexahedron. Enumerating these reveals that there are only five distinct patterns, shown in Figures 3.7a–e, plus all rotated and flipped versions of these. Note that in some cases, rotating the pattern about the diagonal by one element returns the same configuration (e.g. Figure 3.7a). These five unique patterns correspond directly to five of the six connectivity subgraphs identified by Meshkat and Talmor [169]. However, we can now visualize what these subgraphs correspond to, and can quickly identify them in a tetrahedral volume by simply counting the number of tets around a given edge.

If there is no edge through the diagonal of a hex, then there must be a tet occupying the centre (Figure 3.7f). This leaves four remaining tet-shaped corners, and defines the sixth and final subgraph identified in [169].

Allowing for slivers, it is possible to have two through-diagonals in a hex-to-tet subdivision. This results in a flat sliver dividing two wedges (Figure 3.7g).
There are three orientations the sliver can have in splitting the hex (4-choose-2 ways of choosing two faces to form the base of one of the two wedges), and two ways to divide each wedge, leading to a total of 12 possible subdivisions.

There cannot be three or more through-diagonal edges in a hexahedron. That would mean that one of the diagonals must pass through a sliver element, implying that the tets are overlapping.

In this way, we have identified all possible ways to split a hexahedron into five or six tetrahedra, plus one way to split it into seven using a sliver. The remaining possible subdivisions all involve slivers adjacent to one of the square faces of an existing hex decomposition. The maximum number of tets that can be merged to form a hex is therefore 13: the degenerate case of seven, plus one additional sliver attached to each face of the cube. This introduces connectivity patterns not considered in [169].

Pattern Search

After identifying these simple and unique patterns for dividing a hex into tets, the detection algorithm becomes relatively straightforward:

- For each edge in the tetrahedral mesh:
  - Count the number of tets around the edge, n.
  - Given n, check all k possible hex-to-tet divisions.
- For each tetrahedron in the mesh:
  - Check for the "central tet" pattern.
  - Check for slivers separating wedges.

For our implementation, we require two operations:

- findOppositeTet(tet, node): Given a particular tet, find the other tet that shares the face opposite the supplied node.
- findOppositeNode(tetA, tetB): Given a particular pair of tets that share a face, find the node in tetA that is opposite from tetB (i.e. does not belong to the common face). This can

Figure 3.8: Finding opposite tets and nodes. (a) findOppositeTet; (b) findOppositeNode.
Two operations are required for a given tet A when searching for tet-to-hex recombination patterns.

actually be accomplished using findOppositeTet, by checking which node satisfies tetB == findOppositeTet(tetA, node).

These operations, depicted in Figure 3.8, can be efficiently implemented by creating a data structure that stores 4 pointers for each tet: one for each node, pointing to the adjacent opposite tet (or null if it is on the surface). Counting tets around a given edge, finding tet elements that share a given face, and finding the next node to check when reconstructing a hex can then all be accomplished in O(1) time per step. To cycle around an edge, start with the two nodes that form the edge, then repeatedly call

    tetLoop[i] = findOppositeTet(tetLoop[i-1], nodeLoop[i-1])
    nodeLoop[i+1] = findOppositeNode(tetLoop[i], tetLoop[i-1])

until we are back where we started. Based on the number of tets in the loop, we then need to search through all candidate hexes for that particular pattern, again using findOppositeTet to add an adjacent tet to our recombination group, and findOppositeNode to determine the new node. As potential hexes are found, they are added to a priority queue, sorted in descending order by the minimum scaled Jacobian (Equation (3.3)).

Finally, we can account for the remaining sliver cases: after detecting a valid hex pattern, check for a sliver on each side of the six hex faces. If found, include it as an optional member of the group. We need to treat the sliver as optional, since there may be a valid hex on the other side of the face which may want to merge with the sliver first.

The search algorithm presented here requires only two passes through the mesh: once along edges to search for the through-diagonal patterns, and once over all tets to find the central-tet and sliver patterns.

Figure 3.9: Hexahedral face conformity. Adjacent tet-hex pairs are allowed to have non-conforming faces during the recombination process.
Adjacent hex-hex pairs, however, must meet conformity requirements.

By identifying a unique feature in each hex-to-tet subdivision (the through-diagonal cycle length, or the central/slivered tet), we have eliminated all redundant search paths. We are guaranteed to find every possible tet-to-hex recombination pattern identified by the original approaches of Meshkat and Talmor [169] or Yamakawa and Shimada [270], but do so in a systematic way. With this approach, we can also provide an explicit upper bound on the number of global patterns to search. In the worst case, we have 4 + 8 configurations to check for each edge, which would occur if every edge were shared by exactly four tets. We also have one configuration for the central-tet case, or 2 + 12 if we consider slivers. Therefore, the upper bound on possible tet-to-hex recombination patterns in the entire mesh is 22 × #edges + 23 × #tets. This bound guarantees that our detection algorithm has linear complexity in the number of edges and tets.

Given the global list of all possible tet-to-hex patterns, we employ the same greedy algorithm for creating the final hex-dominant mesh as Yamakawa and Shimada [271]. As potential hex elements are found, they are added to a priority queue, sorted in descending order by the minimum scaled Jacobian. The next potential hex in the queue is added only if two conditions are met:

1. All of its tets still exist in the mesh, and
2. Every quadrilateral face either shares a quadrilateral face with another hex, or is shared with two triangular tet faces (never with half of another quad face).

The second condition prevents the non-conformity pattern shown on the right in Figure 3.9. If either of these tests fails, the potential hex is discarded.

Note that our final mesh may still have non-conformal faces between tet and hex elements. This is a requirement of the tet-to-hex conversion, but it can be corrected with post-processing. For example, Owen et al. [187] add pyramid transition elements to restore conformability.
The simple approach we take is to insert a node in the middle of an offending hex, and split the element into a combination of pyramids and tets, depending on whether an adjacent face should remain a quadrilateral (pyramid) or two triangles (tets).

Table 3.2: Summary of performance statistics. Each model has two rows: a coarse resolution followed by a finer one. Hex fractions are given by element count and by volume; timings are in seconds (the polycube stage is shared between resolutions).

| Model | Input tets | Output hexes | Output tets | Hex % (elements) | Hex % (volume) | Avg. Jac. (tet) | Avg. Jac. (hex) | Polycube | Tetra. | Tet merge |
|---|---|---|---|---|---|---|---|---|---|---|
| Bunny | 47521 | 6225 | 8180 | 43.2% | 88.5% | 0.506 | 0.952 | 48.3 | 0.73 | 0.463 |
| | 366095 | 51151 | 46351 | 52.5% | 90.9% | 0.520 | 0.957 | - | 12.6 | 5.05 |
| Cow | 33627 | 4003 | 7691 | 33.5% | 85.3% | 0.489 | 0.945 | 53.2 | 0.498 | 0.243 |
| | 325732 | 35159 | 90963 | 27.9% | 82.2% | 0.470 | 0.955 | - | 20.4 | 2.64 |
| Dragon | 35124 | 3758 | 11191 | 25.1% | 80.9% | 0.481 | 0.983 | 68.5 | 0.688 | 0.342 |
| | 292378 | 38002 | 53180 | 41.7% | 85.3% | 0.527 | 0.939 | - | 9.56 | 3.53 |
| Fertility | 30373 | 3391 | 9263 | 26.8% | 77.1% | 0.504 | 0.911 | 48.4 | 0.610 | 0.258 |
| | 490988 | 55761 | 133869 | 29.4% | 80.8% | 0.476 | 0.926 | - | 23.57 | 6.68 |
| Kitten | 47001 | 6283 | 8033 | 43.9% | 85.3% | 0.549 | 0.931 | 58.8 | 0.91 | 0.756 |
| | 404237 | 55645 | 59056 | 48.5% | 87.0% | 0.545 | 0.933 | - | 12.6 | 4.93 |
| Rocker-arm | 30332 | 3570 | 7863 | 31.2% | 81.6% | 0.511 | 0.931 | 63.2 | 0.395 | 0.241 |
| | 314549 | 42738 | 46578 | 47.9% | 88.2% | 0.538 | 0.946 | - | 9.47 | 3.77 |
| Sphere | 72759 | 9926 | 9734 | 50.5% | 86.6% | 0.575 | 0.941 | 45.2 | 1.58 | 0.835 |
| | 630755 | 88517 | 76506 | 53.6% | 88.4% | 0.584 | 0.938 | - | 23.9 | 13.6 |
| Tongue | 83465 | 9879 | 19332 | 33.8% | 85.4% | 0.462 | 0.911 | 46.2 | 0.95 | 1.35 |
| | 333872 | 48745 | 62154 | 43.9% | 84.6% | 0.489 | 0.932 | - | 18.4 | 2.32 |
| Soft Palate | 32142 | 3614 | 10141 | 26.3% | 83.1% | 0.492 | 0.923 | 56.3 | 0.516 | 0.371 |
| | 257272 | 40391 | 59190 | 40.5% | 81.1% | 0.513 | 0.938 | - | 8.31 | 1.14 |

3.1.3.3 Results

We tested our algorithm on a variety of shapes typically used in the hex and hex-dominant meshing literature (Utrecht University's "Fertility" and "Kitten" models, Stanford's "Stanford Bunny" and "Dragon", INRIA's "Cow1" and "Rocker-arm"), on a simple sphere, as well as on surfaces of the tongue and the soft palate.
For the polycube deformation process, we use a "flatness" parameter of β = 0.5 and a Poisson's ratio of ν = 0.4 to allow for sufficient flexibility while still resisting substantial changes in volume. For generating both the initial finite-element mesh and the intermediate tetrahedral mesh used for the merging process, we used TetGen [219]. Two hex-dominant meshes were generated for each of the models: one at a relatively coarse resolution, and one at a finer resolution. However, the polycube deformation stage only needs to be applied once to generate output volumes at both resolutions. When performing the tet-to-hex merges, we continue through the priority queue, adding a hex element as long as its minimum scaled Jacobian is greater than a given threshold, 0.3, and the face-conformity condition is satisfied.

Results of the algorithm on the various geometries are presented in Table 3.2. Visual results are shown in Figure 3.10. All volumetric meshes were generated in under two minutes on a regular desktop machine (Intel Core i7). The most time-consuming step, by far, is the polycube deformation stage, which took between 45 and 70 seconds to converge. The tetrahedral-meshing process, including node distribution, is the second most time-consuming; it took up to 24 seconds for the fertility statue at a high resolution, resulting in nearly 500k tet elements. The fastest step is the tet-to-hex recombination procedure, which has traditionally been the bottleneck in existing works. Even when applied to a mesh of 631k elements (121k nodes), the entire recombination algorithm took only 13.6 seconds. This is two orders of magnitude faster than the times reported by [270], which had a recombination time of 35 seconds for a model consisting of only 7000 nodes.

Figure 3.10: Hex-dominant meshing results. Hex elements are shown in yellow, and tets in gray.
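For reference, one common definition of the scaled Jacobian of a tetrahedron (the determinant of the corner edge matrix, normalized by the edge lengths) can be sketched as follows; the thesis's Equation (3.3) may differ in corner handling, so this is an illustrative approximation only. A regular tetrahedron attains the maximum value of √2/2 ≈ 0.707:

```python
import numpy as np

def tet_scaled_jacobian(v0, v1, v2, v3):
    """Scaled Jacobian at corner v0: det of the three edge vectors
    divided by the product of their lengths (0 for a flat sliver)."""
    e = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
    lengths = np.linalg.norm(e, axis=0)
    return np.linalg.det(e) / np.prod(lengths)

# Regular tetrahedron with unit edges
v0 = np.array([0.0, 0.0, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.5, np.sqrt(3)/2, 0.0])
v3 = np.array([0.5, np.sqrt(3)/6, np.sqrt(6)/3])
```

A sliver (fourth vertex in the plane of the other three) yields a scaled Jacobian of zero, which is why the 0.3 acceptance threshold filters such elements out of hex candidates.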
Furthermore, we did not make any assumptions to reduce the search space.

Our algorithm produces meshes that are composed of over 80% hexes by volume on average. By number of elements, the fraction of hexes is significantly smaller. This is mainly due to the presence of many sliver elements introduced in the Delaunay triangulation stage. In terms of quality, the average scaled Jacobian of hexahedral elements is in the range of 0.911–0.983 for all meshes. The minimum scaled Jacobian for hex elements was thresholded at 0.3. For tet elements, the average scaled Jacobian was somewhat lower, between 0.470 and 0.584. Note that the scaled Jacobian for a tetrahedral element is bounded above by √2/2 ≈ 0.707.

There are a few things to note in Figure 3.10. First, for "Kitten", the remaining tetrahedra on the surface of the model mostly fall along the corners of the polycube shape. It is in these regions that the deformation is the most extreme. Had we constructed an all-hex mesh in the polycube space, elements along these seams would be highly distorted in the resulting mesh. Since our meshing is performed in the undeformed volume, we can better control quality by instead keeping this space filled with tetrahedra. Also note that the polycube for the "Rocker-arm" model is not a true polycube: it contains two blue wedge regions in the middle extending outward. For all-hex meshing, these regions would need to be corrected via post-processing [83, 100]. However, for our hex-dominant meshing, the polycube artifact does not interfere with the meshing process.

3.1.3.4 Limitations

Although we have fine control over hex quality, the quality of the tet elements depends strongly on the tetrahedralization algorithm. The constrained Delaunay method we currently employ introduces many tetrahedral slivers throughout the volume. Although our tet-merging algorithm attempts to account for these when searching for potential hex patterns, those which cannot be merged remain in the mesh. To correct for this, Baudouin et al.
[14] suggest applying the advancing front method [152], which is less prone to producing slivers. With more direct control over the triangulation algorithm, we may also choose to reject adding a node if it would produce a sliver. An alternative is to apply sliver removal – or exudation – algorithms, such as the vertex perturbation approach of Tournois et al. [251], the set of topological transformations of Klingner and Shewchuk [124], or the mesh refinement approach of Labelle [127]. Any slivers remaining after all attempts to algorithmically remove them can be corrected manually.

Once we have detected all sets of potential tet-to-hex merges, we currently proceed with a simple greedy algorithm, merging groups of tets to form hexes in sequential order of quality. This is the same approach taken by Yamakawa and Shimada [270], though it sometimes leaves stray tetrahedral elements between the hexes, evident inside the meshes produced in Figure 3.10. Unfortunately, finding the globally optimal solution is an NP-complete problem [169]. One possible approach is to use local combinatorial search techniques, as Meshkat and Talmor suggest: given an initial potential solution, search for nearby solutions which may be better.

3.1.3.5 Section Summary

We presented a simple algorithm for generating hex-dominant volumetric meshes. First, we deform a shape to fit its closest polycube, rotating face normals to a desired orientation (a function of local normals and the nearest coordinate axis), while at the same time regularizing with a finite-element-based stiffness term. In the deformed space, we distribute nodes in a regular grid pattern. These nodes are transformed back to the undeformed space, where they are used to create a new tetrahedral mesh. We then convert this intermediate tet mesh to a hex-dominant one by merging groups of tets together in priority order based on quality.
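The greedy, priority-ordered merging pass can be sketched as follows. This is a simplified sketch, not the thesis’ code: candidate hexes are given directly as (quality, tet-id group) pairs, and the face-conformity check described earlier is omitted.

```python
import heapq

def greedy_recombine(candidates, threshold=0.3):
    """Accept candidate hexes in decreasing order of quality, skipping any
    whose tets are already consumed or whose quality is below threshold.

    candidates: list of (quality, iterable of tet ids) potential hexes.
    Returns the accepted (quality, sorted tet ids) pairs."""
    heap = [(-q, sorted(tets)) for q, tets in candidates]
    heapq.heapify(heap)
    used, accepted = set(), []
    while heap:
        neg_q, tets = heapq.heappop(heap)
        if -neg_q < threshold:
            break  # every remaining candidate has lower quality
        if any(t in used for t in tets):
            continue  # shares a tet with an already-accepted hex
        used.update(tets)
        accepted.append((-neg_q, tets))
    return accepted
```

Because each tet is marked as consumed when first claimed, a lower-quality hex that overlaps an accepted one is simply skipped — which is exactly how stray tetrahedra end up left between the hexes.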
Since the actual mesh is generated in the undeformed space, we can easily control hex element conditioning. The advantage of this hex-dominant approach is that it allows us to create well-structured meshes that are still able to resolve fine features with tetrahedra if required.

There are two main contributions in this work: the finite-element-based method for deforming volumes to their nearest polycube approximation, and an efficient algorithm for detecting and converting a tetrahedral mesh to a hex-dominant one by merging tets into hexes. The tet-merging algorithm is simple, and can find all potential tet-to-hex groups in linear time. We can also bound the total number of configurations that need to be searched by 22 × (# edges) + 23 × (# tets). When the two components are combined, we can produce volumetric meshes that are composed of more than 80% hexes by volume. This can be done within minutes, generating meshes consisting of hundreds of thousands of elements.

Ideally, we would test how these volumetric meshes perform under the nearly-incompressible large-deformation conditions required by soft tissue simulation. However, currently the presence of tetrahedral slivers prevents this, since even a few poorly conditioned tetrahedral elements can cause highly unstable behaviour in non-linear materials. Therefore, in the next section, we focus on comparing systematically-generated or hand-crafted hexahedral and hex-dominant meshes to a variety of alternative discretization approaches. The results of the following comparisons will provide insights into the behaviour of hex-dominant meshes constructed using our PolyMerge approach, assuming a better handling of slivers when generating the intermediate mesh.

3.2 Alternative Discretizations

Rather than addressing the challenges in constructing a well-conditioned and well-suited volumetric mesh, an alternative is to avoid them by relaxing the initial requirements of the discretization process itself.
In this section we review several popular techniques, and compare them to the standard hex or hex-dominant approach under a set of tasks that demonstrate their advantages and disadvantages.

3.2.1 Quadratic Elements

The simplest way to avoid volumetric locking artifacts in incompressible materials is to adjust the discretization’s interpolation functions. We have already discussed how linear tetrahedral elements are susceptible to locking. However, quadratic tetrahedra – elements with quadratic interpolation functions – are not [156, 159, 235, 258]. These elements have additional nodes to control the curvature of the shape functions, providing them with greater flexibility (see Table 3.1). A quadratic tetrahedral mesh can be easily constructed from a linear one by adding a node at the midpoint of each edge in the original mesh, allowing us to exploit the much easier task of creating well-conditioned tetrahedral meshes. For non-linear problems, however, such as those involving large deformations, Puso and Solberg [193] warn that quadratic tetrahedral elements can introduce other numerical issues.

3.2.2 Nodal Pressure Formulations

Another alternative for avoiding locking artifacts is to adjust the pressure discretization using a node-based pressure formulation [24, 193, 256]. Incompressibility is enforced on Voronoi regions about each node, rather than for each element, as described in Appendix B.3.3.5. For full incompressibility, a single constraint is added per node, regardless of the element type, leaving 2N degrees of freedom free for deformation. This allows linear tetrahedral elements to overcome volumetric locking. However, this technique has been shown to introduce spurious modes of deformation [157, 159].

3.2.3 The Embedded FEM Technique

The main challenge with hexahedral or hex-dominant meshing is having the elements conform to complex surface geometries.
To avoid this challenge, we could relax the conformance constraint, and instead create a bounding hex mesh, with the target volume of interest embedded within it. Points along the target surface are bound to the finite element model using point-attachment constraints,

v_j = Σ_i φ_i(X_j) n_i,

where v_j is the jth point from the target, X_j is its location in material coordinates within the bounding model, and n_i is the ith node location.

In the area of biomechanics, this idea was introduced by Capell et al. [31] for skeletal-based animation. Since we are no longer constrained to having element boundaries follow the contours of the desired target geometry, this type of formulation makes it easy to adjust and refine the mesh resolution. A potential drawback, however, is that the method may fail to capture the true object’s topology: for example, if two tendrils are embedded in a single element, they will not be able to move independently. To account for this, Teran et al. [240] use duplicated overlapping elements, where components that are supposed to move independently are embedded in distinct elements. The finite element mesh is no longer manifold, but the simulated tissue’s topology is preserved.

Another difficulty with the embedding approach is representing the varied material properties of parts inside a single element, including void space where there is no material. Due to the nature of a finite element, all material inside will deform the same way, regardless of whether some parts are stiffer than others. To compensate for this, Nesme et al. [178] use “material-aware” shape functions in what they call composite elements. The shape functions are adjusted so that softer regions deform more than stiff regions as forces are applied to the nodes.

Note that both the topological and varied-material issues are reduced as the bounding volumetric mesh is refined. Elements containing only empty space can be removed from the model, leaving behind a voxel-like representation. With higher resolutions, the voxel-like surface will slowly converge to the true surface.

3.2.3.1 Implementation

To construct an embedded FEM model, the target’s surface is placed within a hex grid of elements, and any elements not containing any part of the original volume are removed. To determine if a hex element intersects any of the original volume, we perform a set of inside-outside tests using an oriented bounding box tree [82]. This leaves behind a volume resembling a lego-block construction. To this model, we apply two modifications: we ‘trim’ hex elements around the surface to obtain a tighter encapsulation, and we adjust the numerical integration by introducing a volume density function to help account for empty space in the model.

Figure 3.11: Hexahedral trimming patterns. The six patterns considered for removing nodes from a hexahedral element: (a) tet, (b) pyramid, (c) pyramid, (d) wedge, (e) pyramid-pyramid, (f) pyramid-tet-tet.

Element Trimming

In the voxelized representation, we may have a few elements containing very little mass. We can try to form a tighter-fitting surface by subdividing hex elements on the surface, and removing portions which are empty. For this we use a simple pattern-based procedure. We consider all cases of removing two, three or four nodes, and re-dividing the remaining volume into tet, pyramid and wedge elements, as in Figure 3.11. To test that the removed portion is empty, we use a Monte-Carlo approach: we sample points from within the region to be removed and check that they fall outside the target surface. If all tests pass, the pattern is accepted.

Similar to the tet-to-hex recombination algorithm from the previous section, trimming hex elements may introduce incompatible faces.
To restore compatibility, we split each offending hex into pyramids and tets by inserting a node at the center of the element. With this approach, we can remove much of the empty space from the model.

Volume Density

To account for the remaining empty space in elements, we define a volume density function

η(X) = { 1 if X ∈ Ω0; 0 otherwise }  ⟹  ∫_{Ω0} f(X) dΩ0 = ∫_{ΩB0} η(X) f(X) dΩB0,

where Ω0 is the target’s volume at rest, ΩB0 is the bounding volume at rest, and the identity holds for an arbitrary function f. This discontinuous volume density can introduce numerical artifacts around the surface, particularly if an element contains very little true volume. Thus, we use a smoothed density function by defining a volume density at the nodes and interpolating values using the finite-element shape functions,

η̃(X) = Σ_j φ_j(X) η̃_j0,

where η̃_j0 is an unknown volume density at node j. We would like η̃ to be as close as possible to η. To solve for the unknown densities, we use the same weak formulation as for the finite element model, insisting that for a set of test functions {ψ_i} we have

∫_{ΩB0} ψ_i (η − η̃) dΩB0 = ∫_{Ω0} ψ_i η dΩ0 − Σ_j [ ∫_{ΩB0} ψ_i φ_j dΩB0 ] η̃_j0 = η_i − Σ_j M_ij η̃_j0 = 0.

Taking the set of test functions to be ψ_i = φ_i as in the Galerkin approach (Appendix B.2.5), we have a system of N equations of the form M η̃ = η to solve for the N node densities. The terms η_i are integrated using Monte-Carlo integration, using an oriented bounding box tree [82] to accelerate testing whether a sampled point falls within Ω0. Once M_ij and η_i are computed, we solve the system for η̃_i using a sparse linear solver.
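To make the projection concrete, here is a small sketch on a single trilinear element occupying [0, 1]³. The single-element setting, the sample counts, and the dense solve are simplifying assumptions; the thesis assembles a sparse system over the whole bounding mesh.

```python
import numpy as np

CORNERS = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]

def trilinear(p):
    """Trilinear shape function values at a point in [0,1]^3."""
    x, y, z = p
    return np.array([(x if i else 1-x) * (y if j else 1-y) * (z if k else 1-z)
                     for i, j, k in CORNERS])

def nodal_volume_density(inside_target, n_samples=4000, seed=1):
    """Solve M eta_tilde = eta, with M_ij = integral of phi_i phi_j and
    eta_i = integral of phi_i * eta, both evaluated by Monte-Carlo over
    the bounding element."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_samples, 3))
    Phi = np.vstack([trilinear(x) for x in X])         # n_samples x 8
    M = Phi.T @ Phi / n_samples
    eta = Phi.T @ np.array([float(inside_target(x)) for x in X]) / n_samples
    return np.linalg.solve(M, eta)
```

A useful sanity check: if the element is entirely inside the target (η ≡ 1), the nodal densities come out exactly 1, since the shape functions form a partition of unity; for a half-filled element, nodes on the filled side receive higher densities than those on the empty side.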
This allows us to approximate volume integrals as

∫_{Ω0} f(X) dΩ0 ≈ ∫_{ΩB0} η̃(X) f(X) dΩB0.

We use this volume density to modulate the material’s mass and stiffness computations when building the finite-element system matrices.

Forces and Constraints

Since the target surface no longer corresponds to the FEM surface, if we wish to apply a force or constraint directly to the target, we need a method to project it onto the underlying FEM nodes. To apply a force at a point x within a model, we divide the force amongst the nodes as in [178]. For conservation of energy, we require

δW = δu(x)^T f(x) − Σ_i δu_i^T f_i = Σ_i δu_i^T [φ_i(X) f(x) − f_i] = 0  ⟹  f_i = φ_i(X) f(x),   (3.16)

where δu is an incremental displacement, u_i is the displacement of the ith finite element node, f_i is the force applied to the ith node, and X is the material-coordinate location of x. The material location X for a point x on the target surface is simply that point’s location at rest, which remains fixed. We store this information for every vertex on the target surface.

Bilateral and unilateral constraints are similarly projected using the finite element shape functions. To see this, consider a constraint on the target surface satisfying g(x) = 0. Using the finite-element interpolation for x, and differentiating the constraint in time to obtain the constraint matrix on velocities, we have

g(x) = g( Σ_i φ_i(X) n_i )  ⟹  dg/dt = Σ_i [ ∂g(x)/∂x φ_i(X) ] v_i = Σ_i G_i v_i = G v.   (3.17)

Thus, the constraint is simply distributed across the underlying nodes, modulated by their shape functions.

3.2.4 The Element-Free Galerkin Technique

The last option for addressing the challenges in constructing a volumetric mesh of a target volume is to avoid meshing altogether. In a mesh-free approach, an alternative discretization is employed, typically by distributing control points (i.e.
nodes) in an ad-hoc fashion, and defining a set of radial-based interpolation functions between them.

Mesh-free methods were originally designed to simulate fluid flow and structures that exhibit changes in topology, such as those that occur in explosions or crack growth. For details of the many existing mesh-free methods and their applications, see [75, 145]. In our applications, we are mainly interested in the use of mesh-free methods for elastic/viscoelastic materials that maintain their topology.

The focus of existing work on mesh-free methods for simulating biological tissue is real-time simulation for graphics and surgical simulation, where accuracy is sacrificed for computational speed. De et al. [52] developed a point-collocation method for rapid simulation of tissue deformations when interacting with a rigid surgical tool. The original method was only applicable to linear materials, but it was later extended to non-linear ones by Lim and De [142]. In these point-collocation methods, material properties are lumped at the nodes and the system of partial differential equations is solved at these locations only. Horton et al. [97] created a mesh-free method based on the Petrov-Galerkin finite element formulation for application in surgical simulation. By simplifying the integration scheme, they managed to show a 3× speed-up when compared to traditional FEM using tetrahedral elements. For simulating muscle tissue, Yang et al. [272] designed a point-collocation method that included a Blemker-like muscle material model (see Appendix B.2.3.4) to allow muscles to contract.
By simulating a series of actions, they created a library of deformed shapes which could later be referenced and interpolated to create real-time animations.

In this work, we will continue with a Galerkin formulation consistent with our finite-element implementation, as described in Appendix B.2.5, and use a mesh-free approach for defining the interpolation functions {φ_i}. The result is the Element-Free Galerkin method [16], which allows us to easily incorporate this mesh-free technique within our hybrid simulation framework.

3.2.4.1 Implementation

To create a mesh-free model based on the Element-Free Galerkin technique, we need to: distribute nodes throughout the volume; define the shape functions for interpolation throughout the space; develop an appropriate numerical integration scheme; and allow the application of forces and constraints.

Node Distribution

It is generally considered good practice to distribute nodes as uniformly as possible, except in cases where additional resolution is required in specific regions. There are several common methods to accomplish this: automatic tet-mesh generation methods; sphere-packing and advancing front methods [137, 267]; octree-based methods [123]; and centroidal Voronoi tessellations [59].

We use a method based on Martin et al. [164]. First, we fill the domain with a regular pattern of densely packed points in a face-centered cubic lattice, keeping only those which fall within the target volume. To accelerate inside-outside tests we use an oriented bounding box tree [82]. We call this set of points the background lattice. We then apply the fast marching farthest-point sampling technique [170] to select a desired number of nodes. In farthest-point sampling, the next point selected is the one which is furthest from all other chosen points. In this way, we generate an approximately uniform set of nodes.
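The farthest-point selection step can be sketched as below. This is a brute-force O(k·n) sketch over an illustrative lattice; the thesis uses the fast-marching variant [170] for efficiency.

```python
import math

def farthest_point_sample(lattice, k):
    """Greedy farthest-point sampling over the background lattice:
    repeatedly pick the lattice point farthest from everything chosen
    so far. Returns the indices of the k chosen points."""
    chosen = [0]  # seed with an arbitrary first lattice point
    d = [math.dist(p, lattice[0]) for p in lattice]
    while len(chosen) < k:
        i = max(range(len(lattice)), key=d.__getitem__)
        chosen.append(i)
        d = [min(dj, math.dist(p, lattice[i])) for dj, p in zip(d, lattice)]
    return chosen
```

On a simple 1-D lattice of points 0 through 10, seeding at 0 picks the far end (10) next, then the midpoint (5) — illustrating how the selection spreads points out approximately uniformly.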
To further equalize the node distribution, we apply Lloyd relaxation: cluster the densely packed points about their nearest node, and reposition the nodes to the centers of their clusters. This is repeated iteratively until convergence is achieved.

Once a uniform set of background nodes is added, new nodes can be inserted in any regions of interest, such as areas of varying tissue stiffness, or regions expected to undergo large deformations. The next step is to define shape functions for interpolating between the nodes.

Interpolation Functions

Currently, the most common method for constructing shape functions is the Moving Least Squares (MLS) approach [16, 75]. The displacement u is approximated locally about some point x̄ by a polynomial:

u(x) ≈ u_p(x) = p(x)^T a(x),

where p is a vector of polynomials that form a complete basis for the space we wish to be able to reproduce (e.g. p = [1, x, y, z] for piece-wise linear), and a is a set of coefficients. The goal is to determine a by minimizing the objective function

J(a, x) = Σ_{i=1}^N w_i(x − x_i) [ p(x_i)^T a(x) − u(x_i) ]²,   (3.18)

where x_i is the location of node i, and w_i is a kernel function that determines the radius of influence of node i. This kernel function should be smooth and have compact support. A typical choice is the spline function

w_i(q) = { (1 − q²)^k for −1 ≤ q ≤ 1; 0 otherwise },  with q = ‖x − x_i‖ / r_i,   (3.19)

where r_i is the radius of support of the node, and k − 1 is the desired continuity of both the kernel function and shape function (i.e. k = 2 for linear continuity) [75]. Minimizing (3.18) to find the optimal a and substituting this back into the polynomial approximation yields

u_p(x) = Σ_{i=1}^N p(x)^T [ Σ_{j=1}^N w(x − x_j) p(x_j) p(x_j)^T ]⁻¹ w(x − x_i) p(x_i) u_i = Σ_{i=1}^N φ_i(x) u_i,   (3.20)

where φ_i is the computed shape function associated with node i. Note that shape functions constructed this way do not possess the Kronecker delta property (i.e. φ_i(x_i) ≠ 1).
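A minimal 1-D sketch of this construction is shown below, reduced from the 3-D version in the text; the node spacing and support radius are illustrative.

```python
import numpy as np

def mls_shape_functions(x, nodes, r):
    """MLS shape functions phi_i(x) = p(x)^T A^{-1} w_i p(x_i) with a
    linear basis p = [1, x] and the spline kernel of Eq. (3.19), k = 2."""
    p = lambda s: np.array([1.0, s])
    q = np.abs(x - nodes) / r
    w = np.where(q <= 1.0, (1.0 - q**2)**2, 0.0)       # compact support
    A = sum(wi * np.outer(p(xi), p(xi)) for wi, xi in zip(w, nodes))
    return np.array([wi * p(x) @ np.linalg.solve(A, p(xi))
                     for wi, xi in zip(w, nodes)])
```

Because the linear basis is complete, the resulting shape functions form a partition of unity and reproduce linear fields exactly, which gives a simple correctness check; the Kronecker delta property, however, does not hold.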
This means that the displacement field has u(x_i) ≠ u_i, although they are close in a least-squares sense. For this reason, the set {u_i} are often referred to as false displacements.

Computation of the shape functions requires a matrix inversion at every point where we wish to evaluate them. This includes the node locations, as well as any points of numerical integration. Due to the small regions of support for each node, the matrices are typically small (e.g. 4 × 4 for a linear basis), and these shape functions only need to be computed once in a preprocessing stage, so the added computational complexity is not prohibitive.

For the shape functions to be uniquely defined by the MLS procedure, every point within the volume must lie in the support of at least four non-coplanar nodes. We achieve this by computing a separate radius of influence for each node in the following way:

• Initialize all node radii {r_i} to zero
• For every point p in a densely packed grid
  – Determine the k-nearest non-coplanar nodes {n_j^p}, j = 1, …, k
  – Extend the radius r_j about each node n_j to encapsulate p

For the densely packed grid, we use the same face-centered cubic background lattice from the node selection process. For finding the k-nearest nodes, we use a k-d tree, continuing selection until we have at least 4 non-coplanar nodes.

Once shape functions are determined, the element-free Galerkin procedure continues exactly like the standard Galerkin FEM, as described in Appendix B.2.5, where a weak form of the differential equations is constructed using the shape functions as test functions.

Numerical Integration

Given a set of nodes and interpolation functions, we need to evaluate a set of integrals that describe motion throughout the continuum, given in Equations (B.33a–d). Since the shape functions no longer have a known polynomial form, and we do not have nicely shaped regions of integration, it is difficult to derive a set of Gaussian quadrature rules in the domain.
Common techniques include distributing quadrature points based on a background mesh [58], and integrating about each node separately using Gaussian quadrature over spheres or intersections of spheres [50]. In De and Bathe [51], it was shown that rather than true Gaussian quadrature rules, more regularly-spaced integration points can often lead to better numerical results. This is the approach we take.

Since we are often integrating over irregular domains, we distribute a set of integration points uniformly over the volume, each with approximately equal weighting. For the number of integration points, a typical rule is to use 3× that which would be used for linear FEM with the same number of degrees of freedom [145]. In an infinite hexahedral grid with linear Gaussian quadrature, the ratio of integration points to nodes is 8 : 1, so this suggests using approximately 24 : 1 for the mesh-free approach. The increased number of quadrature points is required since the MLS shape functions are less smooth than those in the finite element method.

To select our integration points, we use the same method as for selecting nodes: sample from the densely packed background lattice using farthest-point sampling, and apply Lloyd relaxation to obtain an approximately uniform distribution. To determine the quadrature weights, we cluster the background lattice about each integration point and set

w_i = vol(Ω0) B_i / Σ_{j=1}^M B_j,

where B_i is the number of background lattice points assigned to the ith cluster. In the case where each cluster contains the same number of points, each integration point occupies the same amount of volume, leading to an equal weighting.
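The clustering step for the weights can be sketched as below; the brute-force nearest-point assignment and the simple 1-D points stand in for the face-centered cubic lattice and its acceleration structures.

```python
import math

def quadrature_weights(integration_pts, lattice, volume):
    """w_i = vol(Omega0) * B_i / sum_j B_j, where B_i is the number of
    background lattice points whose nearest integration point is i."""
    counts = [0] * len(integration_pts)
    for p in lattice:
        i = min(range(len(integration_pts)),
                key=lambda j: math.dist(integration_pts[j], p))
        counts[i] += 1
    total = sum(counts)
    return [volume * c / total for c in counts]
```

By construction the weights sum to the total volume, and an integration point whose Voronoi cluster captures twice as many lattice points receives twice the weight.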
We then proceed as usual, approximating an integral over some region Ω_e as

∫_{Ω_e} f(x) dΩ_e ≈ Σ_{i ∈ Ω_e} w_i f(x_i).

This allows us to compute the mass, force, and stiffness terms in Equation (B.34).

Forces and Constraints

The lack of the Kronecker delta property of the MLS interpolation functions has implications when applying forces or constraints to nodes: the force or constraint must be distributed to the set of dependent nodes at the “true” location based on the interpolation functions. Since we have the relations

u(x_j) = Σ_i φ_i(X_j) u_i,  and  v(x_j) = Σ_i φ_i(X_j) v_i,

with u_i and v_i the false displacement and velocity of node i, this is identical to the case of applying a force or constraint within a finite element. Thus, we use Equations (3.16) and (3.17) to project the force and constraint to each dependent node, just as in the embedded FEM process.

To enforce incompressibility, since there are no elements in our mesh-free implementation, we use the node-based approach as described in Appendix B.3.3.5. We assign the contribution of each integration point to its nearest node in rest coordinates, and estimate a single averaged pressure per node to avoid volumetric locking.

3.2.5 Comparisons

We compare models using quadratic tetrahedra, the embedded FEM technique, and the element-free Galerkin technique to standard linear finite-element models under a set of circumstances that highlight the advantages and disadvantages of each approach. We consider three examples: a cylindrical beam falling under gravity, a simplified bipennate muscle with thin tendinous structures, and a muscle model of the tongue during a protrusion task.

Figure 3.12: Cylindrical beam. Several models of an incompressible beam are simulated under gravity, with one side fixed.

3.2.5.1 A Cylindrical Beam

In our first task, we aim to analyze accuracy and rate of convergence by simulating an incompressible horizontal cylindrical beam under gravity, with one end fixed in space.
We examine the following models:

Wedge: a cylinder made up entirely of approximately uniform wedge elements.
Hex: an all-hex cylinder generated by mapping a uniform hexahedral beam to a cylindrical surface using the conformal map described in Roşca [207].
Tet: a cylinder made of approximately uniform tetrahedral elements, with nodal incompressibility constraints to avoid volumetric locking.
Q-Tet: a cylinder made of approximately uniform quadratic tet elements.
Embed: an embedded FEM model with trimmed elements and a volume density function to account for the empty space.
MFree: a mesh-free cylinder with MLS shape functions.

These models are depicted in Figure 3.12. The cylinders are 10 cm long, with a radius of 1 cm. We use a Mooney-Rivlin material model (Appendix B.2.3.3), with µ10 = µ20 = 10 kPa and κ = 1 MPa. To measure the accuracy of the simulation, we track and compare the position of the center node of the moving circular face. Both the dynamic and static responses are shown in Figure 3.13.

Figure 3.13: Convergence comparisons. The dynamic and static convergence is depicted for all methods. At N ≈ 13k nodes, the static solutions for all discretizations are within 1 mm of each other. The quadratic-tetrahedral model has the fastest convergence rate in terms of number of nodes, while the regular tetrahedral model has the slowest. Static analysis of the mesh-free method was limited to 17k nodes due to memory limitations.

All models are shown to converge to approximately the same final results, both for dynamic simulations and static simulations. After 13k nodes, all results are within one millimeter of each other. The quadratic tetrahedral model was found to converge the fastest with respect to the number of nodes in the model, followed by the embedded FEM model, then the hexahedral, wedge, mesh-free, and finally the linear tet model. Note that the maximum number of nodes used for the mesh-free method is 17k. This is due to memory limitations, which will be discussed in Section 3.2.5.4.

Figure 3.14: Bipennate muscle. Several low-resolution models of an incompressible bipennate muscle attached to two fixed plates are simulated under isometric muscle contraction. Red lines indicate the direction of the muscle fibres within the elements.

3.2.5.2 A Bipennate Muscle

In our second task, we examine force outputs with a simplified bipennate muscle. This bipennate example is useful to test the effects of muscle activations and varying stiffness parameters, particularly for thin tendinous regions. We examine the following models:

Hex: an all-hex model with separate muscle and tendon elements.
Tet: an all-tet model with separate muscle and tendon elements.
Q-Tet: an all-quadratic-tet model with separate muscle and tendon elements.
Embed: a bipennate muscle embedded within a hex grid, with outer-most elements trimmed to remove empty space. Two separate volume density functions are defined within the elements: one for muscle, and one for tendon, allowing us to modulate the numerical integration to account for the two materials.
MFree: a mesh-free bipennate muscle with MLS shape functions, and additional nodes distributed within the tendon to better localize its stiffness.

The bipennate muscle is designed to have a muscle volume of 25 cm³, divided into two compartments separated by thin sheets of tendon for transmitting the force between two plates (Figure 3.14). The fibres act at a pennation angle of α = 45°. We model the muscle tissue as a Mooney-Rivlin material with the fibre-activation term from the Blemker model [22] (Appendix B.2.3.4).
For the Mooney-Rivlin parameters, we take µ10 = µ20 = 10 kPa and κ = 1 MPa. For the activation-dependent muscle fibre stress, we use a specific tension of σM0 = 200 kPa, and an optimal fibre stretch of λofl = 1. Note that these values are not necessarily representative of any particular muscle, but do fall within typical ranges found in the literature [22, 183, 205, 208]. For the tendon portion, we follow Blemker et al. [22] and scale the stiffness by a factor of 100. According to the muscle’s physiological cross-sectional area (PCSA), we should expect a maximum isometric contraction force of

Fmax = σM0 × PCSA × cos(α) = 500 N

acting between the two plates at full activation.

Figure 3.15: Bipennate muscle forces. Left: assuming tendon stiffness scaling of 200×, maximum isometric contraction force vs node count. Middle: muscle force increases with muscle activation (N ≈ 20k). Right: by increasing the tendon stiffness scale factor, we reproduce the predicted maximum isometric contraction force of 500 N.

We compare the generated contraction force with respect to the number of nodes, muscle activation, and the stiffness scale factor for the tendon material. Force results are shown in Figure 3.15. According to simplified point-to-point models (e.g. [93, 275]), force is expected to increase linearly with activation. However, although we find that force does increase with activation, we note that some of the contraction energy is converted into elastic potential energy as the muscle deforms. This leads to a decrease in net force transmission. Note also that when taking the tendon to be 100× stiffer than the muscle belly, we do not reproduce the maximum isometric contraction force of 500 N predicted by PCSA.
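The F_max estimate above is purely geometric, so it is easy to evaluate in isolation. A small sketch follows; the PCSA value used in the test is only illustrative, since the text does not state it directly.

```python
import math

def pcsa_max_force(specific_tension, pcsa, pennation_deg):
    """Fmax = sigma_M0 * PCSA * cos(alpha): maximum isometric force
    predicted from the physiological cross-sectional area."""
    return specific_tension * pcsa * math.cos(math.radians(pennation_deg))
```

With σM0 = 200 kPa, a pennation angle of 45° reduces the transmitted force by a factor of cos 45° ≈ 0.707 relative to an unpennated fibre bundle of the same PCSA, which is why the pennation term appears in the 500 N prediction.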
Even with this high stiffness, the tendon does not transmit all the force, instead absorbing some of the energy in its deformation. If we increase the stiffness of the tendon to the point where it becomes essentially rigid, we do recover forces near the predicted 500 N.

In real physiological conditions, the tendon does not typically behave rigidly – in fact, its deformation can play an important role in the transmission of muscle forces, particularly for long tendons [249, 275]. However, the maximum force estimate from PCSA is derived using a geometric argument, which inherently assumes a rigid tendon and constant cross-sectional area. It should therefore only be considered an approximation when estimating maximal forces. For short tendinous components such as the sheets of aponeuroses found in the masseter and other muscles of the jaw, the change in functional tendon length has been reported to be minimal (i.e. less than 4%), in which case a rigid tendon assumption may be appropriate [188, 225].

Figure 3.16: Embedded bipennate models. At N < 5000, the left-most and right-most elements consist mostly of muscle tissue, leading to an under-estimated stiffness for transmitting forces. At N > 5000, the left-most and right-most elements consist mostly of tendon, leading to an over-estimated stiffness for transmitting forces. At N = 27861, the tendon tissue is well-isolated.

The tetrahedral, hexahedral, and quadratic tet models all quickly converge to their maximum isometric force output by approximately 3000 nodes. Even at low resolutions, however, these models are all within 1 N of their final predicted force. The mesh-free and embedded force outputs require a higher resolution in order to better isolate the stiffness of the thin tendon sheets.
We can see in the embedded model that at fewer than 5000 nodes the outer-most elements consist mostly of muscle tissue, and above 5000 nodes they consist mostly of tendon, as depicted in Figure 3.16. This causes a ‘stiffness leaking’, which leads to the jump in simulated maximum contraction force. As the number of nodes increases further, the embedded model is better able to isolate the contribution of the stiff tendon, eventually converging to the all-hex model’s result.

To better isolate the high tendon stiffness in the embedded models at lower resolutions, we present an alternative model:

Coupled: separate tendon and muscle tissue models, coupled together with constraints.

By generating completely separate models of the muscle and tendon tissue, we can tailor the resolution of each, allowing us to better resolve the thin tendon sheets without affecting the resolution of the muscle tissue. However, the action of the tendon and muscle must be coupled, since they are physically attached.

Figure 3.17: Coupled bipennate model. The muscle and tendon are meshed separately, and the models are coupled together with a set of attachment constraints (green spheres). This allows better isolation of the tendon stiffness contribution for force transmission. All models have similar node counts (Hex N = 1215, Embed N = 1241, Coupled N = 1295).

We couple the motions using a set of linear constraints of the form

g(X_T, X_M) = [ Σ_i φ_i(X_T) n_i ] − [ Σ_j φ_j(X_M) n_j ] = 0,

where X_T is the material-coordinate location of the constraint within the tendon, X_M is the corresponding material-coordinate location of the constraint within the muscle, i sums over all nodes in the tendon element, and j over nodes in the corresponding muscle element.
This constraint enforces that a particular common spatial location, contained within both a muscle element and a tendon element, remains coincident. For adequate coupling without over-constraining the system, we choose to constrain the centroid of one tendon element per muscle element. The model and results are shown in Figure 3.17. The added constraints do limit some of the muscle contraction capability, so we no longer recover the maximum isometric force of 500 N as tendon stiffness is increased. However, we have recovered most of the force when compared to the low-resolution embedded model alone. By separating out the tendon this way, we are able to use a low-resolution volume mesh for the muscle, and still produce contraction forces similar to the fully-hex model at the same resolution.

3.2.5.3 Tongue Protrusion

In our final set of simulations, we examine a biomechanical modelling task: protrusion of the tongue. The original finite element tongue model was developed by Buchaillard et al. [29] and modified by Stavness et al. [228] (Figure 3.18). It consists of 946 nodes, 740 hexahedral elements, and has 11 separated symmetric muscle groups that can be activated independently. To model the tissue material, we use a Mooney-Rivlin model as a base, with the added activation-dependent Blemker model along the direction of the fibres.

Figure 3.18: Original tongue model. The hexahedral finite element model (left) has 11 muscle groups defined (middle), which are used to determine muscle fibre directions within the tongue elements (right). Material parameters: ρ = 1040 kg/m³, µ10 = 1037 Pa, µ20 = 486 Pa, σM0 = 300 kPa, λofl = 1.0.

For the tongue protrusion task, we place a marker on the tip of the tongue, define a protrusion target point moving forwards, and determine the muscle activations required for the target to be tracked. To compute the necessary muscle activations, we use the forward-dynamics tracking method of Stavness et al. [229].
In our simulations, we attempt to protrude the tongue 1.5 cm forwards within one second, then return it to the resting position. We examine five models:

Hex: the original all-hex model.

Tet: an all-tet model generated by triangulating the original surface mesh and using TetGen to populate the volume with tets.

Q-Tet: an all-quadratic-tet model created by reducing the original surface mesh using Quadric Edge Collapse [96], and applying TetGen to generate the tets.

Embed: the tongue geometry embedded within a hex grid, with the outer-most elements trimmed to remove empty space, and a volume density function to modulate integration.

MFree: a mesh-free tongue with MLS shape functions.

These are depicted in Figure 3.19.

Figure 3.19: Tongue models. Several muscle models of the tongue are generated for simulating a protrusion task, where the tip of the tongue is extended forwards up to 1.5 cm.

To test the impact of resolution, we also create higher-resolution models by subdividing the hex and tet models, by using the original surface to generate the quadratic tet model, and by doubling the resolution in each dimension for the embedded and mesh-free models. We examine the tracking error as well as the computed muscle activations for all models. The largest contributing muscle for the task is the intrinsic transverse muscle, so we report only this activation, though the other muscles exhibit similar patterns.

From the results shown in Figure 3.20, we find that the tetrahedral model exhibits the most error at low resolutions, and the quadratic tetrahedral model the least. The quadratic tet model is also the one with error closest to its higher-resolution counterpart, implying that it converges the fastest with respect to the number of nodes. At the higher resolution, all discretizations lead to approximately the same predicted muscle activations, as well as tracking error.
The 2.5 mm error at one second is mainly due to the tongue model's inability to extend outwards by a full 1.5 cm, as shown on the right of the figure.

Figure 3.20: Tongue protrusion results. A target for the tongue tip is extended outwards from 0 cm at t = 0 to 1.5 cm at t = 1. Tracking error, as well as activation of the transverse muscle, is shown for the various models at two resolutions (low, N ≈ 1000; high, N ≈ 8k). Right: the original tongue at rest and protruded, with the tongue tip (magenta) and target (cyan) highlighted.

3.2.5.4 Other Considerations

Aside from accuracy and rate of convergence, there are other considerations that may play an important role in choosing which type of discretization is appropriate for a given task.

Mesh Construction

We have already discussed how hexahedral meshing of arbitrary volumes poses a major challenge, and have presented an automated hex-dominant approach in Section 3.1.3 in an attempt to address this. But tetrahedral meshing may also present some issues, and may require preprocessing of the data. In automated tetrahedral meshing approaches such as those offered by TetGen, the generated volumetric mesh is highly dependent on the resolution and quality of the original triangulated surface. The higher the surface resolution, the higher the volume resolution will need to be in order to produce well-conditioned tetrahedra. Also, if the original surface contains elongated triangles, then the volume will contain elongated tets.
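To make 'elongated' concrete, element shape can be quantified with a normalized quality measure such as q = 6√2 V / L_max³, which equals 1 for a regular tetrahedron and approaches 0 for slivers. A small numpy sketch (this particular measure and the names are illustrative; it is not necessarily the measure TetGen uses internally):

```python
import numpy as np

def tet_quality(p0, p1, p2, p3):
    # normalized shape quality q = 6*sqrt(2)*V / L_max^3:
    # 1 for a regular tetrahedron, -> 0 for flat or elongated elements
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    vol = abs(np.dot(p1 - p0, np.cross(p2 - p0, p3 - p0))) / 6.0
    edges = [p1-p0, p2-p0, p3-p0, p2-p1, p3-p1, p3-p2]
    lmax = max(np.linalg.norm(e) for e in edges)
    return 6.0 * np.sqrt(2.0) * vol / lmax**3

# regular tetrahedron (all edges of length sqrt(2))
reg = [(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(round(tet_quality(*reg), 3))      # 1.0

# a nearly flat 'sliver' tet, as produced by an elongated surface triangle
sliver = [(0, 0, 0), (1, 0, 0), (0.5, 0.02, 0), (0.5, 0.01, 0.02)]
print(round(tet_quality(*sliver), 3))   # 0.001 (badly shaped)
```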
Thus, it may be important to modify the original triangulated surface resolution and quality prior to using such automated algorithms. With tools such as MeshLab [36] and Blender [23], this surface preprocessing stage is not prohibitive.

The advantage of the embedded and mesh-free approaches is that the generated discretization is entirely disconnected from the quality of the surface mesh, leading to a fully-automated construction.

Variable Spatial Resolution

We have seen with the bipennate muscle example that it may be desirable to locally adjust the resolution of a discretization to isolate thin structures within the volume, or to provide higher accuracy in regions of importance.

In hexahedral and tetrahedral meshing, this variable resolution is often accomplished by dividing the original volume into sub-volumes. Lobos and González [150] present a hex-dominant meshing approach that allows for variable mesh resolution, as does the method in TetGen by Si [220]. With the mesh-free approach, additional nodes can simply be inserted in areas of interest in an ad hoc fashion, and the MLS shape functions will automatically adjust. For the embedding approach, spatially adjusting mesh resolution presents more of a challenge. Select hexahedral elements can be subdivided in regions of interest to locally refine the resolution; however, this will either introduce T-junctions (where a node from one element falls in the middle of an edge or face of another), or will require a set of transition elements to restore face compatibility [113, 150].

Time and Memory Complexity

Both the computational complexity and memory requirements of a discretization are heavily dependent on node connectivity, since this determines the number of terms that need to be computed when integrating the mass and stiffness matrices, as well as the sparsity structure of the system's solve matrix.
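The connectivity-to-sparsity relationship is easy to illustrate: two nodes produce a non-zero stiffness block exactly when they share an element (or, for mesh-free methods, when their supports overlap). A toy numpy sketch, using an illustrative two-hex 'mesh' of our own making:

```python
import numpy as np
from itertools import combinations

def stiffness_sparsity(num_nodes, elements):
    # two nodes couple in the stiffness matrix iff they share an element;
    # returns the number of non-zeros (nnz) and the fill density
    pattern = np.zeros((num_nodes, num_nodes), dtype=bool)
    for elem in elements:
        for a, b in combinations(elem, 2):
            pattern[a, b] = pattern[b, a] = True
    np.fill_diagonal(pattern, True)
    nnz = int(pattern.sum())
    return nnz, nnz / num_nodes**2

# 2x1x1 block: two hexahedra sharing the face made of nodes 4..7
hexes = [[0, 1, 2, 3, 4, 5, 6, 7], [4, 5, 6, 7, 8, 9, 10, 11]]
nnz, density = stiffness_sparsity(12, hexes)
print(nnz, round(density, 2))   # 112 0.78
```

Scaling this pattern-counting up to a full mesh is what produces the nnz percentages reported in Figure 3.21; mesh-free supports that overlap many neighbours fill in far more entries than compact element-based connectivity.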
In Figure 3.21, we plot the sparsity structure of the stiffness matrices for the cylinder models depicted in Figure 3.12. In this case, the hex models had the smallest connectivity, followed by the wedge, quadratic tet, tet, and mesh-free models. The high connectivity for mesh-free models is due to the large number of sphere-sphere intersections that arise when distributing points evenly through space.

Figure 3.21: Sparsity structure comparison. The sparsity structure of the stiffness matrix is depicted for each of the discretizations: Wedge (nnz = 25420, 7.82%), Hex (nnz = 41772, 7.43%), Tet (nnz = 73027, 22.48%), Q-Tet (nnz = 34510, 10.62%), Embedded (nnz = 42133, 7.49%), and MFree (nnz = 159151, 40.48%). The hexahedral and embedded models are the most sparse, with highly banded structure; the mesh-free stiffness matrix is the least sparse.

Figure 3.22: Quadratic element inversion. Mapping a tetrahedral and quadratic tetrahedral grid to a sphere leads to element inversions (red) in the quadratic model. The node displacements in both linear and quadratic models are identical. Panels: (a) spheres, (b) inverted elements, (c) close-up of an element.

The largest bottleneck in terms of both memory and computation time was found to be the direct matrix solve of the KKT system describing the system dynamics (Equation (B.50) for dynamic simulations, or Equation (B.53) for static simulations). To solve these, we use the Pardiso direct sparse solver [215]. With Pardiso, memory requirements seem to grow linearly, and computation time approximately quadratically, with connectivity. At 17k nodes, the mesh-free matrix solve required 14 GB of memory, and the entire simulation took about 8 hours to compute.
The equivalent hexahedral model occupied only 3 GB of memory and took 12 minutes to compute.

Stability

A final consideration is the numerical stability of the elements. In practice, we find the mesh-free and quadratic tet models to be somewhat less stable than the linear hex and tet models, particularly in the presence of large deformations or abrupt changes in muscle activation. We attribute this to the complexity of the shape functions. In the case of quadratic shape functions, small perturbations in node locations can lead to larger deformations internally, which can cause element inversions. To demonstrate this, consider the case of deforming an incompressible tetrahedral cube to a sphere. The analytic volume-preserving deformation is given by the bi-Lipschitz mapping of Griepentrog et al. [84]. Applying this deformation field to both linear and quadratic tetrahedral cube models leads to the states depicted in Figure 3.22. In the case of the quadratic model, the deformation causes inverted elements along the diagonals of the original cube. For the exact same node displacements, the linear model remains inversion-free. We find similar issues with the mesh-free approach: smaller node displacements can cause element inversions due to the increased complexity of the shape functions. Element inversions within a model volume can usually be prevented by reducing the step size in dynamic simulations, or the line-search parameter in static simulations, though this means that simulations will take longer.
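Inversions are typically detected by checking the sign of the deformation-gradient determinant within each element (at each quadrature point for higher-order elements). A minimal sketch for a linear tetrahedron, where the deformation gradient is constant (numpy; function names are illustrative):

```python
import numpy as np

def tet_inverted(rest, deformed):
    # linear tet: deformation gradient F = D_def @ inv(D_rest),
    # built from edge matrices; the element is inverted when det(F) <= 0
    D_rest = (rest[1:] - rest[0]).T      # 3x3 rest-edge matrix
    D_def  = (deformed[1:] - deformed[0]).T
    F = D_def @ np.linalg.inv(D_rest)
    return np.linalg.det(F) <= 0.0

rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)

squashed = rest.copy(); squashed[3] = [0, 0, 0.1]   # compressed but valid
print(tet_inverted(rest, squashed))                  # False

flipped = rest.copy(); flipped[3] = [0, 0, -0.1]    # apex pushed through base
print(tet_inverted(rest, flipped))                   # True
```

For quadratic or mesh-free shape functions, the same det(F) test must be applied at every quadrature point, since F varies within the element; this is exactly where internal inversions like those of Figure 3.22 are caught.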
If the inversion is caused by a prescribed boundary condition applied directly to the nodes, however, it cannot be prevented in this way.

3.2.6 Section Summary

We have analyzed several spatial discretization methods in the context of incompressible materials and biomechanical modelling.

From our cylindrical beam experiments, we show that all presented approaches will eventually converge to the same solution given sufficient node resolution, though at varying rates. The quadratic tetrahedral model converges the fastest in terms of the number of nodes, and the tetrahedral model with nodal incompressibility converges the slowest.

For the bipennate muscle, again all methods of discretization eventually converge to similar solutions. In the embedded meshing technique, however, a poor approximation of the thin tendon structures at low resolutions leads to the least accurate force reproduction. It may be possible to address this by internally adjusting the shape functions as in [70]. We present an alternative approach of meshing the tendon and muscle separately, then coupling their motion together with a set of bilateral constraints. In doing so, we double-count the volume shared by the muscle and tendon. However, this seems to work well in this case of thin structures with very high stiffness, where the tendon properties dominate the mechanical behaviour. With this embedded-coupling technique, we are able to recover most of the force production at low resolutions.

In the dynamic tongue protrusion task, we again find that at high node resolution, all discretizations converge to the same response, both in terms of predicted muscle activations and displacement accuracy. At lower resolutions, the quadratic tetrahedral model seems to be the most accurate for displacement, followed by the hex and embedded finite element models. For muscle activation predictions, though, the tet and quadratic tet models seem to under-estimate the required activation value.
We postulate that this is due to the lower resolution of numerical quadrature points, which can lead to an overestimation of the muscle sizes. This is corrected at higher resolutions.

The three simulated examples suggest that a desired simulation accuracy can be achieved regardless of the discretization, as long as a sufficiently high resolution is used. Thus, for a given application, the most appropriate choice should be driven by other considerations. The hex or hex-dominant approach may be best when the volume of interest has a relatively simple, regular structure, for which such meshes are easier to construct. With complex geometries, a quadratic tetrahedral mesh may be most appropriate, as long as the stability requirements are manageable. Otherwise, a tetrahedral or embedded meshing approach may be best. The element-free Galerkin technique is particularly useful when a variable resolution is desired, such as when thin tendinous structures must be resolved. However, this comes at the cost of a much higher computational complexity, as well as other potential issues of stability. Each technique therefore has its own set of advantages and disadvantages, and the most appropriate choice often comes down to convenience: balancing construction effort against time and space requirements for a given application.

3.3 Chapter Summary

In this chapter, we tackled issues related to the spatial discretization of volumetric deformable models. In Section 3.1, we discussed volumetric meshing for the finite element method, with a focus on hex-dominant meshing. We introduced a novel technique for generating well-conditioned hex-dominant meshes, using a FEM-based polycube deformation map to estimate structure, and a tet-to-hex recombination algorithm to construct a hex-dominant mesh.
Unlike previous works, our recombination approach has linear complexity in detecting all valid tet-to-hex patterns, drastically improving the computational speed of the process. This work was presented at SIGGRAPH 2014 (Vancouver) [C5], and received first place in the SIGGRAPH ACM Student Research Competition.

In Section 3.2, we discussed alternative discretization techniques, including tetrahedral models with nodal incompressibility, quadratic tetrahedral models, embedded finite element models, and the element-free Galerkin technique. We addressed implementation details, and compared the techniques for accuracy and rates of convergence in displacements, force production, and a dynamic tongue protrusion task. Work towards the tongue protrusion task was presented at CMBBE 2013 (Salt Lake City) [A4]. The FEM-embedding technique was used for modelling white and grey matter within the spine, presented at ESB 2017 (Seville) [A1]. We also introduced an embedded-coupling method for resolving thin tendinous structures within lower-resolution models. This latter work was presented at CMBBE 2014 (Amsterdam) [A3], and is used in modelling of the masseter (Section 5.2) [C2]. Use of embedded finite-element meshing with bilateral constraints and forces, along with other reduced simulation techniques, was also presented at CMBBE 2018 (Lisbon) and won the Best Paper award [C1].

CHAPTER 4

Data Fusion: Dynamics-Driven Registration

Biomechanical models are usually constructed based on measured data such as medical imaging, as well as prior knowledge of the anatomy and its physical behaviour. Since different imaging modalities highlight different aspects of biological tissue, we may need to acquire image data from multiple sources. Prior knowledge can take the form of geometric models describing an expected volumetric shape, statistical models of how that shape may differ between subjects, and full dynamic models that describe how we expect the anatomy to move and deform.
To construct a biomechanical model, we need a method to merge all this information into a single common reference space, through a process known as registration. Registration of data from multiple sources is inherently challenging: the anatomy undergoes motion and deformation due to changes in body position and posture, and in response to external pressures. As a result, deformable (or non-rigid) registration is often needed.

In this chapter, we tackle challenges related to data registration, addressing them by developing methods that harness prior knowledge of the underlying system. By incorporating physical, statistical, and biomechanical information into the framework, we show that we are able to address issues of noise and knowledge gaps caused by hidden or missing data. We begin by presenting a general-purpose model-based registration technique that uses a full hybrid multi-body/continuum dynamic system with constraints (Appendix B). We then derive a simplified version and apply it in the context of prostate interventions for fusing image data from multiple sources. We augment this method to incorporate shape variance information from a population, in the form of a statistical shape model. Finally, we exploit dynamic constraints to further control the registration task of mapping an articulated model of the arm to muscle-fibre and bone geometry data acquired by dissection and digitization. The end product is a new subject-specific model that can be augmented to include the new information.

4.1 Registration Energy Minimization

To combine information from multiple sources into a single reference space, we need to identify common features present in the data, and compute a map that brings these together. In images, features are often based on image intensities or changes in image intensities, such as sharp boundaries between tissues.
For models, features typically involve surfaces or specific identifiable landmarks. In either case, the registration task is typically formulated as an energy minimization problem:

Minimize\; E(X, Y, \Theta) = -S\left(T(X, \Theta),\, Y\right) + R(\Theta), \qquad (4.1)

where Y is the target information in the reference space, X is the source information that is mapped to the target, T is a transform that depends on parameters Θ, S is a similarity metric applied between the transformed source and target information, and R is a regularization term that seeks to constrain the transform parameters. The goal of registration is to find the optimal set of transform parameters Θ that maximize similarity between the two data sets while remaining within a space of valid or desirable solutions. We can therefore separate the problem of registration into three parts: determining an appropriate similarity metric between the source and target data; defining a transformation that will be used to map the source to the target; and applying restrictions on the transform parameters through regularization.

4.1.1 Similarity Metrics

The precise form of the similarity metric, S(·, ·), will depend on the form of the data. For images, the inputs are typically the 2D/3D arrays of image intensities. For features, the inputs are usually sets of coordinates corresponding to points of interest or identifiable landmarks in the data.

Image-Based

There is a large field dedicated to image intensity-based registration, including of multimodal images, where the data is acquired using two or more imaging technologies (e.g. [190, 224]). For images of the same modality, a simple sum of squared differences (SSD) of the intensities may be appropriate. However, when comparing separate image modalities, it is important to consider varying image properties due to the different imaging physics being exploited.
For example, the high X-ray absorption of bones causes them to appear bright in CT, while their lower water content makes them much darker in MRI. For such comparisons, mutual information (MI), or its normalized variant, normalized mutual information (NMI), is commonly applied [185]. Rather than directly comparing intensities, these instead consider the probabilistic distribution of intensities. Another method of particular note is the modality independent neighbourhood descriptor (MIND) [92]. This technique has been successfully applied to multi-modal prostate registration [232]. However, just as with all intensity-based approaches, MIND still requires a sufficient number of anatomical features to appear in some form in both images [92]. This may be challenging when parts of the anatomy are only visible in one image type. For example, in transrectal ultrasound (TRUS), the boundary of the prostate is not easily distinguished at its base, making it difficult to see where the prostate ends and the seminal vesicles begin.

Due to challenges in defining appropriate intensity metrics, and the need to also register prior knowledge (typically in the form of a model or surface), we instead focus on surface-based registration.

Surface-Based

If images are segmented as part of a modelling workflow, we can avoid the issue of finding intensity-based correlations by instead using a surface-based similarity metric. This allows us to incorporate clinical or anatomical expertise into the process. The trade-off with surface-based techniques is that registration then relies heavily on segmentation accuracy.

In the context of prostate interventions, Smith et al. [223] found that there was high variability in segmentations, even when they were completed by clinical experts.
The primary cause of inconsistencies was ambiguity or lack of visibility of tissue boundaries, forcing clinicians to rely on prior knowledge of organ shape and expected symmetries. Such issues are not only present in prostate interventions [172, 223], but also in liver resections [32, 209]. Any segmentation inaccuracies in ambiguous regions can have a significant impact on predicted internal deformations. Thus, any surface-based technique must be robust to segmentation errors, noise, and missing or unreliable information, where there is little confidence in segmentation accuracy.

Original surface-based registration methods relied on manual landmark selection [44]. 'Similarity' is then defined using a distance metric, such as the Euclidean distance, between corresponding source and target landmarks. Identifying these landmarks manually, however, is a tedious, time-consuming process, and is highly subject to user variability.

To automatically identify corresponding pairs of markers, a number of methods have been proposed based on the iterative closest point (ICP) algorithm [17, 278] and its variants [17, 204, 209, 278]. In ICP, each landmark from one data source is assumed to correspond to its nearest landmark in the other. After a single registration step, the landmarks will have moved, which may result in a change of nearest-landmark pairs. Thus, the process is repeated until convergence is achieved. Unfortunately, ICP-based algorithms are very sensitive to initialization, noise, and outliers [81, 168, 176, 197].

One approach to mitigate ICP sensitivity is to map the two surfaces into a topologically equivalent space [172, 273]. This was attempted by Moradi et al. [172] for MR-TRUS fusion.
Its accuracy was found to still be highly sensitive to contiguous regions of missing data, such as around the apex of the prostate, where the surface contour has poor visibility.

To overcome the limitations of ICP's simple binary correspondences, some approaches compute probabilistic, or soft, correspondences, such as with a Gaussian-mixture model (GMM) [34, 81, 111, 168, 176, 197]. These methods convert registration into a probability-density estimation problem, maximizing the likelihood that one set of points is drawn from a probability distribution governed by the other set. This approach directly accounts for incomplete observations, since there is no requirement to sample any particular coverage from the probability distribution.

4.1.2 Transformation

The transformation function defines the space of allowable mappings. The simplest is a rigid transform, which has six degrees of freedom (DOFs) when registering volumetric images: three for rotation and three for translation. With this model, it is assumed the data is mostly solid, and we simply try to estimate a change in pose. Affine transforms have twelve DOFs, allowing for additional scaling and skewing. This type of transform is typically used when large yet smooth changes in volume are expected.

Non-rigid or deformable registration has additional DOFs to allow for more complex deformation patterns. Non-rigid deformations can rely on optical- or fluid-flow models, which allow motion of individual pixels, or on a set of control points and interpolation functions such as B-splines, radial basis functions, or finite element shape functions.

In our application, we will use a hybrid model to define the registration transform. The parameters of this transform are precisely the degrees of freedom of the model, which include node locations for finite element models, and frame locations for rigid components.
The deformation map therefore includes both rigid and deformable parts: locally rigid transformations within rigid body volumes, and locally deformable ones within finite element volumes.

4.1.3 Regularization

Once an appropriate transform function T(·, Θ) is identified, we can then attempt to solve for the parameters Θ that maximize similarity between the source information X and the target information Y. However, depending on the number of transform parameters and the size of the data, simply maximizing the similarity metric S may lead to non-unique or unrealistic solutions. A common approach to address this is to add a regularization term to limit the solution.

One method of constraining deformations is with a statistical deformation model (SDM). An SDM describes the set of allowed transformations based on statistics derived from an initial population. Parameters of the transform are restricted to a linear combination of those present during training [98, 171]. The drawback of SDMs is that they require an additional training step, and registration results are highly dependent on the quality of the training data. To generate a diverse training set, Hu et al. [98] use finite element simulations. Applied to MR-TRUS fusion for prostate interventions, they require a detailed segmentation of not only the prostate, but also the bladder and pelvic bone, to create a complete personalized model. Subsequently, they use this model to generate an SDM to be used in a model-to-image registration framework.

Another approach to limiting transformation parameters is to add a penalty term that discourages undesirable characteristics in the transform. Many registration algorithms attempt to restrict surface bending of a model [34, 101, 114, 176, 211, 277, 279]. In the coherent point drift (CPD) algorithm [176], nearby points are constrained to move "coherently" as a group by introducing a penalty on high spatial frequencies.
The coherence term, however, can only penalize local variations in the deformation field; it does not consider volumetric properties such as Poisson effects or incompressibility. Other regularizers have been introduced to constrain volumetric deformation within a closed surface. These are typically based on splines, radial basis functions, or finite element techniques [32, 72, 172, 182, 209, 237]. The choice of regularizer is especially important for deformable tissue: in addition to guiding the minimization search, it directly governs the deformation field inside the anatomy, particularly in regions where measured data is unreliable or missing.

By combining an appropriate similarity metric, transformation parameters, and regularization term, we can then attempt to solve for the optimal parameters by minimizing the energy in Equation (4.1).

4.2 Dynamic Model-Based Registration

In this section, we present a novel model-based registration method that uses soft correspondences to improve robustness, and dynamic modelling both to regularize the transformation and to incorporate prior physical and biomechanical knowledge into the framework. This allows us to improve performance in the presence of missing or unreliable data. In what follows, we assume we have a source surface or surfaces on a dynamic model, described by a collection of points X = \{x_m\}_{m=1}^{M}, as well as a collection of target points describing a target surface, Y = \{y_n\}_{n=1}^{N}. There need not be the same number of points in the source and target point sets, and the target points Y may not necessarily be complete (i.e. they may not fully describe the source).
Many of the derivations in this section are based on a hybrid multibody-FEM simulation framework, which is derived and described in detail in Appendix B.

4.2.1 Correspondences

The 'similarity' between source and target points in our surface-based registration framework is defined by a negative weighted Euclidean distance metric,

S(X, Y) = -\frac{1}{2N} \sum_{m=1}^{M} \sum_{n=1}^{N} w_{mn} \|y_n - x_m\|^2, \qquad (4.2)

where w_{mn} is the correspondence weight between source point x_m, falling on the model, and target point y_n. These weights allow us both to assign likely correspondences between any two pairs of points, and to place a higher importance on some features over others. The factor of 1/N makes the similarity metric independent of the number of target points, since this may vary depending on the resolution and reliability of the data.

In ICP, we would define y_n to be the nearest point on the target surface to each source point x_m, and set w_{mn} = \delta_{mn}. However, as previously discussed, ICP is highly sensitive to outliers and missing data. Some model points may correspond very well to the target, but if the target surface is incomplete, others will not. We can improve this formulation by adjusting the weighting term w_{mn} to favour source points x_m that fall near their nearest target correspondence y_n, and to diminish the impact of points that are far from their nearest target. For example, a simple Gaussian weighting of the form

w_{mn} = \delta_{mn} \frac{1}{\sqrt{(2\pi\sigma^2)^3}} \exp\!\left(-\|y_n - x_m\|^2 / 2\sigma^2\right)

for some radial distance parameter σ² can help limit the impact of noise and outliers. Due to the presence of local minima, and the binary nature of the correspondences, this approach can still be quite sensitive to initialization, and can lead to unstable oscillations when the computed nearest target points change with each iteration.

A more robust and stable approach is to use soft correspondences, where instead of a one-to-one mapping between source and target points, there is a one-to-many mapping.
Such an approach allows for a smoother objective function, which tends to lead to a more globally optimal solution [221]. Gaussian-mixture models take this one step further: rather than considering the source surface as a simple collection of points, the points are treated as the centroids of Gaussian distributions, describing the source surface as a probability density function [111, 176]. The similarity metric is then defined as the log-likelihood that the target points are drawn from this surface distribution:

S(X, Y) = \frac{1}{N} \sum_n \log\!\left( \frac{1}{M} \sum_m p(y_n \mid x_m) \right), \qquad (4.3)

where p(· | x_m) is a Gaussian distribution centred at the current x_m with variance σ²,

p(y \mid x_m) = \frac{1}{\sqrt{(2\pi\sigma^2)^3}} \exp\!\left(-\|y - x_m\|^2 / 2\sigma^2\right),

and each Gaussian is assumed to have an equal membership probability of 1/M. Due to noise or poor-quality data, there may be points on the target surface y_n which do not correspond well to any points on the source surface. Similar to Myronenko and Song [176], we therefore add an outlier group with membership probability w, so our final probability distribution describing the source is given by

p(y) = w \frac{1}{N} + (1 - w) \frac{1}{M} \sum_m p(y \mid x_m). \qquad (4.4)

The additional uniform distribution with probability w determines whether an observed point is classified as an outlier. For most applications, a value of w = 0.1 is found to work well [176], unless there are clearly a large number of outliers in the target observation. Given this probability density, and ignoring any constants independent of σ², we can re-write the similarity metric as

S(X, Y) = -\frac{1}{2\sigma^2 N} \sum_{mn} P(x_m \mid y_n) \|y_n - x_m\|^2 + R(\sigma^2), \qquad (4.5)

with

R(\sigma^2) = -\frac{3 N_p}{2N} \log(\sigma^2), \qquad N_p = \sum_{mn} P(x_m \mid y_n) \le N, \qquad (4.6)

P(x_m \mid y_n) = \frac{\frac{1}{M}\, p(y_n \mid x_m)}{p(y_n)} = \frac{\exp\!\left(-\|y_n - x_m\|^2 / 2\sigma^2\right)}{\sum_{k=1}^{M} \exp\!\left(-\|y_n - x_k\|^2 / 2\sigma^2\right) + \sqrt{(2\pi\sigma^2)^3}\, \frac{w}{1-w} \frac{M}{N}}.

Comparing this with the generic case of Equation (4.2), we can define

w_{mn} = \frac{1}{\sigma^2} P(x_m \mid y_n). \qquad (4.7)

We also introduce an additional energy term R(σ²) to the minimization that can be used to govern the behaviour of the unknown variance σ².
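The correspondence computation of Equations (4.5)-(4.7) can be sketched directly in numpy; this is a hedged illustration (array shapes, function, and variable names are our own, not from the thesis implementation):

```python
import numpy as np

def gmm_correspondences(X, Y, sigma2, w=0.1):
    # posterior P(x_m | y_n), effective match count Np, and weights w_mn,
    # following the GMM-with-uniform-outlier formulation
    M, N = len(X), len(Y)
    d2 = ((Y[None, :, :] - X[:, None, :]) ** 2).sum(-1)   # M x N squared dists
    G = np.exp(-d2 / (2.0 * sigma2))
    outlier = np.sqrt((2.0 * np.pi * sigma2) ** 3) * (w / (1.0 - w)) * (M / N)
    P = G / (G.sum(axis=0) + outlier)    # P[m, n] = P(x_m | y_n)
    Np = P.sum()                         # Np <= N
    W = P / sigma2                       # correspondence weights w_mn
    return P, Np, W

rng = np.random.default_rng(0)
X = rng.random((20, 3))                          # source surface points
Y = X + 0.01 * rng.standard_normal((20, 3))      # noisy target observations
P, Np, W = gmm_correspondences(X, Y, sigma2=0.01)
assert P.sum(axis=0).max() <= 1.0                # posteriors over sources <= 1
assert Np <= len(Y)                              # effective match count bound
```

Target points far from every source point receive mass from the uniform outlier term instead, so their posteriors (and hence their weights w_mn) shrink towards zero, which is what makes the method robust to incomplete surfaces.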
Employing an alternating minimization strategy, if we take the source points, target points, and correspondence probabilities to be fixed and maximize the similarity metric in terms of \sigma^2, we have

    \sigma^2 = \frac{1}{3 N_p} \sum_{mn} P(x_m | y_n) \|y_n - x_m\|^2.    (4.8a)

This can be used to evolve the variance parameter as iterations proceed. To initialize, we start with equal assignment probabilities P(x_m | y_n) = 1/M and N_p = N, giving

    \sigma_0^2 = \frac{1}{3MN} \sum_{mn} \|y_n - x_m\|^2.    (4.8b)

Alternatively, the variance parameter can be reduced using deterministic annealing [176].

4.2.2 Dynamic Modelling

We use a dynamic model both to define the space of allowed transformations and to regularize the registration process. This model could be a finite element model, a multibody model, or a hybrid as described in Appendix B. The model itself has a set of underlying system positions q and velocities v, and it may contain a set of constraints and external forces, such as gravity. The dynamics of the system are driven by minimizing the residual energy,

    L(q, \lambda, z) = W_{res}(q) - \lambda^T g(q) - z^T n(q),    (B.42 revisited)

where W_{res} is the residual energy of the model, encapsulating strain energy, kinetic energy, and external forces, g is a set of bilateral constraints, n a set of unilateral constraints, and (\lambda, z) are their corresponding Lagrange multipliers (see Section B.3.4). In the simple case of a linear finite element model with no constraints or external forces, the residual energy is simply given by

    L(u) = \frac{1}{2} u^T K u,

where u is the concatenated vector of finite element node displacements satisfying q = q_0 + u, and K is the system's linear stiffness matrix.

The source points involved in the registration, x_m, depend on the dynamic model's system positions q.
We can express this relationship as

    x_m = \varphi_m(q) \approx \hat{x}_m + \Phi_m \delta u,    \frac{dx_m}{dt} = \Phi_m v,    (4.9)

where \hat{x}_m is the current source point's location, \delta u is a small change in displacements, and \Phi_m is a matrix composed of blocks \{\Phi_{mi}\} that interpolates the mth source point's velocity from the set of model system velocities \{v_a\}. In the case of a finite element model, the interpolation matrix \Phi_m is constant for each model point:

    x_m = \sum_i \phi_i(X_m) n_i,    \frac{dx_m}{dt} = \sum_i \underbrace{\left[ \phi_i(X_m) I \right]}_{\Phi_{mi}} v_i,

where X_m is the location of x_m in material coordinates, \{\phi_i\} are the finite element shape functions, \{n_i\} are the locations of the finite element nodes, and \{v_i\} are the node velocities. For a point attached to a rigid body, the interpolation matrix depends on the body's orientation:

    x_m = c_b + R_b \ell_m,    \frac{dx_m}{dt} = \underbrace{\left[ I \;\; -[R_b \ell_m]_\times \right]}_{\Phi_{mb}} \begin{bmatrix} v_b \\ \omega_b \end{bmatrix},

where c_b is the body's origin, R_b is the rotation matrix representation of the body's orientation, (v_b, \omega_b) are the linear and angular velocities of the body, and \ell_m is the fixed position of x_m relative to the body's coordinate frame.

We will use these definitions to determine the optimal set of incremental displacements \delta u in our model that will bring us closer to the optimal registration solution.

4.2.3 Optimization

The final objective function for the dynamic registration task is

    E(X, Y, q, \lambda, z, \sigma^2) = \frac{\beta}{2N} \sum_{mn} w_{mn} \|y_n - x_m\|^2 + R(\sigma^2) + L(q, \lambda, z),    (4.10)

where w_{mn} are the GMM correspondence weights (or other correspondence weights in the general case), R governs the evolution of the correspondence variance \sigma^2, and L is the Lagrangian energy function of the constrained dynamic system. The factor \beta is a tunable regularization parameter that allows us to balance between deformation in our model and error between the source and target points.

We wish to find the optimal positions q of our dynamic model, or equivalently the optimal set of displacements u, that minimize this energy.
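The two forms of the interpolation matrix \Phi_m above, one for a point embedded in a finite element and one for a point attached to a rigid body, can be sketched as small helper routines. This is an illustrative NumPy sketch only; the names are hypothetical.

```python
import numpy as np

def skew(a):
    """Cross-product matrix [a]x, such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def fem_point_matrix(shape_vals):
    """Phi_m for a FEM-embedded point: blocks phi_i(X_m) * I_3, stacked
    horizontally, acting on the concatenated nodal velocities."""
    return np.hstack([phi * np.eye(3) for phi in shape_vals])

def rigid_point_matrix(R, l):
    """Phi_mb for a rigid-body point: [I, -[R l]x], acting on (v_b, omega_b)."""
    return np.hstack([np.eye(3), -skew(R @ l)])

# FEM point at the centre of a 4-node element (equal shape-function values);
# all nodes share one velocity, so the point inherits it
Phi_fem = fem_point_matrix([0.25, 0.25, 0.25, 0.25])
v_nodes = np.tile([1.0, 2.0, 3.0], 4)
v_fem = Phi_fem @ v_nodes                        # -> [1, 2, 3]

# rigid-body point at l = (1, 0, 0), pure spin about z: velocity = omega x l
Phi_rb = rigid_point_matrix(np.eye(3), np.array([1.0, 0.0, 0.0]))
v_pt = Phi_rb @ np.array([0.0, 0, 0, 0, 0, 1.0])  # -> [0, 1, 0]
```

Both cases produce the point velocity as a single matrix-vector product \Phi_m v, which is what allows the registration force below to be assembled uniformly for FEM and rigid-body components.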
Given a fixed initial correspondence variance \sigma^2 and correspondence weights \{w_{mn}\}, we determine the optimal incremental displacements by differentiating in time:

    \frac{dE}{dt} = v^T \left[ \frac{\partial E}{\partial u} \right]
                  = \frac{\beta}{N} \sum_{mn} w_{mn} \left[ \frac{dx_m}{dt} \right]^T (x_m - y_n) + v^T \frac{\partial L}{\partial u}
                  = v^T \left[ \frac{\beta}{N} \sum_{mn} w_{mn} \Phi_m^T (x_m - y_n) + \frac{\partial L}{\partial u} \right]

    \Longrightarrow \frac{\partial E}{\partial u} = \frac{\beta}{N} \sum_{mn} w_{mn} \Phi_m^T (x_m - y_n) + \frac{\partial L}{\partial u} = 0.

To express this in terms of a set of incremental displacements \delta u, we insert the linearization of x_m from Equation (4.9), as well as a linearized Lagrangian energy Jacobian for a quasi-static solution (see Appendix B.3.5):

    \frac{\partial E}{\partial u} \approx \frac{\beta}{N} \sum_{mn} w_{mn} \Phi_m^T (\Phi_m \delta u + \hat{x}_m - y_n) + d - f - \left( \frac{\partial f}{\partial u} - K \right) \delta u - G^T \lambda - N^T z.

By grouping quantities appropriately, we can see that the similarity metric introduces terms equivalent to adding a registration force

    f_S = \frac{\beta}{N} \sum_{mn} w_{mn} \Phi_m^T (y_n - \hat{x}_m),    \frac{\partial f_S}{\partial u} = -\frac{\beta}{N} \sum_{mn} w_{mn} \Phi_m^T \Phi_m.    (4.11)

The same result is obtained when using a full time-dependent dynamic system (Equation (B.46a)). We call f_S the registration force, and \beta can then be viewed as a force-scaling parameter: the larger \beta, the stronger the force that pulls the model points towards the target points. This formulation allows us to integrate the dynamic registration process directly into our existing simulation framework.

The final registration process therefore proceeds as follows:

1. Initialize \sigma^2 using Equation (4.8b).
2. Compute registration weights \{w_{mn}\} using Equation (4.7).
3. Compute and add the registration force f_S and its Jacobian from Equation (4.11).
4. Compute the optimal set of displacements using either the dynamic or quasi-static system (Appendix B).
5. Update the correspondence variance \sigma^2 using Equation (4.8a).
6. If the system has not converged, go to Step 2.

To check convergence, we can either test if the estimated variance \sigma^2 has changed significantly (i.e.
by more than 1%), or if the optimal incremental displacements \delta u are sufficiently small.

4.2.4 Registration Examples

Before applying our registration method to real-world examples in the next sections, here we demonstrate its utility on two test cases: registering a deformable sphere to a cube, and registering a simplified multibody articulated arm to a target pose.

Sphere

For the sphere-to-cube registration task, we create a finite element sphere with a volume of 20 × 10 × 10 cm³, composed of a corotated linear material with parameters E = 30 kPa, ν = 0.49. This is to be registered to a target cube of the same volume. Since there is no noise and there are no outliers in the data, we take the outlier probability to be w = 0.

Figure 4.1: Dynamic registration: FEM sphere. A finite element sphere (magenta) is registered to a cube (cyan) of the same volume, with varying β: (a) initial, (b) β = 1k, (c) β = 10k, (d) β = 100k. One side of the target cube is then removed (bottom) to examine the effect of incomplete data: (e) open target, (f) β = 100k.

Results of the registration task are shown in Figure 4.1 for varying force-scaling parameters β. From these results, we can see that as β increases, we enforce a stronger adherence of the source to the target surface. This is balanced by the FEM's internal forces, which try to maintain the original spherical shape. The β parameter can be scaled according to the approximate level of forces expected in the physical system.

We also remove one of the sides of the target cube to demonstrate performance when the data is incomplete. When we do so, the sphere still registers well to the remaining five sides. In the region where data is missing, the FEM sphere exhibits a slight curvature, in accordance with the prior knowledge we are imposing. The probabilistic nature of the GMM correspondences ensures that this region of the model is not artificially pulled towards the remaining target data, which would occur with a basic ICP approach.
This is what makes the GMM formulation so robust to incomplete observations.

Simplified Articulator

To test the registration approach for a multibody system, we create a simplified articulated arm consisting of 7 ellipsoid bodies connected by 7 joints of various types (one spherical, three universal, and three revolute; Figure 4.2). For the target surfaces, we similarly create an arm, but with prism-shaped bodies. We also eliminate a target surface from one of the fingers and remove a section from the lower arm to test the effect of missing information. In this case, there is a portion of the lower-arm target which does not correspond to the model (i.e. the faces that close the missing portion of the lower arm), so we set the outlier fraction w = 0.1. We use a force-scaling parameter of β = 100 to drive the registration. Results for various poses are shown in Figure 4.2.

Figure 4.2: Dynamic registration: articulated arm. A simplified multibody articulated arm is registered to incomplete data with different shape and pose: (a) initial, (b) pose A, (c) pose B, (d) pose C, (e) pose D.

If the initial model is far from the target surface, we have found that excessive forces can sometimes lead to unstable behaviour. This is especially true for articulated bodies that undergo large rotations. In this example, a force scaling of β = 100 was very stable and led to an adequate registration result. For β = 10000, however, the initial registration iterations quickly became unstable and diverged. If a larger correspondence force is desired, we suggest starting with a smaller value of β and increasing it using deterministic annealing.

In the next sections, we apply our dynamic registration framework to real-world tasks, including image registration for image-guided prostate interventions and registering a dynamic articulated model of the arm to new data.
We also augment the framework to include statistical information, allowing for the estimation of a subject-specific model that accounts for shape variations amongst a population.

4.3 GMM-FEM: Application to MR-TRUS Fusion for Prostate Interventions

Surgical planning often requires high-resolution pre-operative (pre-op) images in order to see the extent of disease, such as the size of a tumour in the prostate. These images are usually acquired with MRI or CT. For image guidance during a procedure, however, a faster imaging modality such as ultrasound is often used for practicality. One of the drawbacks of ultrasound is its poor image contrast, which can lead to difficulties in distinguishing tissue boundaries of the organ of interest. In addition, soft tissue will move and deform, so there can be large differences between the pre-operative and intra-operative images that need to be accounted for.

Efficient and accurate multi-modal registration (such as MRI to US) is a difficult task, and intensity-based methods such as [232] still require a significant amount of correspondence in appearance between the images. These methods may fail if the local appearance differs significantly between the two modalities. For example, the seminal vesicle boundary is clearly visible in MRI but not in transrectal ultrasound (TRUS) [172]. This missing or unreliable data poses two challenges: ambiguity in establishing correspondences during registration, and extrapolation of the global deformation field to those missing regions.

We apply our dynamic surface-based registration algorithm to address these issues in the context of image-guided prostate interventions. The pre-operative image is used to construct a dynamic finite element model of the subject's prostate. We then use this biomechanical model as prior information when registering to partially segmented ultrasound images (Figure 4.3).
We validate the registration approach on MR-TRUS image pairs acquired from two sets of patients: one group who underwent a prostatectomy, and the other a prostate biopsy. Our method is shown to lead to a significant reduction in target registration error (TRE) compared to similar state-of-the-art registration algorithms, even when up to 30% of the target data is missing. The resulting mean TRE is estimated at 2.6 mm. The method also performs well when full segmentations are available, leading to TREs that are comparable to or better than other surface-based techniques.

In our comparisons, we consider three other similar surface-based methods: thin-plate spline robust point matching (TPS-RPM) [34], coherent point drift (CPD) [176], and a finite element registration technique using ICP to determine forces (ICP-FEM) [72]. TPS-RPM and CPD are two popular registration methods that use soft correspondences. The major difference is that TPS-RPM constrains surface deformations using thin-plate splines, whereas CPD uses Gaussian kernels. By comparing to TPS-RPM and CPD, we isolate and evaluate the need for the FEM component of our method, since both methods use similar correspondence schemes. ICP-FEM is a surface registration method which uses ICP to estimate surface correspondences, then applies elastic forces on the surface of an FEM to drive the deformation; it is the point-cloud equivalent of the approach of Ferrant et al. [72].

Figure 4.3: Overview of GMM-FEM. The pre-operative MR is captured prior to the procedure and is segmented to create a surface representation of the anatomy. This surface is then used to create a finite element model of the volume. At the beginning of the procedure, an intra-operative (3D-TRUS) image is acquired, and visible parts of the anatomy (i.e. the midgland) are segmented. The GMM-FEM registration maps targets from the surface in the pre-operative plan to that of the intra-operative space. © IEEE 2015.
By comparing to ICP-FEM, we isolate the GMM component of our method, since both methods use an identical FEM regularizer. This isolates the need for soft correspondences when dealing with missing data.

4.3.1 GMM-FEM

An outline of our proposed registration approach is shown in Figure 4.3. The inputs are two surfaces, referred to as the source (MR) and target (TRUS). We model the surface-to-surface registration as an expectation-maximization (EM) problem. One surface is used to construct a probability density function that defines the boundary of the structure. The other surface is considered a set of observations, which may be incomplete (a partial observation). We wish to find the deformation field that maximizes the likelihood that the observations are drawn from a transformed probability distribution. This is regularized by a finite element model that adds prior biomechanical information about the source. In this work, the source surface is extracted from the MRI, since this segmentation is assumed to be more reliable, and the target surface is extracted from TRUS.

For this application, we do not need to consider the full constrained hybrid dynamic system described in Section 4.2. Instead, we have only an unconstrained finite element model; we therefore call this reduced version GMM-FEM. For simplicity, we consider a linear material model, in which case the elastic energy is given simply by

    L(u) = \frac{1}{2} u^T K u,    (4.12)

where u is the concatenated vector of FEM node displacements, and K is the linear stiffness matrix. This stiffness matrix depends on two elasticity parameters, the Young's modulus E and Poisson's ratio ν, which are inputs to the registration. The volumetric mesh for the finite element model of the prostate is generated from the source surface segmented from MRI.
This can be done with most meshing tools, such as TetGen [219].

To relate positions on the surface of the finite element model to the positions of the underlying nodes, we interpolate using the shape functions:

    x_m = \sum_i \phi_i(X_m) n_i = \Phi_m n    \Longrightarrow    \delta x_m = \Phi_m \delta u,    (4.13)

where X_m is the material location of surface point x_m, and \{n_i\} are the node locations. In our implementation, all points on the surface correspond to nodes, and internal nodes are appended to the end of the node list. As a result, the interpolation matrix has the form \Phi_{mi} = \delta_{mi} I_{3\times3}. The combined objective function is therefore given by

    E(u, \sigma^2) = \frac{\beta}{2N} \sum_{mn} w_{mn} \|y_n - x_m\|^2 + R(\sigma^2) + \frac{1}{2} u^T K u,

where w_{mn} are the GMM correspondence weights from Equation (4.7), and R(\sigma^2) is the GMM variance term from Equation (4.5). Following Myronenko and Song [176], we evaluate the correspondence weights based on the previously computed displacements and employ an alternating minimization strategy. This allows us to separate the expectation and maximization steps in the EM algorithm. The next incremental displacements are obtained by minimizing and linearizing the objective function:

    \frac{\partial E}{\partial u} \approx \frac{\beta}{N} \sum_{mn} w_{mn} \Phi_m^T (\Phi_m \delta u + \hat{x}_m - y_n) + K(\hat{u} + \delta u) = 0

    \Longrightarrow \left[ \frac{1}{N} \sum_{mn} w_{mn} \Phi_m^T \Phi_m + \frac{1}{\beta} K \right] \delta u = \frac{1}{N} \sum_{mn} w_{mn} \Phi_m^T (y_n - \hat{x}_m) - \frac{1}{\beta} K \hat{u},    (4.14)

where \hat{x}_m are the previous locations of the source points, \hat{u} is the previous displacement vector, \delta u is the new incremental displacement to add, and we have divided both sides by \beta. After solving this sparse linear system, the new displacements are given by u = \hat{u} + \delta u. The registration algorithm iterates between the expectation step (updating \sigma^2 and w_{mn}) and the maximization step (updating u) until the variance drops below a given threshold.

For the biomechanical material properties, we apply a homogeneous, linearly elastic material with a constant Young's modulus to all elements. We use a Young's modulus of E = 5 kPa for the prostate and a Poisson's ratio of ν = 0.49, which are in the range of values reported by Kemper et al. [117].
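Returning to the maximization step of Equation (4.14): under the assumption that every source point is a surface node listed first in the node vector (\Phi_{mi} = \delta_{mi} I), the system matrix acquires a simple block-diagonal data term. The following is a minimal dense sketch of one such step, with illustrative names; the actual implementation solves a sparse system.

```python
import numpy as np

def gmm_fem_step(W, Xhat, Y, K, u_hat, beta):
    """One maximization step of Equation (4.14), assuming every source
    point m is FEM node m (Phi_mi = delta_mi * I_3).  Dense for clarity."""
    M, N = W.shape
    A = K / beta                       # (1/beta) K term of the system matrix
    b = -K @ u_hat / beta              # -(1/beta) K u_hat on the right side
    row_w = W.sum(axis=1) / N          # (1/N) sum_n w_mn per source node
    for m in range(M):
        sl = slice(3 * m, 3 * m + 3)
        A[sl, sl] += row_w[m] * np.eye(3)            # data block on node m
        b[sl] += (W[m] @ (Y - Xhat[m])) / N          # (1/N) sum_n w_mn (y_n - x_m)
    du = np.linalg.solve(A, b)
    return u_hat + du

# toy example: two nodes, grounded-spring stiffness (hypothetical values)
K = np.eye(6)
Xhat = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Y = np.array([[0.2, 0.0, 0.0], [1.2, 0.0, 0.0]])
W = np.array([[2.0, 0.0], [0.0, 2.0]])   # hard diagonal correspondence weights
u = gmm_fem_step(W, Xhat, Y, K, np.zeros(6), beta=100.0)
```

With these weights each node is pulled most of the way toward its target, held back slightly by the elastic term K/β, which is exactly the trade-off the β parameter controls.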
For other applications, material parameters should be chosen appropriately. Note that for linear materials, the Young's modulus can be factored out of the stiffness matrix, allowing it to be combined with the force-scaling weight 1/β. This combined parameter controls the flexibility of the model, and it can be tuned to allow an appropriate level of deformation for the application.

In total, there are three free parameters: w, which controls the fraction of outliers in the target observation; ν, the Poisson's ratio of the material; and Ẽ = E/β, which controls model flexibility. For relatively smooth and reliable observations, we find that w = 0.1 works well, rejecting few outliers while avoiding local minima. Poisson's ratio should be near 0.5 for most soft tissues, ν = 0.49 being a stable option. The final parameter, β, is tunable to fit the application: it is up to the user to decide how much force is reasonable given the context, subject to the trade-off between surface-fitting and volume-restoring regularization. We find that starting with a value of β = 5000 allows sufficient flexibility without over-fitting the surfaces in the context of prostate motion. If this results in an unrealistic amount of deformation, β should be decreased.

4.3.2 Related Registration Methods

In this section, we provide a brief description of the three registration methods used as a basis for comparison. These were selected because they are the most relevant works in the field, and because they allow us to isolate the importance of the two major components of our objective function. For further details on the methods, the reader is referred to [34, 72, 176].

TPS-RPM

Thin-plate spline robust point matching (TPS-RPM) also uses an EM algorithm for the registration. Deterministic annealing is used in the expectation step to decrease both a temperature, T, and regularization weights, {λ₁, λ₂}. The temperature in their framework corresponds to the variance of the GMM centroids in ours.
The weights λ₁ and λ₂ control the contributions of the affine and spline-based (deformable) components, respectively, in their objective function. The implementation is freely available¹. By comparing to TPS-RPM, we isolate and evaluate the contribution of the FEM component in our method, since both use a similar soft-correspondence scheme. The two methods also handle outliers and missing data differently: in TPS-RPM, outliers are assigned to a separate correspondence-free cluster, whereas in GMM-FEM, outliers in the target observation are handled with a separate cluster, and missing data is handled naturally by the probabilistic approach. The output of TPS-RPM is a set of surface displacements. To recover a volumetric deformation field, we use the TPS kernel to propagate the deformation field inside the prostate, as described by Sibson and Stone [221].

Figure 4.4: Prostatectomy data collection protocol. Prior to the prostatectomy, a volumetric US is acquired using a side-firing probe (a, in vivo TRUS). Following the prostatectomy, strand-shaped fiducials are attached to the prostate to mark its anatomical coordinates in the MR. These strands are highlighted with a green circle on the ex vivo prostate (b) and are visible in the ex vivo MRI (c). Both MR and TRUS images are segmented and brought to an initial alignment at the beginning of the registration (d). © IEEE 2015.

CPD

Coherent point drift (CPD) is the closest work to GMM-FEM. The major difference is the choice of regularizer, which introduces different prior information and results in a different maximization rule. CPD attempts to maintain a smooth coherence between points on the surface, whereas GMM-FEM uses a biomechanical model to regularize the deformation throughout the volume.
CPD has three free parameters: w, an estimate of the noise and outliers; β, which controls smoothness; and λ, which controls surface-to-surface fitting. By comparing to CPD, we isolate and evaluate the FEM component of our method, since both use an identical soft-correspondence scheme, as well as an identical approach to handling outliers and missing data. We use the freely available open-source implementation². Just as with TPS-RPM, the output of the registration is a set of surface displacements; to recover the volumetric deformation field, we use a TPS kernel to propagate the surface deformations inside the prostate [221].

Figure 4.5: Segmentation for the prostate biopsy data. An axial slice of the T2-weighted MRI with the segmented MRI shown in blue (a). An axial slice of the 3D-TRUS volume with the segmented TRUS shown in red (b). © IEEE 2015.

ICP-FEM

The finite-element-based registration approach of Ferrant et al. [72] uses image-driven forces to deform the surface of the source object to that of the target. The system evolves by discretizing in time and using the semi-implicit update scheme

    (I + \tau K) u_t = u_{t-1} - \tau f_t,    (4.15)

where τ is the time step, f_t is the current estimate of the surface forces, u_{t-1} are the previous nodal displacements, and u_t is the next estimate of the nodal displacements. Image forces are determined by a local search, driving the surface nodes of the FEM to the nearest feature in the image. In our surface-based registration, instead of image forces, we use ICP to estimate the nearest features in the segmentation. By comparing to ICP-FEM, we isolate the GMM component of our method, since both methods use an identical FEM regularizer.

¹ https://www.cise.ufl.edu/~anand/students/chui/tps-rpm.html
² https://sites.google.com/site/myronenko/research/cpd

4.3.3 Experiments and Results

We evaluate the proposed non-rigid registration method on MR-TRUS image pairs acquired from patients who underwent a prostate intervention.
In Section 4.3.3.1, we discuss the data acquisition, the segmentation protocol, and the initialization of the registration. In Section 4.3.3.2, we validate our registration method using intrinsic fiducials found in the interior of the prostate, for both full and partial target surfaces, and compare to TPS-RPM, CPD, and ICP-FEM.

The prostatectomy and biopsy data were segmented at two different institutes, using 3D Slicer [66] and Stradwin [252]. We used TetGen [219] to automatically create a tetrahedral volumetric mesh of the prostate based on the MR segmentations. The models used in this study are composed of approximately 2000 surface nodes and approximately 7500 elements. Throughout our experiments, we used a stopping condition of σ² ≤ 10⁻⁴ mm², a Young's modulus of E = 5 kPa, which is in the range of values reported in [117] for the prostate, and a Poisson's ratio of ν = 0.49.

4.3.3.1 Data

Prostatectomy

We acquired MR and TRUS volumes of ten patients scheduled for a prostatectomy. This data consists of in vivo TRUS and ex vivo MRI. The advantage of the prostatectomy data is that we have access to a strong ground truth for the ex vivo MRI segmentation. The major steps in the data acquisition and alignment protocol are outlined in Figure 4.4. The TRUS volumes were collected with an Ultrasonix Touch machine (Ultrasonix, BC, Canada) using a BPL-95/55 side-firing transducer mounted on a motorized cradle. Parasagittal 2D-TRUS images were acquired at 5° intervals, beginning with a sagittal view that marks the anterior-superior plane. These slices were used to reconstruct a 3D-TRUS volume with axial and lateral spacings of 0.12 mm, expressed in patient-centered right-anterior-superior (RAS) coordinates. Following the prostatectomy, the prostate was fixed in a 20% buffered formalin solution for preservation, and MR-visible strand-shaped fiducial markers were applied to the specimen using the protocol of Gibson et al.
[76]. These strand fiducials are used to mark RAS coordinates on the specimen, so that the registration can be initialized with a consistent orientation. Specimens were scanned using a Discovery MR750 scanner (GE Healthcare, Waukesha, USA) at 3 T, with an endorectal coil (Prostate eCoil, Medrad, Warrendale, USA), to acquire T1-weighted MR images with a 0.3 mm slice thickness.

Prostate Biopsy

The second set of data was acquired from 19 patients, each scheduled for a prostate biopsy. We acquired T2-weighted MR images using a 3 Tesla GE Excite HD MRI system (Milwaukee, WI, USA) with a spacing of 0.27 × 0.27 × 2.2 mm. TRUS images were acquired using a 3D-TRUS mechanical biopsy system [15] with a Philips HDI-5000 US machine and a C9-5 transducer, using an axial rotation of the biopsy probe. For these images, the rotational sweep began with an axial view of the prostate mid-gland, thereby defining the right-anterior plane. An example slice of both MR and TRUS is provided in Figure 4.5. These slices were then reconstructed into a 3D volume with a spacing of 0.19 × 0.19 × 0.19 mm. As with the prostatectomy data, both sets of images were acquired in a patient-centered coordinate system.

Figure 4.6: Example GMM-FEM registration to full data. Registration results on prostatectomy data following surface-based registration between the MR (blue) and TRUS (red) for: (a) affine; (b) TPS-RPM with T₀ = 10 and λ₁/λ₂ = 0.01; (c) CPD with β = 3.5 and λ = 0.1; (d) ICP-FEM with τ = 0.1, E = 0.1 kPa and ν = 0.49; (e) GMM-FEM with E = 5.0 kPa, ν = 0.49 and β = 5000. The estimate for missing data, w, was fixed at zero for all three non-rigid methods. © IEEE 2015.

Full and Partial Segmentation

Both the prostatectomy and biopsy MR and TRUS volumes were manually and fully segmented under the supervision of an expert clinician.
In regions where the prostate boundary was not fully visible in TRUS (e.g., near the base and apex of the gland), segmentations were completed based on prior knowledge of the full prostate shape and by exploiting symmetries. Such ambiguous regions were labeled as "uncertain", since they are not based on observed data, but rather on the clinician's experience. By ignoring any data labeled as uncertain in the segmentation, we created partial surfaces. What remained represented approximately 70% of the original surface, concentrated mostly around the midgland, which is where segmentations are most reliable and consistent between clinicians. The complete segmentation surfaces, which include the uncertain or ambiguous regions, are referred to as full surfaces. No points were removed from the segmented MR, since these segmentations are assumed to be entirely reliable.

Figure 4.7: Example GMM-FEM results for prostatectomy data. The axial slice of TRUS (a) and the corresponding GMM-FEM registration result (b). The checkerboard of (a) and (b) is shown for comparison (c). The deformation map (mm) of the non-rigid registration and the corresponding TRUS slice (dashed line) are shown in (d). © IEEE 2015.

4.3.3.2 Registration to Full and Partial Surfaces

To initialize the registration, we first oriented the MR and TRUS surfaces based on consistent RAS coordinates. We then applied a centre-of-mass alignment, followed by a global registration to compensate for large differences in position and orientation. For the prostatectomy data, we opted for an initial affine registration to compensate for the bulk volume loss due to the ex vivo nature of the data. The residual deformation is then assumed to be mainly biomechanically driven.
For the prostate biopsy data, we used a rigid transform for the initial alignment, representing a basic change in image coordinates.

Prostatectomy

We applied GMM-FEM, as well as the other methods for comparison (TPS-RPM, CPD, and ICP-FEM), to all prostatectomy MR-TRUS pairs. We then repeated the registration using the partial TRUS surfaces to see the impact of ignoring the uncertain regions in the segmentation. An example registration result for full TRUS surfaces is shown in Figure 4.6. For each of the three comparison methods, we tuned the parameters such that the best TRE was achieved. For TPS-RPM, the initial temperature was set to T₀ = 10 mm² and the stopping condition was T ≤ 0.2 mm². For CPD, we used β = 3.5 and λ = 0.1. For ICP-FEM, we set a time step of τ = 0.1, a Poisson's ratio of ν = 0.49, and a Young's modulus of E = 0.1 kPa. Note that this Young's modulus is significantly lower than the value of E = 5 kPa reported in the literature; the discrepancy relates to a scaling of the surface-to-surface forces, which is the role β plays in GMM-FEM.

Figure 4.8: GMM-FEM results with missing data. Model surfaces (blue) are shown along with target points (red) following registration: (a) rigid; (b) TPS-RPM with T₀ = 50 and λ₁/λ₂ = 10⁻⁵; (c) CPD with β = 3.5 and λ = 0.1; (d) ICP-FEM with τ = 0.1, E = 0.1 kPa and ν = 0.49; (e) GMM-FEM with w = 0.1, E = 5.0 kPa, ν = 0.49 and β = 0.05. © IEEE 2015.

To visualize the result of our GMM-FEM registration in the interior of the prostate, we resampled the MR image into the space of the TRUS using the FEM shape functions. An axial slice is shown in Figure 4.7. As seen in Figure 4.7c, the boundary of the prostate matches well to that of the TRUS.
The magnitude of the deformation field along the slice plane is shown in Figure 4.7d. Note that the majority of the correction is made near the boundary of the prostate, which is where the surface-based forces are applied.

Prostate Biopsy

To further validate our registration approach, we applied the algorithms to the biopsy data described in Section 4.3.3.1. A typical registration result for a full-to-partial surface registration is shown in Figure 4.8. The registered images and deformation field on an axial slice are shown in Figure 4.9. Again, we find most of the deformation near the surface.

Figure 4.9: Example GMM-FEM results for prostate biopsy data. The axial slice of TRUS (a) and the corresponding GMM-FEM registration result (b). The checkerboard of (a) and (b) is shown for comparison (c). The deformation map (mm) of the non-rigid registration and the corresponding TRUS slice (dashed line) are shown in (d). © IEEE 2015.

Figure 4.10: Example corresponding fiducial pairs in MR and TRUS for (a) prostatectomy and (b) prostate biopsy data. © IEEE 2015.

Quantitative Validation

To quantify the registration results, we selected a set of intrinsic fiducials in the MR and TRUS images. For our ten prostatectomy cases, we marked up to five calcification pairs in both modalities per patient, for a total of 30. These landmarks were validated by a radiologist. For the biopsy data, we also selected up to five fiducials per patient, consisting of cysts and benign prostatic hyperplasia. This yielded a total of 93 landmarks across all subjects, distributed approximately 64.5% in the mid-gland, 21.5% in the base, and 14% in the apex. An example of corresponding fiducial pairs in MR and TRUS is shown in Figure 4.10. The L2 distance between corresponding fiducials was used to quantify the TRE. We did not attempt to measure the fiducial localization error (FLE).
The biopsy data used in this manuscript was collected as part of a separate study [114, 232], where an FLE of 0.21 mm was reported for TRUS, and 0.18 mm for MR.

The mean and standard deviation of the TRE, as well as p-values for comparisons, are shown in Table 4.1. For both the prostatectomy and biopsy data, the proposed GMM-FEM registration approach was found to consistently outperform TPS-RPM, CPD, and ICP-FEM. Furthermore, the two FEM-based methods (ICP-FEM, GMM-FEM) led to lower TREs compared to the kernel-based ones (TPS-RPM, CPD), suggesting that there is an advantage to incorporating physical prior knowledge in the form of a finite element model.

Table 4.1: TRE (mm) for prostatectomy and biopsy data following affine/rigid and non-rigid registration. The star (*) denotes statistical significance at the 95% confidence level (p ≤ 10⁻⁴) compared to affine/rigid registration.

    Prostatectomy
    Method     Full           Partial
    Affine     3.17 ± 1.38    4.15 ± 1.23
    TPS-RPM    4.19 ± 1.76*   5.26 ± 1.49*
    CPD        4.02 ± 1.12*   4.76 ± 1.07
    ICP-FEM    3.00 ± 1.39*   3.89 ± 1.24
    GMM-FEM    2.65 ± 1.29*   2.89 ± 1.44*

    Biopsy
    Method     Full           Partial
    Rigid      4.01 ± 1.45    4.65 ± 1.31
    TPS-RPM    3.76 ± 1.01    6.50 ± 2.12*
    CPD        3.81 ± 0.91    5.05 ± 1.11
    ICP-FEM    2.95 ± 0.94*   3.72 ± 1.23*
    GMM-FEM    2.72 ± 1.15*   2.61 ± 1.10*

We found that TPS-RPM often failed to produce realistic deformation fields when dealing with partial surfaces. In Figure 4.8, we see that the two ends of the prostate, where data is missing, are flattened after registration. This leads to a large increase in TRE internally, since the compressed deformation field is interpolated inside the volume. In TPS-RPM, points in the source which cannot be matched to the target are assigned to an outlier cluster; we believe this is the cause of the unrealistic inward pull. In Chui and Rangarajan [34], the authors acknowledge that their handling of outliers is not optimal.
The probabilistic approach in CPD and GMM-FEM eliminates the need for special handling of missing data.

To investigate the statistical significance of results compared to a simple rigid or affine global registration, we first determined whether the TRE was normally distributed. Using a one-sample Kolmogorov-Smirnov test, the TREs were found not to be normally distributed at the 95% significance level. As a result, we performed a set of Wilcoxon signed-rank tests, with the null hypothesis that the TREs of initialization versus subsequent non-rigid registration share a common median at the 95% significance level. In Table 4.1, results for which this hypothesis was rejected (and hence are statistically significant) are indicated with a star (*), each resulting in p < 10^-4. Note that in some cases, TPS-RPM and CPD led to an increase in TRE compared to initialization. The increase in TRE for TPS kernels has been previously reported in the literature [43].

To investigate the statistical significance of the proposed algorithm over TPS-RPM, CPD, and ICP-FEM, we performed additional Wilcoxon signed-rank tests between the methods (Table 4.2a). Results show that GMM-FEM does lead to statistically significant improvements in TRE over CPD, TPS-RPM, and ICP-FEM when applied to partial surface observations (p ≤ 10^-6), as well as over CPD and TPS-RPM when applied to full surfaces (p ≤ 10^-6). When comparing to ICP-FEM for fully segmented TRUS surfaces, however, although the TRE is reduced, the signed-rank test fails to reject the null hypothesis. In this case, where full segmentations of both MR and TRUS are available, the improvement is not found to be statistically significant.

Table 4.2: Statistical significance of GMM-FEM results. Statistical significance of the reduction in TRE is compared to other leading methods, and between the full-surface and partial-surface target data.

(a) P-values for method comparisons (GMM-FEM vs. each method):
             Prostatectomy          Biopsy
             Full      Partial      Full      Partial
TPS-RPM      10^-9     10^-12       10^-8     10^-9
CPD          10^-9     10^-9        10^-9     10^-9
ICP-FEM      0.45      10^-6        0.33      10^-9

(b) P-values for full vs. partial data:

Method     Prostatectomy   Biopsy
TPS-RPM    10^-9           10^-8
CPD        10^-7           10^-7
ICP-FEM    10^-6           10^-3
GMM-FEM    0.28            0.42

Table 4.3: Computation time required for each component of GMM-FEM registration, reported across all patients.

FE-meshing      Stiffness Matrix   Average Iteration   Total Time
0.64 ± 0.22 s   2.03 ± 0.27 s      0.15 ± 0.01 s       9.34 ± 3.63 s

To investigate the statistical significance of registration using full versus partial segmentations, we performed four additional sets of signed-rank tests. Results are provided in Table 4.2b. For TPS-RPM, CPD and ICP-FEM, the increase in the TRE is significant, suggesting that these methods are not suitable when the TRUS image is only partially segmented. However, for GMM-FEM, the test fails to reject the hypothesis. This implies that, for our method, a full segmentation of the prostate is not necessary: a partial-surface registration (at least with up to 30% missing points) leads to a comparable TRE. Partially segmenting the surface can help speed up the segmentation process, which must take place during the clinical procedure.

4.3.3.3 Registration Time and Computational Complexity

We implemented GMM-FEM in Matlab (MathWorks, MA), using C++ mex code to interface with TetGen [219]. Timings were recorded on a PC with a 3.4 GHz Intel Core i7 processor and 8.00 GB of RAM. Average times for each step across all patients are summarized in Table 4.3. The total times encapsulate all steps, including mesh generation and stiffness matrix computation. In all cases, the algorithm converged within 100 iterations.

The proposed registration algorithm converges on average within ten seconds on a regular desktop PC. This efficiency is achieved due to the linearity assumption of the FEM, and the sparse nature of the equations involved.
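The normality-check-then-signed-rank procedure used for the comparisons above can be sketched with SciPy. The paired TRE samples below are synthetic stand-ins, and standardizing with parameters estimated from the same sample makes the KS step only an approximation of a proper Lilliefors test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic paired TREs (mm): after initialization vs. after the non-rigid step.
tre_init = rng.gamma(shape=4.0, scale=1.0, size=30)
tre_nonrigid = tre_init - rng.gamma(shape=2.0, scale=0.4, size=30)

# Normality check: one-sample KS test of the standardized sample against N(0,1).
# (Estimating mean/std from the same sample is a simplification; a Lilliefors
# correction would be stricter.)
z = (tre_nonrigid - tre_nonrigid.mean()) / tre_nonrigid.std(ddof=1)
_, ks_p = stats.kstest(z, "norm")

# If normality is rejected, fall back on the paired Wilcoxon signed-rank test:
# the null hypothesis is that the two stages share a common median.
_, w_p = stats.wilcoxon(tre_init, tre_nonrigid)
print(f"KS p = {ks_p:.3f}, Wilcoxon p = {w_p:.2g}")
```

A small Wilcoxon p-value then justifies a star in the tables above; with the synthetic improvement here, every paired difference is positive, so the test rejects decisively.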
Since the stiffness matrix does not change between iterations in a linear material model, it only needs to be computed once given the original volumetric mesh. For the size of models considered in this work, this step takes on average 2.03 seconds. The next most computationally expensive step is the matrix solve in Equation (4.14) for updating the FEM displacements. Since the system is sparse, it can be solved quite efficiently in Matlab. Such sparse solves can typically be computed with complexity between O(M^1.5) and O(M^2), where M is the number of FEM nodes [226].

4.3.4 Section Summary

In this section, we applied our novel non-rigid surface-based registration approach to MR-TRUS fusion for prostate interventions. In particular, we were interested in how the method performed when a full segmentation from MR was fit to a potentially partial segmentation from TRUS. In Section 4.3.3.2, we validated the algorithm on data from both prostatectomies and prostate biopsies. We compared the TRE against three other non-rigid registration methods: TPS-RPM, CPD, and ICP-FEM. The proposed GMM-FEM was found to outperform both TPS-RPM and CPD in all cases. This improvement is shown to be statistically significant. Since the only effective difference between these registration methods is in the regularization scheme (FEM-based vs. Gaussian/TPS-based), we conclude that the FEM component is responsible. Compared to ICP-FEM, our method did not lead to statistically significant improvements when a full TRUS segmentation was available. However, if only a partial segmentation is available, then the improvement was found to be significant. Since the only effective difference between the two methods is in the correspondence scheme (soft vs. binary assignments), we conclude that the GMM component of the proposed framework is responsible.

While there are some techniques that focus on partial-surface registration, most do not consider volumetric properties (e.g.
Poisson effects, incompressibility) during the registration, only surface bending. A 3D deformation field would then need to be interpolated internally based on the fitted surface, as a post-processing step. Physically, it is the internal volume that resists deformation, affecting the shape of the surface. Our method better reflects this. By comparing to CPD, which requires post-processing to recover a 3D field, we show that keeping the FE-regularizer in-loop leads to improved error results internal to the volume.

For a tumor to be considered clinically significant, its radius should be no less than 5 mm. The TRE, as a root-mean-square error, provides an estimate of the standard deviation for biopsy targets. A standard deviation given by a TRE of 2.5 mm yields a confidence interval in which 95.4% of targets fall within the clinically significant 5 mm radius [114]. The current results (2.61 mm) of our GMM-FEM registration bring us closer to these bounds.

The presented reduced method, GMM-FEM, is shown to be both efficient and robust. It was designed to handle cases where visibility can limit the accuracy of boundary segmentation in some regions of an organ, leading to the need to handle missing data. However, the method also performs well when full segmentations are available. It was shown to outperform current state-of-the-art surface-registration techniques: TPS-RPM, CPD, and ICP-FEM. We believe this makes it a practical approach to consider for use in image-guided interventions.

4.4 Dynamic Statistical Registration

So far we have assumed in the dynamic registration process that we begin with an appropriate biomechanical model describing a particular subject's anatomy. This model not only defines the transformation function for registration, but also acts as a regularizer. However, anatomical geometries often differ between subjects. Using a model derived from one subject may not be appropriate when applied to another.
On the other hand, creating a new model for each person would not only be tedious, but may require data from detailed high-resolution scans, such as MRI. This may not always be practical. To avoid needing to manually construct a model on a subject-by-subject basis, it would be extremely useful if we could simply adapt one while registering to the target data directly.

In this section, we describe an approach for adapting a model to subject-specific geometries based on population statistics through the use of a statistical shape model (SSM) [44].

4.4.1 Statistical Shape Models

An SSM describes shape statistics based on a set of training data. It usually consists of a mean shape, plus a combination of linear variational modes:

    x = x̄ + Σ_i b_i Ψ_i = x̄ + Ψb,

where x is a vector of concatenated positions describing a subject-specific shape, x̄ is a vector of positions describing the mean shape for the population, {Ψ_i} are a set of variational modes that can be linearly added to the mean, and {b_i} are subject-specific shape parameters that uniquely define a particular instance. The mean shape x̄ and variational modes {Ψ_i} are typically determined by performing a principal component analysis on the set of training shapes. Given a collection of training shapes {x_t} in the same coordinate space, the mean and variational modes can be computed as

    x̄ = (1/T) Σ_{t=1}^{T} x_t,
    C = 1/(T−1) Σ_{t=1}^{T} (x_t − x̄)(x_t − x̄)^T = ΨΣΨ^T,

where T is the number of training instances, C is the sample covariance matrix, Ψ is the set of singular vectors of the covariance (i.e. the principal components), and Σ is a diagonal matrix containing the corresponding squared singular values {σ_i²}. From these, we can select a subset of singular vectors which describe the majority of the variation.
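The construction above, including truncation of the modes, can be sketched in a few lines of numpy; the function name and the toy data are hypothetical, and the SVD of the centred data matrix stands in for an explicit eigendecomposition of C:

```python
import numpy as np

def build_ssm(training_shapes, variance_kept=0.95):
    """PCA-based SSM from corresponded training shapes (a sketch).

    training_shapes: (T, 3V) array; each row concatenates the vertex
    coordinates of one training surface (point correspondence assumed).
    Returns the mean shape, the retained modes Psi (3V x N), and the
    per-mode variances (squared singular values of the covariance).
    """
    X = np.asarray(training_shapes, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centred data yields the principal modes directly,
    # without forming the (3V x 3V) covariance matrix.
    _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = S**2 / (X.shape[0] - 1)
    frac = np.cumsum(var) / var.sum()
    n_modes = int(np.searchsorted(frac, variance_kept) + 1)  # first N capturing 95%
    return mean, Vt[:n_modes].T, var[:n_modes]

# Instance generation follows x = mean + Psi @ b for shape parameters b.
rng = np.random.default_rng(1)
shapes = rng.normal(size=(20, 30))            # 20 toy "shapes" of 10 vertices
mean, Psi, var = build_ssm(shapes)
instance = mean + Psi @ (0.5 * np.sqrt(var))  # half a std-dev along each mode
print(Psi.shape, instance.shape)
```

Because the modes come from an SVD, the columns of Psi are orthonormal, which is what makes the diagonal Σ regularizer in the fitting objective below so cheap to apply.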
For example, choosing the first N such that Σ_{i=1}^{N} σ_i² / Σ_{j=1}^{T} σ_j² > 0.95 will ensure we capture approximately 95% of the total variation. In what follows, we take Ψ and Σ to be the reduced versions.

Given new data y, we can attempt to find the best-fitting shape parameters b by minimizing the objective function

    E_b(b) = (1/2) ‖y − sR(x̄ + Ψb) − t‖² + (µ/2) b^T Σ^{-1} b.    (4.16)

The transform defined by the scale, rotation, and translation (s, R, t) accounts for changes in the coordinate space. The second term in the objective function is added as a form of Tikhonov regularization with parameter µ to limit the magnitude of the shape parameters b. This ensures we do not stray too far from the variation observed in the training data. To minimize, we first assume b is fixed and compute (s, R, t) using ordinary Procrustes analysis. We then take the coordinate transform to be fixed and compute the optimal b. These two steps can be iterated until convergence is achieved. The minimizing parameters b then define the subject-specific instance best described by the population statistics.

Surface-based SSMs have been widely used in the medical segmentation and registration literature (e.g. [35, 49, 61, 91, 199]). Originally, correspondences between points in the training data were assigned manually [44]. Automatic pair-wise construction techniques have since been proposed, where one shape is arbitrarily chosen as the mean, and is registered to all other shapes in the training set. Correspondences are then estimated based on proximity. Unfortunately, this approach is inherently biased towards the selected mean shape, which may reduce the generality of the resulting SSM. To remove this bias, group-wise registration techniques have been developed [12, 199], where both the mean shape and its pair-wise transformations to the training data are considered unknown.
Instead, they are concurrently estimated by registering the entire group together at once.

Statistical shape models have been applied in multi-modal image registration. Chowdhury et al. [33] investigated the concurrent segmentation of the prostate in MR and CT using a linked-SSM. This linked-SSM has two separate mean shapes: one for MRI and one for CT. Fitting the SSM to a new MRI instance predicts the corresponding prostate boundary in CT due to the linkage. The implicit assumption is that changes in shape between modalities are consistent and predictable. This may hold in CT and MRI as long as imaging conditions are similar for all subjects. Unfortunately, the assumption breaks down for transrectal ultrasound, where the probe location, orientation, and pressure will inevitably vary between subjects. To address this, we add our dynamic modelling approach to the framework.

4.4.2 Dynamic SSM Registration

To add SSMs to our existing registration workflow, we no longer assume that the dynamic model is known, but that its surface is somehow described by a mean shape and linear modes of variation. In the case of finite-element models, the SSM can be thought of as describing the surface statistics of the 'rest shape' of the tissue. Given a set of observations, our task will be to simultaneously estimate a subject-specific rest shape, as well as any elastic deformations or other motion.

For a single observation, adding the SSM to our objective function leads to:

    E(u, λ, z, σ², s, R, t, b) = (β/2N) Σ_{mn} w_{mn} ‖y_n − x_m‖² + R(σ²) + L(u, λ, z) + (µ/2) b^T Σ^{-1} b.

The source surface x depends on the rest model described by parameters b, the scaled coordinate transform (s, R, t), and the model displacement vector u:

    x_m = sR(x̄_m + Ψ_m b + û_m + Φ_m δu) + t,    (4.17)

where Ψ_m corresponds to the rows of Ψ that describe the SSM surface point x_m, û_m is the previous displacement of the SSM surface point, and δu is the new incremental displacement vector to compute.
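The composition in Equation (4.17), SSM instance, plus accumulated and incremental displacements, then a scaled rigid transform, can be sketched as follows; the function name and toy dimensions are hypothetical, and vertices are kept as (V, 3) arrays for readability:

```python
import numpy as np

def deformed_source_surface(mean, Psi, b, u_prev, Phi, du, s, R, t):
    """Compose the source surface of Equation (4.17) for one observation.

    mean, u_prev : (V, 3) mean-shape vertices and accumulated displacements
    Psi          : (3V, N) SSM modes; b : (N,) shape parameters
    Phi          : (3V, 3M) interpolation of FEM node increments du : (3M,)
    s, R, t      : scale, 3x3 rotation, and translation of the SSM frame
    """
    V = mean.shape[0]
    rest = mean + (Psi @ b).reshape(V, 3)             # SSM 'rest' instance
    deformed = rest + u_prev + (Phi @ du).reshape(V, 3)
    return s * deformed @ R.T + t                     # map into target space

# Identity pose and zero deformation: the surface is just the SSM instance.
V, N = 4, 2
mean = np.zeros((V, 3))
Psi = np.eye(3 * V)[:, :N]        # toy modes: unit vectors on vertex 0
x = deformed_source_surface(mean, Psi, np.ones(N), np.zeros((V, 3)),
                            np.eye(3 * V), np.zeros(3 * V),
                            1.0, np.eye(3), np.zeros(3))
print(x[0])  # → [1. 1. 0.]
```

Only the incremental term δu changes within an iteration; the SSM instance and accumulated displacements are held fixed, which is what makes the alternating minimization below tractable.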
Note that the deformation described by the model is in the coordinate space of the SSM. This is then mapped to the target data by a scaled coordinate transform. Minimizing in the usual alternating way will allow us to fit our shape-plus-dynamics model to the observed data, with the regularization parameters β and µ balancing between deformations and adjusting the rest shape. The true power, however, comes from fitting the model to multiple observations.

If we have two or more target surfaces from the same subject (e.g. by segmenting data from multiple image sources or acquired at different times), then we can attempt to estimate both a common rest shape, and independent sets of displacements for each target. The multiple-observation objective function is

    E({u^(i)}, {λ^(i)}, {z^(i)}, {[σ^(i)]²}, {s^(i)}, {R^(i)}, {t^(i)}, b)
        = Σ_i [ (β/2N^(i)) Σ_{mn} w_mn^(i) ‖y_n^(i) − x_m^(i)‖² + R^(i)([σ^(i)]²) + L^(i)(u^(i), λ^(i), z^(i)) ] + (µ/2) b^T Σ^{-1} b,

where i sums over all independent observations. When computing the correspondences, displacements, and Lagrangian multipliers, each observation can be treated independently. However, to update the shape parameters, we keep all else fixed and differentiate with respect to b:

    0 = ∂E/∂b
      = Σ_i (β/N^(i)) Σ_{mn} w_mn^(i) [∂x_m^(i)/∂b]^T [x_m^(i) − y_n^(i)] + µΣ^{-1}b
      = Σ_i (β/N^(i)) Σ_{mn} w_mn^(i) [s^(i)R^(i)Ψ_m]^T [s^(i)R^(i)(x̄_m + Ψ_m b + u_m^(i)) + t^(i) − y_n^(i)] + µΣ^{-1}b

    ⟹ [ β Σ_i ([s^(i)]²/N^(i)) Σ_{mn} w_mn^(i) Ψ_m^T Ψ_m + µΣ^{-1} ] b
        = β Σ_i (s^(i)/N^(i)) Σ_{mn} w_mn^(i) [R^(i)Ψ_m]^T [y_n^(i) − s^(i)R^(i)(x̄_m + u_m^(i)) − t^(i)],    (4.18)

where u_m^(i) is the current displacement of the point on the shared SSM instance surface x_m. This is a dense linear system that needs to be solved at each iteration, which is tractable as long as the number of modes in the SSM model is kept sufficiently low. The resulting optimal shape parameters b are used to update the estimate of the subject's rest shape.

An additional step may be required after the rest surface update to ensure the underlying model adheres to the new SSM instance.
For example, in finite element models, we need to adjust or regenerate the volumetric mesh to satisfy the new surface requirements. The approach we take is to simply move the surface nodes and leave the internal nodes as-is. If a surface element becomes poorly conditioned or inverted, we then re-generate the volumetric mesh and map the FEM displacement field to the new node locations. For rigid structures, the body's surface can simply be adjusted, and its inertia recalculated.

To demonstrate the utility of this approach, we apply this multi-target dynamic SSM registration in the context of prostate interventions, where we have pairs of images that are each partially segmented. Due to the poor prostate visibility in both sets of images, we do not have a reliable surface from which we can generate a model and estimate dynamics. The SSM helps to fill this knowledge gap, producing a subject-specific estimate of the underlying rest geometry, which can then be used to dynamically register to both partial segmentations.

4.5 SSM-FEM: Partial Surface Registration for Prostate Interventions

A common challenge when performing surface-based registration of images is ensuring that the surfaces accurately represent consistent anatomical boundaries. Image segmentation may be difficult in some regions due to either poor contrast, low slice resolution, or tissue ambiguities.

Figure 4.11: Overview of SSM-FEM. The pre-operative (MR) image is acquired prior to the procedure and is segmented to create a surface representation of the anatomy (e.g. the prostate). At the beginning of the procedure, an intra-operative (3D-TRUS) image is acquired and highly visible parts of the anatomy (mid-gland) are segmented. SSM-FEM deformably maps both surfaces and interior volume to a common intermediate SSM instance. Subsequently, the MRI is registered to the TRUS through this intermediate shape. © IEEE 2015.
To address this, we apply our novel non-rigid surface registration method, designed to register two partial surfaces and capable of ignoring regions where the anatomical boundary is unclear. Our probabilistic approach incorporates prior geometric information in the form of a statistical shape model (SSM) to estimate an intermediate 'rest' shape, and physical knowledge in the form of a finite element model (FEM) to estimate the independent deformations observed in each partial surface target. Thus, we call this approach SSM-FEM.

We validate our registration framework in the context of prostate interventions by registering pre-operative magnetic resonance imaging (MRI) to intra-operative 3D transrectal ultrasound (TRUS) for 19 patients who underwent prostate biopsies. Both the geometric and physical priors are found to significantly decrease net target registration error (TRE), leading to TREs of 2.35 ± 0.81 mm and 2.81 ± 0.66 mm when applied to full and partial surfaces, respectively. We investigate robustness in response to errors in segmentation, and varying levels of missing data. Results demonstrate that the proposed surface registration method is an efficient, robust, and effective solution for fusing data from multiple modalities.

4.5.1 Clinical Workflow

An outline of the proposed MR-TRUS fusion is shown in Figure 4.11. The MR image can be acquired any time prior to the biopsy. It is subsequently contoured in axial slices by tracing the boundary of the prostate only where it can be reliably segmented, as determined by a clinician. We refer to the partial MR segmentation as the MR observation.

At the beginning of the biopsy procedure, the radiologist acquires a sweep of the prostate gland using tracked 2D-TRUS. We used an axial rotation along the probe with a mechanical system [15] to acquire this sweep; however, other trackers/sweep protocols [269] should perform just as well. These tracked slices are then used to generate a 3D-TRUS volume [69].
The prostate is subsequently segmented where its boundary can be reliably traced. We refer to the partial TRUS segmentation as the US observation. We then perform the proposed SSM-FEM registration to map the MR image into the space of TRUS. Throughout the procedure, mono-modality 2D/3D TRUS registration is used to compensate for intra-procedure motions and deformations [54].

4.5.2 Partial Surface to Partial Surface Registration

We treat the registration problem as two coupled SSM to partial-surface registrations, which are solved concurrently. The SSM is used to construct two separate probability density functions that define the boundary of the structure, one for each modality. We then wish to find:

1. The single set of SSM modes that best fit both partial observations.
2. Two non-rigid transformations that map the SSM to each observation, independently.

To construct the SSM, we use a group-wise GMM approach [199]. This has been shown to provide a general and compact representation of surfaces. Details of the training are provided in Section 4.5.5.1. The result is a linear SSM model where shapes can be generated by adding a combination of modes of variation to a mean shape. The linear weights, referred to as shape parameters, along with the modes of variation define a single instance of the SSM. Parameters that control the rigid coordinate transformation of an instance are referred to as pose parameters, and include scale, rotation and translation. During registration, a common instance of the SSM is computed and used to create a finite element model of the prostate's estimated 'rest' shape. This is used to compute and limit internal deformations. The key idea is that the surfaces in US and MR represent deformed observations of a single common organ. Hence, the solution to the SSM shape parameters depends on both. Since the two images are acquired at different times and under different contexts, any deformations are considered independent.
We therefore compute two separate deformation fields, one for each modality. The shape parameters and deformations are combined in a single framework, where the goal is to minimize net deformation and surface-to-surface errors.

4.5.3 SSM-FEM

We simplify the dynamic statistical registration approach from Section 4.4 to the case of an unconstrained finite-element model of the prostate. For simplicity, we consider a linear material model, which allows us to express the elastic energy as

    L^(i)(u^(i)) = (1/2) [u^(i)]^T K [u^(i)],    (4.19)

where K is the linear stiffness matrix for the common underlying finite element model. This matrix depends on two elasticity parameters: the Young's modulus E and Poisson's ratio ν, which are inputs to the registration.

To relate positions on the surface of the FEM to the positions of its nodes, we interpolate using the finite element shape functions:

    x_m^(i) = Σ_j φ_j(X_m) n_j^(i) = Φ_m n^(i),  ⟹  δx_m^(i) = Φ_m δu^(i),    (4.20)

where X_m is the material location of surface point x_m^(i), and {n_j^(i)} are the finite element node locations related to target i. In our implementation, all points on the surface correspond to nodes, and internal nodes are appended to the end of the node list. As a result, the interpolation matrix has the form Φ_mi = δ_mi I_{3×3}. The combined objective function is therefore given by

    E({u^(i)}, {[σ^(i)]²}, {s^(i)}, {R^(i)}, {t^(i)}, b)
        = Σ_i [ (β/2N^(i)) Σ_{mn} w_mn^(i) ‖y_n^(i) − x_m^(i)‖² + R^(i)([σ^(i)]²) + (1/2)[u^(i)]^T K u^(i) ] + (µ/2) b^T Σ^{-1} b,    (4.22)

where w_mn^(i) are the target-specific GMM correspondence weights from Equation (4.7), and R^(i) is the GMM variance term from Equation (4.5). Following Myronenko and Song [176], we evaluate the correspondence weights based on the previously computed displacements, and employ an alternating minimization strategy.
This allows us to compute the optimal pose, rest shape, and displacements separately.

Pose

To compute the target-specific pose parameters, we perform a weighted orthogonal Procrustes registration between the current deformed source surface x^(i) and the target y^(i). We start by computing the weighted mean source and target locations:

    w̄^(i) = Σ_{mn} w_mn^(i),
    x̄^(i) = Σ_{mn} (w_mn^(i)/w̄^(i)) x_m^(i),
    ȳ^(i) = Σ_{mn} (w_mn^(i)/w̄^(i)) y_n^(i).

Next we compute the covariance matrix and its singular-value decomposition:

    C^(i) = Σ_{mn} w_mn^(i) [y_n^(i) − ȳ^(i)][x_m^(i) − x̄^(i)]^T = U^(i) S^(i) [V^(i)]^T.

From these, we can determine the optimal scale, translation, and rotation as

    s^(i) = sqrt( Σ_{mn} w_mn^(i) ‖y_n^(i) − ȳ^(i)‖² / Σ_{mn} w_mn^(i) ‖x_m^(i) − x̄^(i)‖² ),    (4.23a)
    t^(i) = ȳ^(i) − x̄^(i),    (4.23b)
    R^(i) = U^(i) Λ^(i) [V^(i)]^T,  Λ^(i) = diag([1, 1, |U^(i)||V^(i)|]).    (4.23c)

These pose parameters can be updated at each iteration. In some situations, estimation of the pose may not be necessary. If the SSM modes also account for changes in scale between subjects, then s^(i) should be ignored. In the case of a finite-element model, the FEM displacement field can compensate for translation, so computation of t^(i) is unnecessary. The rotation parameter R^(i) helps eliminate rotational artifacts for linear FEM materials, where changes in orientation can induce artificial forces. However, for non-linear or corotated materials, rotation can similarly be ignored.

Shape

To update the shape parameters, we keep the pose and target-specific displacements fixed, and solve the linear system in Equation (4.18) for the new b. This updates the rest surface, which we in turn use to update the rest state of the FEM. Since in our case each vertex on the surface corresponds to a node, we simply move the surface nodes to their new rest locations. If any elements become inverted, then we re-mesh the surface to produce a new model. Remeshing invalidates the displacement vectors, which may no longer contain the same number of nodes.
To correct this, we could interpolate the displacement field from the original model to the new node locations. However, this step is only necessary if using a non-linear material, where the displacement field is computed incrementally. For a linear material, we compute the full displacement vectors at each iteration, so they will automatically be updated. Remeshing is typically only required in the first few iterations if there is significant deformation observed. As the system converges, the rest surface typically only undergoes small variations.

Deformation

To update the displacements, we keep the shape and pose parameters fixed and minimize the objective function in Equation (4.22) with respect to the displacements u^(i):

    ∂E/∂u^(i) = (β/N^(i)) Σ_{mn} w_mn^(i) [∂x_m^(i)/∂u^(i)]^T [x_m^(i) − y_n^(i)] + K u^(i)

    0 = (β/N^(i)) Σ_{mn} w_mn^(i) [s^(i)R^(i)Φ_m]^T [s^(i)R^(i)(x̄_m + Ψ_m b + Φ_m u^(i)) + t^(i) − y_n^(i)] + K u^(i)

    ⟹ [ β ([s^(i)]²/N^(i)) Σ_{mn} w_mn^(i) Φ_m^T Φ_m + K ] u^(i)
        = (β/N^(i)) Σ_{mn} w_mn^(i) [s^(i)R^(i)Φ_m]^T [y_n^(i) − s^(i)R^(i)(x̄_m + Ψ_m b) − t^(i)].    (4.24)

These displacements are computed independently for each target surface i.

A summary of the SSM-FEM algorithm is outlined in Algorithm 4.1. Once the SSM-FEM registration has converged, we propagate the deformed models into the space of the TRUS and MR images, allowing us to express voxels in either modality using the natural coordinates and shape functions of the FEM. Thus, for voxels inside the prostate in one modality, it is possible to find the corresponding voxel in the other.

For material properties, we apply a homogeneous elastic material with a constant Young's modulus to all elements. The models used in this study are composed of approximately 7500 elements. Throughout our experiments, we used a stopping condition of σ² ≤ 10^-4 mm², a Young's modulus of E = 5 kPa, which is in the range of values reported in Kemper et al.
[117] for theprostate, and a Poisson’s ratio of ν = 0.49 to maintain near incompressibility.Input ParametersIn total, there are six free parameters in the proposed method: µ, β, {w(i)}i∈(MR,US), and the twoFEM-specific material parameters E and ν.The first two registration parameters are tunable. The first, µ, controls the impact of SSMmodes. If set too high, the SSM will be restricted to the mean shape. If too low, the SSMmay produce unlikely instances. For our model, a value of µ = 400 allowed reasonable variations113Data Fusion 4.5 SSM-FEMInputs: E, ν, µ, x¯, Ψ, β, {y(i)} and {w(i)};Initialization:b = 0;for i ∈ {MR,US} dou(i) = 0;Initialize σ(i) using Equation (4.8b);endwhile not converged doExpectation:for i ∈ {MR,US} doUpdate w(i)mn using Equation (4.7);endPose Estimation:for i ∈ {MR,US} doUpdate s(i), R(i) and t(i) using Equation (4.23);endRest Volume Estimation:Update b using Equation (4.18);Update rest shape x = x¯+ Ψb;Update FEM model for new surface x;Biomechanical Registration:for i ∈ {MR,US} doUpdate u(i) using Equation (4.24);endVariance Evolution:for i ∈ {MR,US} doCompute GMM correspondence variance σ(i) using Equation (4.8a);endendAlgorithm 4.1: SSM-FEM Registration Summaryfrom the mean. The second parameter, β, controls the balance between implicit surface-to-surfaceforces and internal resistance to deformation provided by the FEM. It should be tuned to allowfor “reasonable” flexibility of the model. We found β = 5000 to be sufficient for this prostateapplication. This parameter can be viewed as scaling external forces acting on the prostate to fallwithin a reasonable range.The fraction of outliers, {w(i)}, is a property of the quality of the target data. For high-qualitydata, we set w(i) = 0. If some noise or segmentation variability is expected, then w(i) = 0.1 issuggested. 
Otherwise, if over-fitting to extraneous points is observed for a particular target, then w^(i) should be increased.

The elasticity parameters E and ν control the model's physical response to the forces driving registration. In linear materials, the Young's modulus E can be factored out of the stiffness matrix. By dividing Equation (4.24) through by β, we can see that it is the ratio E/β that impacts the computed displacements. Thus, the actual value of the Young's modulus has no impact on the registration as long as β and µ are scaled appropriately. However, we still recommend keeping a reasonable value for consistency. Finally, Poisson's ratio ν controls incompressibility effects: when compressed along one dimension, how much the material expands perpendicularly. For incompressible materials, we want ν ≈ 0.5. Most soft tissues, including the prostate, are considered nearly incompressible, so we set ν = 0.49 [117].

4.5.4 Registration Methods for Comparison

The proposed SSM-FEM registration method consists of two priors: 1) geometric (SSM); and 2) biomechanical (FEM). To highlight the importance of each component, we remove each individually, and compare results.

Keeping only the geometric prior, we implicitly assume that the SSM, TRUS and MR surfaces are in the same biomechanical state. The concurrent registration of an SSM to both TRUS and MR surfaces without the biomechanical component is equivalent to a group-wise rigid SSM registration. We refer to this approach as SSM registration.

On the other hand, if a complete subject-specific surface of the prostate is available, such as through a full segmentation of the MRI, then there is no need for the geometric prior. In this case, we can treat the fully segmented MR surface as the source rather than relying on the SSM. We solve only for an initial coordinate transform and residual biomechanical deformations.
This is precisely the GMM-FEM approach from Section 4.3.

4.5.5 Experiments and Results

We evaluate our registration approach on MR-TRUS image pairs acquired from patients who underwent a prostate biopsy. In this section, we describe the data collection protocols for both the SSM training data and the biopsy image acquisition and segmentation. We then validate the accuracy of our registration algorithm by measuring the target registration error (TRE) between pairs of intrinsic fiducials in the prostate. To highlight the importance of the SSM, we compare the outcome of the combined SSM-FEM registration with the similar registration that excludes either the SSM or FEM components.

4.5.5.1 Data

The data acquisition protocols were approved by our institutional ethics boards, and all patients provided written consent to be included in the study.

SSM Construction

To construct a statistical model that represents inter-subject variation of prostate shapes, we require a large set of training data, ideally collected under similar mechanical conditions so that differences in appearance are mainly due to natural inter-subject shape variations. A suitable source is a large population of prostate images acquired in a single modality. The choice of modality is important, since the technique may affect the prostate shape (e.g., forces from end-firing probes or endorectal coils). Finally, for surface-based SSMs, the prostate needs to be accurately and reliably segmented.

Brachytherapy volumes are routinely acquired and segmented in large clinics. We used a dataset of images acquired from 290 brachytherapy patients in the construction of our SSM. Each TRUS volume consists of 7 to 14 parallel, equally spaced (5 mm apart) axial B-mode images of the prostate, captured using a side-firing transrectal probe. The in-plane resolution of these images was 0.16 × 0.16 mm.
For each B-mode image, the prostate gland was delineated using Variseed (Varian Medical Systems, Palo Alto, CA, USA) and a contouring plug-in. This plug-in is based on the semi-automatic prostate segmentation method presented by Mahdavi et al. [160]. Contours were manually corrected by an expert clinician, and subsequently used as the SSM training population.

To construct the SSM, we used the group-wise GMM method of Rasoulian et al. [199]. This resulted in a mean shape consisting of 1000 vertices and 1996 faces. The mean and primary modes of variation are depicted in Figure 4.12. As seen in the figure, the first two modes control the scale, whereas the third mode controls the curvature of the prostate.

The choice of the number of modes is governed by the compactness of the SSM. Compactness simply measures the cumulative variance of the model as modes are added in descending order of eigenvalues [91]. A popular rule is to keep the first L modes that span 95% of the total variation. For our SSM, this resulted in a model with 50 principal modes.

Prostate Biopsy

Validation data was acquired from 19 patients scheduled for a prostate biopsy. In Figure 4.13, a typical example of an MR and TRUS image pair is shown. The T2-weighted MR images were acquired using a 3 Tesla GE Excite HD MRI system (Milwaukee, WI, USA) with a spacing of 0.27 × 0.27 × 2.2 mm. Slices from the base, mid-gland and apex are shown in Figures 4.13a, 4.13b and 4.13c, respectively. The TRUS images were acquired using a 3D-TRUS mechanical biopsy

Figure 4.12: Prostate SSM modes of variation. Graphical representations are shown of the prostate shape after varying the weights corresponding to the first three principal modes of variation, each within three standard deviations (3√λ). The mean shape is shown as a green model in the middle column. The amount of variation in the left and right columns is color coded for each mode.
© IEEE 2015.
system [15] with a Philips HDI-5000 US machine and a C9-5 transducer, using an axial rotation of the biopsy probe. The TRUS images were reconstructed into a 3D volume with a spacing of 0.19 × 0.19 × 0.19 mm. Figures 4.13d, 4.13e and 4.13f show slices from the base, mid-gland and apex, respectively. In each modality, areas around the mid-gland (e.g., Figures 4.13b and 4.13e) were segmented where the prostate boundary could be reliably and accurately traced, as determined by an expert clinician. We refer to these contours as a partial segmentation. Additionally, we attempted to segment regions of the prostate where the boundary was not clearly visible, based on symmetries and prior knowledge of the general prostate shape. White arrows in Figure 4.13 indicate examples of these uncertain regions. These additional contours are referred to as uncertain data. The combination of both uncertain and partial data makes up the full segmentations. Example segmentations of MR and TRUS volumes are shown in Figures 4.13g and 4.13h, respectively. For the data used in this study, the partial segmentations of MR represent approximately 80% of the full surface, and the partial segmentations of TRUS approximately 70%. All segmentations were performed manually using Stradwin (Cambridge University, UK).

Note that even though the prostate boundary is typically more visible in MR than TRUS, it is still prone to segmentation error [223]. For example, since axial MR slices are typically very thick, it may be ambiguous where the prostate ends and the bladder begins, as the prostate may terminate between slices. Another source of MR segmentation error, specifically at the base, is the potential inclusion of the seminal vesicles, as this varies between experts.

Figure 4.13 (a–c: MR base, mid-gland, apex; d–f: TRUS base, mid-gland, apex; g: MR segmentation; h: TRUS segmentation): Ambiguity of prostate boundary in MR (top) and TRUS (middle).
Typically, the prostate boundary can be accurately and reliably segmented in the mid-gland, i.e., (b) and (e). White arrows highlight regions where the true prostate boundary is ambiguous. (g) and (h) depict the resulting segmentations of MR and TRUS, respectively. Partial reliable segmentations are color coded in green, uncertain contours in red. © IEEE 2015.

Figure 4.14 (a: initial; b: registered): Example SSM-FEM registration with w = 0.0, µ = 400, β = 5000, E = 5.0 kPa and ν = 0.49, applied to full data (TRUS on the left, MR on the right). Following center-of-mass initialization (a), the SSM mean is evolved to the target surfaces (b). The target surfaces are shown in red, the current SSM instance in green, and the result of SSM-FEM registration in blue. © IEEE 2015.

Figure 4.15 (a: initial; b: registered): Example of SSM-FEM registration with missing data. Registration is performed between two partial surfaces, with w = 0.2, µ = 400, β = 5000, E = 5.0 kPa and ν = 0.49 (TRUS on the left, MR on the right). Following center-of-mass initialization (a), the SSM mean is evolved to the target surfaces (b). The target surfaces are shown in red, the current SSM instance in green, and the result of SSM-FEM registration in blue. © IEEE 2015.

4.5.5.2 Registration of Full and Partial Surfaces

We used the biopsy data to validate our registration pipeline on both the pairs of full surfaces and the pairs of partial surfaces. To initialize translation, we use a center-of-mass alignment between the SSM mean and the target, as seen in Figures 4.14a and 4.15a. For the initial rotation, we simply matched the right-anterior-superior (RAS) coordinates.

Following initialization, we apply the SSM-FEM algorithm until registration converges. A

Figure 4.16: Examples of corresponding fiducial pairs in MR (left) and TRUS (middle) images. The composite image following SSM-FEM registration is shown in the right column.
The segmented prostate boundary for MR and TRUS is shown in blue and red, respectively. © IEEE 2015.
typical result for full MR-TRUS surfaces is shown in Figure 4.14b. The Tikhonov regularization weights were tuned to allow sufficient flexibility while still resulting in plausible prostate shapes. This resulted in a choice of µ = 400 and β = 5000. Since the quality of the data was quite high and the surfaces sufficiently smooth, we set the estimated fraction of outliers, w(i), to zero. For a fair comparison, we used the same parameters for the SSM and GMM-FEM registration methods as well.

The same experiment was applied to the partial MR and TRUS surfaces. A typical SSM-FEM registration result for a partial MR-TRUS surface pair is shown in Figure 4.15. We used the same Tikhonov regularization weights as for the full data. However, since partial surfaces exhibit missing points, we set the estimate of noise and outliers to w(i) = 0.2. As demonstrated by Myronenko and Song [176], this helps avoid falling into local minima due to poor initialization. The same parameters were used for the SSM and GMM-FEM registration methods.

Quantitative Validation

To quantify the registration results, we asked an expert radiologist to select five intrinsic fiducials per patient, consisting of cysts and benign prostatic hyperplasia. While we did not follow a strict protocol regarding the precise anatomical location of these landmarks, we did ask our clinical collaborator to only draw samples interior to the prostate. This resulted in a total of 93 landmarks from the cohort of 19 patients.

Table 4.4: TRE results for SSM-FEM comparisons. The p-values reflect Wilcoxon signed-rank tests compared to initialization.

  Method     TRE (mm), Full   TRE (mm), Partial   p-value, Full   p-value, Partial
  Initial    4.86 ± 1.43      4.53 ± 1.35         NA              NA
  SSM        3.86 ± 1.26      3.95 ± 1.26         10−2            0.30
  GMM-FEM    2.72 ± 1.15      4.89 ± 1.46         10−3            0.27
  SSM-FEM    2.35 ± 0.81      2.81 ± 0.66         10−4            10−4
The spatial distribution of these landmarks was: 64.5% in the mid-gland, 21.5% in the base and 14% in the apex. An example of corresponding fiducial pairs in MR and TRUS, as well as the fused MR-TRUS image following SSM-FEM registration, is shown in Figure 4.16. The L2 distance between these fiducial pairs was used to quantify the TRE. The fiducial localization error (FLE) is approximately 0.21 mm for TRUS and 0.18 mm for MR, as reported in a previous study [114, 232]. This suggests that the FLE is not likely to dominate.

The mean and standard deviation of the TREs, as well as the corresponding p-values when compared to initialization, are shown in Table 4.4. To investigate statistical significance, we first check whether or not the TRE distributions are normally distributed. Using a one-sample Kolmogorov-Smirnov test, the TREs were found not to be normally distributed at the 95% significance level (p ≤ 10−4). As a result, we performed a set of Wilcoxon signed-rank tests, which are robust to deviations from normality. The null hypothesis is that the TREs of the initialization and the subsequent registration method share a common median, tested at the 95% significance level.

For full surfaces, all three methods (SSM, GMM-FEM and SSM-FEM) significantly decrease the TRE compared to an initial rigid registration. The mean TRE improvement for the SSM, GMM-FEM and SSM-FEM registration methods is 1.00 mm, 2.14 mm and 2.51 mm, respectively. The mean TRE for SSM-FEM registration is 1.51 mm lower than for SSM registration alone. This suggests that the transformation between MR and TRUS surfaces is not rigid: substantial mechanically-induced deformations exist. For partial surfaces, SSM led to a decrease in mean TRE of 0.58 mm, and GMM-FEM to an increase of 0.36 mm. However, these differences are deemed not statistically significant at the 95% level.
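The validation statistics above follow a simple recipe: screen the TRE distribution for normality, then fall back to a paired, rank-based test. A minimal sketch with SciPy, using synthetic stand-in TRE values rather than the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for paired TREs (mm), one value per fiducial:
# an initialization error and an improved post-registration error.
tre_init = rng.gamma(shape=9.0, scale=0.5, size=93)
tre_reg = np.clip(tre_init - rng.gamma(shape=4.0, scale=0.5, size=93),
                  0.05, None)

# 1) Normality screen: one-sample Kolmogorov-Smirnov test of the
#    standardized TREs against a standard normal. (Estimating the mean
#    and std from the data makes this an optimistic screen.)
z = (tre_reg - tre_reg.mean()) / tre_reg.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")

# 2) Non-normal data: paired Wilcoxon signed-rank test, whose null
#    hypothesis is that the paired differences are symmetric about zero.
w_stat, w_p = stats.wilcoxon(tre_init, tre_reg)
print(f"K-S p = {ks_p:.3f}, Wilcoxon p = {w_p:.3g}")
```

Because the Wilcoxon test operates on ranks of the paired differences, it remains valid for the skewed, non-normal TRE distributions observed here.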
The 1.72 mm decrease in TRE of the proposed SSM-FEM, however, is significant at the 95% level (p < 10−4).

Table 4.5 shows the TRE following SSM-FEM registration for fiducials at the base, mid-gland and apex, separately. Errors in the mid-gland are consistently lower compared to those in the base and apex for both full and partial data. The reason is likely that prostate segmentations are more reliable here than at the base and apex, leading to fewer assumptions when reconstructing the deformation field.

Table 4.5: TRE results for SSM-FEM by region. The TRE is measured at the base, mid-gland and apex for full and partial data (mm).

  Data      Base          Mid-gland     Apex
  Full      2.87 ± 0.75   1.95 ± 0.61   2.53 ± 0.91
  Partial   3.12 ± 0.65   2.21 ± 0.45   3.05 ± 0.76

Table 4.6: Statistical significance of SSM-FEM results. The p-values reflect Wilcoxon signed-rank tests between methods.

  Test                    p-value, Full   p-value, Partial
  GMM-FEM vs. SSM         10−4            10−3
  SSM-FEM vs. SSM         10−6            10−5
  SSM-FEM vs. GMM-FEM     0.45            10−4

To compare the methods, we performed additional Wilcoxon signed-rank tests on paired TREs, the p-values of which are reported in Table 4.6. From these tests, we see that GMM-FEM significantly outperforms SSM alone when the full segmentations are available (p < 10−4), but SSM is the better of the two when only partial segmentations are available (p ≤ 10−3). Upon inspection, we observed that the increased error in GMM-FEM is mainly due to a mis-assignment of the base to apex regions (i.e., flipping of the prostate). The SSM component is able to mitigate this, providing enough additional information to prevent flips caused by a lack of corresponding surface features. Comparing SSM to SSM-FEM, we see that the flexibility added by the finite element model leads to significant reductions in TRE for both full and partial surfaces (p ≤ 10−5). Finally, comparing SSM-FEM to GMM-FEM, we find that the difference in TRE is not significant at the 95% level when full surfaces are available.
However, when applied to partial segmentations, SSM-FEM does significantly outperform GMM-FEM (p < 10−4).

Finally, we investigate the impact on TREs of using full versus partial segmentations for each of the methods. For GMM-FEM, there is a significantly large increase in TRE of 2.1 mm (p < 10−3) when applied to the partial TRUS data. The soft correspondences alone are found to be insufficient for handling the large uncertainties in the segmentations. For SSM, the difference in TRE between full and partial surfaces is minor (0.39 mm, p = 0.25). Similarly, for SSM-FEM, there is no statistically significant difference in TRE (0.46 mm, p = 0.33). This suggests that when applying SSM and SSM-FEM, a full segmentation of the prostate is not necessary: registration based on the partially segmented surfaces produces similar TREs.

Computation times for the SSM, GMM-FEM and SSM-FEM methods are provided in Table 4.7. Timings are reported on a regular desktop PC with a 3.4 GHz Intel Core i7 CPU and 8 GB of RAM. Due to the increased computational complexity of both the FEM and SSM components, it is not surprising that the SSM-FEM algorithm is the slowest. However, the mean computation time is still under one minute when applied to both full and partial surfaces, making it practical for clinical use.

Table 4.7: Computation time of SSM-FEM.

  Method    Time (s), Full   Time (s), Partial
  GMM-FEM   9.54 ± 3.26      8.69 ± 2.21
  SSM       5.72 ± 0.53      5.51 ± 0.42
  SSM-FEM   50.61 ± 6.82     29.34 ± 2.55

4.5.6 Section Summary

We applied our non-rigid surface-based registration approach in the context of partial surfaces segmented from prostate biopsy images. The method uses a statistical shape model as an intermediary to help co-register the two partial surfaces. The SSM introduces geometric prior knowledge, allowing for varying shapes across a population.
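To make the SSM's role as a generative shape prior concrete, the sketch below builds a toy model: an instance is the mean shape plus a weighted sum of principal modes, with the number of modes L chosen by the 95% compactness rule used for our prostate SSM in Section 4.5.5.1. All shapes, modes, and eigenvalues here are synthetic stand-ins, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy SSM: a flattened mean surface plus orthonormal principal modes
# (stand-ins for the 1000-vertex prostate mean and its modes).
V, K = 1000, 120
mean_shape = rng.normal(size=3 * V)
modes = np.linalg.qr(rng.normal(size=(3 * V, K)))[0]   # orthonormal columns
eigvals = np.sort(rng.gamma(2.0, 1.0, size=K))[::-1]   # descending variances

# Compactness rule: keep the first L modes spanning 95% of total variance.
cumulative = np.cumsum(eigvals) / eigvals.sum()
L = int(np.searchsorted(cumulative, 0.95)) + 1

# Instance generation: mean + weighted modes, with each weight b_k kept
# within +/- 3 standard deviations (3 * sqrt(lambda_k)) of the mean.
sd = np.sqrt(eigvals[:L])
b = np.clip(rng.normal(scale=sd), -3 * sd, 3 * sd)
instance = mean_shape + modes[:, :L] @ b
```

During registration, the optimizer searches over exactly these mode weights (plus pose), so truncating at L both regularizes the shape and shrinks the search space.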
The finite-element regularizer accounts for further physical deformations, adding flexibility to the SSM.

The proposed registration algorithm converges within a minute on a regular desktop PC. A great advantage of the approach is that it estimates point correspondences, nodal displacements, shape, and pose parameters in a single minimization framework. This is one of the contributing factors to its efficiency. We also obtain a full volumetric deformation field as a by-product of the regularization. This is in contrast to many existing surface-based approaches, where deformations need to be extrapolated from the surface in a post-processing step.

In Section 4.5.5.2 we investigated the performance of our method using intrinsic fiducial pairs. We compared against two alternatives: group-wise SSM, where the influence of the biomechanical (FEM) prior is removed; and GMM-FEM, where the influence of the geometric (SSM) prior is removed. SSM-FEM yields a statistically significant improvement compared to SSM registration alone (by 1.51 mm for full surfaces, 1.14 mm for partial). This indicates that the FEM component plays an important role, accounting for differences in the deformation states of the prostate in the two observations. When comparing to GMM-FEM, the difference in TRE was only found to be statistically significant when partial surfaces are used (2.0 mm, p < 10−4). In this case, we saw that GMM-FEM sometimes resulted in mis-assignments of the base and apex. The SSM adds sufficient prior shape information to prevent this.

One shortcoming of our algorithm is that it requires both the MR and TRUS to be segmented prior to the registration. While the MR can typically be segmented ahead of time, the 3D-TRUS needs to be segmented within minutes due to the clinical requirements.
Since our algorithm is designed to handle partial surfaces and is robust to missing data, we suggest only segmenting regions in which the boundary of the anatomy can be clearly distinguished, such as the mid-gland. This can help expedite the segmentation process. For the regions that have clear boundaries, a semi-automatic segmentation method can also be useful, such as the one in Qiu et al. [194].

In this application, the method was limited to surfaces segmented from two modalities. However, it can be trivially extended to concurrently register more than two surfaces. Such a situation may arise when a patient is scheduled for a re-biopsy. We can then combine information from all previous images to obtain a better estimate of the resting prostate shape. Registration of the multiple images would allow the radiologist to avoid areas that in the previous biopsy were found to be cancer-free, and enable re-sampling of suspicious areas. The extension is accomplished by simply modifying the objective function in Equation (4.22) to sum over further observations. The more data available, the more reliable the shared SSM instance is expected to be.

While the application in this work is MR-TRUS fusion for prostate biopsies, the proposed method is not limited to this context. It is applicable as long as the following conditions are met: 1) a statistical shape model of inter-subject variability is available; and 2) each observation surface represents a snapshot of the deformed anatomy. A free-hand ultrasound probe may be used if it is tracked to allow for 3D image reconstruction prior to segmentation. However, the second condition may fail if motion of the probe induces significantly different deformations between slices.
In that case, the reconstructed image would not represent a single consistent deformation state.

Our probabilistic algorithm is based on very simple concepts: soft correspondences via Gaussian mixture models, geometric prior knowledge provided by a statistical shape model, and biomechanical regularization based on a linearized finite element model. The soft correspondences make the method robust to noise and missing data; the SSM provides geometric information when the boundaries of anatomical regions are not clear in images; and the FEM adds flexibility, allowing the shape to deform to account for soft-tissue motion between acquisitions. The method is general and robust, able to co-register any set of fully or partially segmented surfaces.

4.6 Deformable Registration of Articulated Anatomy

As a final application, we consider the case of registering a more complete biomechanical model – consisting of multiple bones, muscles, joints, and other constraints – to new data from a dissection study (Figure 4.17). The model and dissection data are derived from two different subjects, and the arms and hands are in different states of articulation. We therefore need to adjust for changes in both shape and pose. In addition, the muscle dissection data consists of a collection of fibres, which are to be registered to the corresponding muscle surfaces in the template model. These fibres may not accurately represent the entire muscle volume, and are not collected for the entire arm. Similarly, the laser-scanned bone geometry includes an incomplete humerus, does not have separation between bones, and contains extraneous data, including screws and metal plates used to maintain a fixed pose, as well as the interosseous membrane between the radius and ulna.
Thus, we require a method that can handle such data, and be robust to outliers and missing information.

In our approach, we convert the bones to flexible finite element models, allowing them to deform to account for inter-subject shape variability. To maintain the joint constraints between them, we introduce a new type of attachment constraint: a FEM-Frame attachment. This lets us define joints between the model components, allowing the arm, wrist and fingers to articulate freely in the same way that rigid bones would, while respecting any joint limits. To register muscle surfaces to the fibre data, we create a 'wrapping surface' around each muscle's collection of fibres, and use these for computing correspondences. To handle outliers and missing data, we apply our Gaussian-mixture-model soft-correspondence approach.

Figure 4.17 (a: articulated arm model; b: dissection data): Articulated model and digitization of the arm. The arm model (left) consists of 20 rigid bone structures constrained by a set of 20 joints, to which 46 finite element muscles are attached. The dissection data (right) consists of a single laser-scanned bone geometry and 19 digitized groups of muscle fibres. The goal is to register the model to the new data.

4.6.1 Articulated Registration

The registration of template articulated models to subject-specific surface data is a common approach for skeletal segmentation and registration tasks. The prior information provided by a model is invaluable for resolving ambiguities when many bones have similar shapes, such as the vertebrae in the spine [198] or the phalanges of the fingers [80].

One simple approach for registering multiple connected bones to a target is to treat the entire structure as a single deformable entity [53, 79, 136]. However, Gilles et al. [80] show that this approach can lead to unnatural deformations around joints, which are better captured by a discontinuous displacement field. Thus, it is important to explicitly incorporate points of articulation within the registration process.

For whole-body skeletal registration of micro-CT in mice, Baiker et al. [11] developed an articulated registration algorithm based on a skeletal atlas. The CT images were roughly segmented using thresholding. The atlas was hierarchically registered to the threshold segmentation, starting with the skull and progressing to the distal bones of the limbs. The result from each stage introduced articulated joint constraints for the following stages. The bones themselves, however, were considered rigid, and so do not capture subject-specific shape variations. Yip et al. [274] adapted this technique for registration of the human skeleton, and followed it with a deformable stage using optical flow to account for further shape changes, such as those caused by metastatic bone lesions. In this two-stage process, shape changes are only recovered in the second stage, which can limit the accuracy of the pose estimation.

For registering CT to ultrasound of the lumbar spine, Rasoulian et al. [198] developed a multi-body registration algorithm that applied Kalman filters for point-cloud registration of the individual vertebrae, combined with a spring-based biomechanical model to constrain the poses between them. They show that with such an approach, they can robustly account for changes in spinal curvature between pre-operative CT and intra-operative US, which is needed in image-guided percutaneous spinal injections. This technique required a subject-specific model to be created from CT for each patient. To automatically account for inter-subject variability, Rasoulian et al. [200] later used population statistics from a set of training segmentations to create a statistical multi-vertebrae model that accounts for both differences in shape and pose.
Such an approach was found to be highly robust in segmenting the spinal column in CT, but requires training data that sufficiently captures all expected variations in both shape and pose.

Gilles et al. [80] present a registration framework to account for both inter-subject shape variability and pose with a deformable model-based approach. For changes in shape, they use statistical shape models to estimate global changes in their template model, and apply an as-rigid-as-possible shape-matching registration to allow further deformations. To address changes in pose – including joint limits – they adapt the pose of the reference template following the deformation stage: a valid set of transforms is computed that best approximates the global pose of each bone, and these are used to update the pose of the template. By iteratively applying these shape and pose stages, they are able to account for both, reducing artificial deformations around joints. The technique is purely surface-based, however, and does not consider volumetric artifacts.

For full-body registration in the context of transferring anatomy between animated characters, Dicko et al. [56] developed an algorithm that maps not only the skeletal structure, but also the skin, fat, muscles, and other organs. First, the outer skin is deformably registered using shape matching [79, 175]. They then apply a harmonic field minimization to estimate an initial deformation of the internal organs and bones. To reduce unnatural bone deformations, they restrict each bone transform to a best-fitting affine one. The organs are then re-interpolated based on the transformed bones and the outer skin. With such an approach, they are able to show improved volumetric behaviour and fewer artifacts compared to Gilles et al. [80]. However, it does not explicitly consider joints or joint limits during the registration process.

Our approach is similar to that of Gilles et al.
[80], except that we have no statistical shape component. Instead, the deformation is volumetric through the use of finite-element models, and the pose estimation is computed in the same step as the deformation. We also compute correspondences using the probabilistic GMM approach rather than iterative closest point, which is more robust to missing data and outliers.

4.6.2 Template Model and Digitized Target Data

The template model of the arm is adapted from one generously provided by collaborators at the University of Auckland, obtained from the IUPS Physiome open model repository (http://physiomeproject.org) [70]. The original soft tissue geometry was extracted from axial image slices provided by the Visible Human Project [109], and the bone geometry from a laser-scanned anatomically accurate physical model (SOMSO, www.somso.de). To these surface geometries, Fernandez et al. [70] fit cubic Hermite finite element models. We adapted the arm model by converting the bones into rigid bodies, and manually added joint constraints between them, complete with joint limits. Anatomical accuracy of the joint locations and limits was verified by collaborators at the Department of Anatomy, University of Toronto.

The digitized forearm muscle fibre and bone data was provided by the Department of Anatomy, University of Toronto, collected by Li et al. [140] as part of a cadaveric study to determine muscle architectural parameters. The formalin-embalmed arm was immobilized using metal plates and screws, and all skin and fascia were removed. Digitization of muscle fascicles was carried out using a MicroScribe™ MX Digitizer (0.05 mm accuracy; Immersion Corporation, San Jose, CA). Each muscle fascicle was traced at 3–5 mm intervals between attachment sites.

Figure 4.18: Encapsulated fibre surfaces. A wrapping surface is generated around each set of muscle fibre bundles by smoothly blending together a set of cylinders and reconstructing an isosurface.
The fibres were then serially excised to reveal underlying fascicles and aponeuroses, and the process was repeated until all muscle volumes were extracted. The bone geometry was then laser-scanned using a FARO Laser ScanArm (FARO Technologies Inc., Lake Mary, FL, USA). Ethics approval for this study was received from University of Toronto Health Sciences and the University of British Columbia.

From the fibre geometries, we created an encapsulating surface for each muscle in order to register with the muscle surfaces in the template model. Similar to Lee et al. [129], we construct this wrapping surface around each collection of fibres by considering a set of blended cylinders about the fibre bundles. To build a smoothed triangulated surface, we first compute a three-dimensional signed-distance field for the cylinders [186], and smooth this field by applying the Laplacian smoothing operator. To generate the final bounding surface, we extract an iso-surface from the smoothed field using the Marching Tetrahedra algorithm [57]. For the forearm muscles, we chose the 2 mm iso-surface. Resulting encapsulating surfaces are shown in Figure 4.18. Note that since we treat each muscle separately, there will be intersections of these muscle surfaces, and gaps where neighbouring muscles should conform. We do not attempt to correct this. During registration, the delineated muscles in the template model will automatically attempt to fit to both surfaces at any shared boundary.

4.6.3 Articulated Finite Element Model

To simultaneously allow our registration process to adjust its pose as well as deform to fit the bone and muscle shapes, we need a mechanism to constrain finite element models with articulating joints between them. This would allow, for example, the arm to bend freely at the elbow – at least within specified limits – while also allowing the bones themselves to deform to fit the humerus, radius and
To accomplish this, we introduce a FEM-Frame attachment that adds a coordinate framewhich is complete driven by the motions of finite element nodes. We can then apply the usual jointconstraints between this coordinate frame and any other frame, including ones associated with rigidbodies, or those attached to any other deformable structures.FEM-Frame AttachmentsWe provide two mechanisms for attaching a coordinate frame to a finite-element model:• Material location: attach the frame to a single fixed point in material coordinates.• Procrustean: attach the frame to a weighted collection of nodes.In the first method, the coordinate frame is attached to a fixed point within a single finite elementwith material coordinates Xa. We can compute the world position of the attached frame, as wellas estimate its orientation, based on the finite element shape functions:xa =∑iφi(Xa)ni, F =∑ini[∂φi(Xa)∂X]T= RaF˜ ,where ni are the node positions, xa is the attached frame’s origin, and Ra is the attached frame’sorientation computed by performing a polar decomposition on the deformation gradient F . Thisapproach seems to work well for bodies undergoing small changes in orientation. For large rotations,however, a joint constraint attached to a single point can induce large deformations within theelement, which can lead to numerical instabilities. For a broader attachment area and a moregeneralized approach, we consider the motion of a weighted combination of nodes:xa0 =∑jwjw¯nj0, Ra0 = I, w¯ =∑jwjxa =∑jwjw¯nj , F =∑jwj [nj − xa][nj0 − xa0]T = RaF˜ ,where {wj} are a collection of weights, nj0 and nj are the rest coordinates and current coordinatesof the dependent nodes, (xa0, Ra0) is the rest state of the attached coordinate frame, and (xa, Ra)is the current one. We call this the Procrustean approach, since it is derived by performing aweighted Procrustes registration between the current node locations and their rest positions. Theweights {wj} can be determined in any number of ways. 
For the purposes of our articulated arm registration, we compute them based on a radial spline function,

\[
w(\tilde{r}) = \begin{cases} \left(1 - \tilde{r}^2\right)^2, & \tilde{r} \le 1, \\ 0, & \text{otherwise}, \end{cases}
\]

where r̃ is the radius from the joint, normalized by the influence radius. For revolute joints, the radial distance is computed from the axis of rotation, whereas for universal and spherical joints it is the normalized distance from the joint centre.

Figure 4.19 (a: bones; b: muscles; c: joints): Articulated finite element arm. Bone and muscle geometries are embedded in finite element volumes. The bones are attached to each other using joint constraints. In (c), a point force is applied to the hand in order to induce both deformation and articulation.

Finite Element Muscle and Bone Volumes

The original finite element volumes from the arm model were constructed using cubic Hermite elements [70], which our simulation system does not currently support. A direct conversion to linear or quadratic elements led to poor element conditioning. To avoid challenges in volumetric meshing, particularly for the thin, twisting structures of the muscles in the forearm, we instead use the embedded finite-element approach described in Section 3.2.3 (Figure 4.19). A separate encapsulating volume was generated for each bone, with a surrounding buffer volume of 1 cm to allow for attachments with surrounding tissue. The muscles were divided into five groups: shoulder, upper-arm, elbow, forearm, and hand. These were coupled to each other and to the bones using a collection of linear point attachment constraints (see Appendix B.3.3.1). The bones themselves were coupled together using joint constraints (Appendix B.3.3.2). For the fingers and joints in the wrist, a joint radius of 0.5 cm was applied. For the elbow, a joint radius of 2 cm was used.
As can be seen by the fingers in Figure 4.19c, these joints allow the bones to articulate freely within specified joint limits, while the bones themselves, such as the radius and ulna, are also able to deform.

4.6.4 Registration

To register our articulated arm model to the target dissection data, we apply our dynamic registration framework described in Section 4.2. The deformation, muscle attachments, and joint constraints are all automatically handled by the underlying constrained dynamic system.

Figure 4.20: Rigid vs. FEM articulated registration: (a) rigid, (b) FEM. With a rigid articulated registration of the bones in the arm, the model is unable to recover the elbow angle due to the bone shape differences of the humerus, radius, and ulna. By allowing the bones to additionally deform, we are able to better estimate the joint angles.

For computing registration forces, the muscles are individually labelled in both the FEM model and the digitized fibres, so correspondences are calculated between these pairs of surfaces as long as the target is available. Muscles that have no target – such as those in the shoulder, upper-arm, and hand – have no registration force applied, and simply follow the motion of the other attached components.

The laser-scanned target bone geometry consists of a single mesh, with no separation between bones. To help resolve ambiguities when registering to the fingers, we manually label the distal phalanges (Figure 4.22b). For all other bones in the model, the single laser-scanned geometry is used as a common target. Correspondence weights are computed using the GMM approach.

Because the template model and target data are taken from different subjects, we do not have physical requirements such as incompressibility, nor the need to use realistic elasticity parameters. In fact, we expect the volumes and shapes to change significantly for a close fit.
For simplicity, we use a corotated linear material model for all FEM components (Appendix B.2.3.1). We initially use E = 100 kPa, ν = 0.1 for the bones, and E = 0.5 kPa, ν = 0.1 for the muscles. We expect larger deformations for the muscles due to their inherently soft nature. The small value of Poisson's ratio allows significant changes in volume, especially since the initial muscle volumes extracted from the Visible Human data are quite large. We use a force-scaling parameter of β = 200, which seems to balance elasticity versus correspondence well without leading to instabilities, and w = 0.1 for the fraction of outliers, due to the noisy muscle-wrap surfaces and extraneous data on the bone geometry. Once the registration process converges, we reduce the Young's modulus for both the muscles (10 kPa) and bones (50 Pa) to obtain a better fit. These adjustments are performed interactively.

Figure 4.21: Articulated muscle fibre registration. The FEM-articulated arm is registered to the digitized muscle fibre and bone data. Muscle surfaces in the forearm are shown as translucent to reveal the internal fibres.

Registration results of the bones for both a rigid articulated registration and our deformable technique are depicted in Figure 4.20. With rigid articulation alone, the model was unable to fit well to the elbow, resulting in a different pose estimation. With the additional deformable aspect, the radius and ulna were able to adjust to the subject-specific curved shapes present in the data, which produced an elbow angle more consistent with the target.

The final registered arm is depicted in Figure 4.21. The muscles in the upper-arm and shoulder automatically follow the model's change in pose because of the underlying dynamic model, whereas the muscles in the forearm are fit to the fibre-wrapping surfaces. In Figure 4.22, we show a close-up of the forearm and hands.
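The material parameters quoted throughout this section are given as Young's modulus E and Poisson's ratio ν; corotated linear materials are typically parameterized internally by the Lamé constants. A sketch of the standard conversion (textbook linear elasticity relations, not code from the thesis):

```python
def lame_parameters(E, nu):
    """Convert Young's modulus E and Poisson's ratio nu to the Lame
    parameters (mu, lam) used by linear and corotated-linear materials.
    Valid for -1 < nu < 0.5; lam diverges at the incompressible limit."""
    mu = E / (2.0 * (1.0 + nu))                        # shear modulus
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))     # first Lame parameter
    return mu, lam
```

Note that the small ν = 0.1 used here keeps λ modest relative to μ, which is what permits the large volume changes described above.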
There is a difference in finger pose estimation with and without labelling of the distal phalanges. Labelling was found to be necessary to resolve ambiguities between the individual fingers, since the bones are similarly shaped. The labelling itself was performed simply by placing a single point at each finger-tip and colouring any faces within a radius of 1.5 cm.

We can adjust between volumetric regularization and closeness-of-fit by varying the Young's modulus of the muscles (Figure 4.22c–e). By changing it from E = 500 Pa to E = 50 Pa, we can more closely surround the fibres. However, this causes the muscles in the hand – which have no target – to be pulled excessively down towards the forearm. The cause is that the digitized fibres do not contain the tendon geometries that extend into the wrist and hand, but the finite element muscle volumes do. These tendons are therefore artificially pulled back towards the forearm fibres. We can correct this by increasing the stiffness of the muscles in the hand back to 500 Pa to resist this pull.

Figure 4.22: Forearm registration with varying parameters: (a) unlabelled, (b) labelled, (c) 500 Pa, (d) 50 Pa, (e) 50/500 Pa. Without labelling of the distal phalanges, the fingers in (a) are attracted to each other. This is corrected by adding an attractor at the tips (b). By varying the stiffness parameters of the muscle, we can achieve different trade-offs between volume restoration and correspondence fit (c, d). At E = 50 Pa, the muscles in the hand exhibit excessive stretching. We can restore their shape by keeping E = 500 Pa in the hand, and E = 50 Pa elsewhere.

4.6.5 Improving Computational Efficiency

The bottleneck of the registration algorithm is in computing the correspondence weights. In our original framework, a unique weight is computed for each source-target vertex pair, which results in a total computational complexity of O(MN).
For the surface geometries considered in our arm registration task (the target bone mesh has 20k vertices, and the model 30k vertices), the full computation of \{w_{mn}\} takes 5.2 minutes on average. This would make the approach impractical, especially since we may require hundreds of iterations before we reach convergence. For comparison, assembling the system matrices and solving for the next set of displacements in this example takes only 0.88 s on average. We need a way to reduce the computational complexity.

From the Gaussian mixture probability distribution in Equation (4.4), we can see that the contribution of a particular Gaussian centred at x_i to the correspondence probability at target location y will be negligible if

1. the target point is far enough away that the probability of outliers dominates (i.e. \frac{w}{N} \gg \frac{1-w}{M}\, p(y \mid x_i)), or
2. there exists another x_j that is significantly closer to the target (i.e. p(y \mid x_j) \gg p(y \mid x_i)).

The first condition allows us to limit the global search radius. To ignore any Gaussians that contribute less than 10% of the background outlier distribution, we need only consider source points x_i that satisfy

\|y - x_i\|^2 \;\le\; r_w^2 \;=\; 2\sigma^2 \log\!\left(\frac{10\,(1-w)\,N}{w\,M\,\sqrt{(2\pi\sigma^2)^3}}\right). \qquad (4.25)

This gives us a maximum radius, dependent on w, beyond which we can safely assume the target point is an outlier. The second condition allows us to terminate a search after we have found the nearest K source vertices such that the (K+1)-th vertex contributes negligibly to the mixture probability distribution p(y). To place a stronger bound on computational complexity, we additionally enforce a maximum of K = K_max points to consider. To find the K source vertices nearest to a particular target vertex that also fall within the search radius r_w, we use an axis-aligned bounding-box (AABB) tree [253] and perform a K-nearest-neighbour search. Initial construction of the AABB tree is O(M log M), and updates at each iteration are amortized O(M).
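The truncated correspondence computation can be sketched with a k-d tree standing in for the AABB tree, since scipy's `cKDTree` supports a K-nearest query with a distance upper bound. This is an assumed reimplementation, not the thesis's code: the mixture uses CPD-style coefficients ((1−w)/M per Gaussian, w/N for the outlier term), and the 10%-of-outlier cutoff below follows our reading of the conditions above.

```python
import numpy as np
from scipy.spatial import cKDTree

def truncated_gmm_weights(source, target, sigma, w_out=0.1, k_max=30):
    """Sparse GMM correspondence weights: for each of the N target points,
    only the K <= k_max nearest of the M source Gaussians inside a cutoff
    radius contribute. Returns (weights, idx), both (N, K); idx == M marks
    unused neighbour slots (beyond the cutoff radius)."""
    M, N = len(source), len(target)
    k = min(k_max, M)
    gauss_norm = (2.0 * np.pi * sigma ** 2) ** 1.5
    # Ignore Gaussians contributing < 10% of the outlier floor (assumed form).
    floor = 0.1 * w_out * M / ((1.0 - w_out) * N)
    arg = 1.0 / (gauss_norm * floor)
    r_w = sigma * np.sqrt(2.0 * np.log(arg)) if arg > 1.0 else 0.0

    tree = cKDTree(source)                          # O(M log M) to build
    dists, idx = tree.query(target, k=k, distance_upper_bound=r_w)
    if k == 1:                                      # keep 2-D shapes
        dists, idx = dists[:, None], idx[:, None]

    weights = np.zeros((N, k))
    hit = np.isfinite(dists)                        # misses come back as inf
    weights[hit] = np.exp(-dists[hit] ** 2 / (2.0 * sigma ** 2)) / gauss_norm
    # Posterior responsibilities, with the outlier term in the denominator.
    denom = (1.0 - w_out) / M * weights.sum(axis=1, keepdims=True) + w_out / N
    weights *= (1.0 - w_out) / M / denom
    return weights, idx
```

Each target row sums to at most one, with the remainder absorbed by the outlier component; distant targets receive all-zero rows rather than spurious correspondences.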
The search for the K nearest neighbours has a complexity of approximately O(K log M) [74]. Thus, we can reduce the computation of the correspondence weights to O(M + NK log M).

Note that the smaller the value of K, the more susceptible the estimated probability distribution is to discrete jumps, which can make the registration less stable and more prone to local minima. In the extreme case of K = 1, we have essentially reverted to the iterative closest point algorithm. We find that limiting K to approximately 30 allows a decent trade-off between speed and stability. In the case of registering the bone geometries, this results in a 100× speed-up, with an average computation time of 3.1 s per iteration. The total registration time for the arm was 23 minutes.

4.6.6 Section Summary

We applied our dynamic registration framework to register a complete hybrid model of the arm to muscle fibre and bone data acquired from a dissection and digitization study. To create target surfaces for the muscle volumes, we wrap the digitized fibre geometry with a smoothed surface using a blended-cylinders approach. During registration, we need to account for both changes in shape and changes in pose. For changes in shape, we convert the bones to finite-element models so they can deform throughout the process. However, we still want the arm to bend freely at the joints to account for gross changes in pose. We therefore introduce a FEM-frame attachment framework, allowing us to apply joint constraints between collections of nodes.

We also introduced techniques for reducing the computational cost of computing Gaussian mixture model-based correspondence weights, which is currently the major bottleneck. By limiting the search radius around each target point and only considering the nearest K Gaussian sources, we were able to improve speed for the arm registration by two orders of magnitude.
This complexity improvement can be applied to any GMM application, not just this registration of the arm. We found this step necessary to achieve interactive registration times, allowing us to adjust elasticity parameters manually to improve the registration fit.

The result of the registration process is a subject-specific arm model in a pose that matches the target digitized dataset. Not only does this model allow us to label and separate all the individual bones in the target geometry, but it also brings along estimates of the joint locations and limits, as well as muscle geometries not originally present in the data. Furthermore, we can now use the digitized fibres to augment the model by incorporating the fibre architecture information into the muscle volumes, which was previously missing. We will discuss the importance of this, as well as our approach for including this detailed information in the model, in the next chapter. By converting the bones back into rigid bodies, we have generated a hybrid dynamic model which can be used for subject-specific simulations.

We aim to use these same registration techniques to construct subject-specific models for other parts of the anatomy, including the upper airway [5], which consists of a large collection of both hard and soft components, as well as various types of joints and attachment constraints. Our dynamically-driven model-based approach is generic, and provides a systematic and robust method for creating hybrid models from templates for use in dynamic simulations.

4.7 Chapter Summary

In this chapter we addressed challenges relating to the fusion of data from multiple sources, including between multiple fully or partially segmented images, as well as between a dynamic template model and target surface data.

In our solution, we discussed the use of dynamic simulation to guide and regularize the registration task.
We first introduced our dynamic registration framework in a general context: given a pair of source and target surfaces, and a registration transform defined by a constrained dynamic model, we determine the optimal parameters by balancing between

Similarity: maximizing the likelihood that points on the target surface are drawn from a Gaussian mixture model defined by the source;

Regularization: minimizing the net residual energy of the dynamic model.

The Gaussian mixture model provides robust handling of noisy or incomplete target data, and the dynamic model allows us to incorporate prior physical and biomechanical knowledge of the underlying anatomy.

We then simplified our registration framework for the case of a single unconstrained finite element model (GMM-FEM) and applied it in Section 4.3 to register MRI and TRUS images in the context of prostate biopsies and prostatectomies. We showed that both the GMM correspondences and the FEM regularization are necessary when dealing with unreliable or partial surface data, and we were able to reduce target registration error compared to similar state-of-the-art methods. This work has been published as [J2], reproduced here with permission (© IEEE 2015), though note that we have redefined β here to relate more directly to the scaling of registration forces, and to be invariant to the number of target points.

In Section 4.4, we extended our registration framework to incorporate population statistics in the form of a statistical shape model (SSM-FEM). This allowed us to estimate a subject-specific dynamic model from a set of partial observations. We again applied the technique in the context of prostate biopsies, and showed that the SSM can play a crucial role when the prostate boundary is not clearly visible in either MRI or TRUS.
This work has been published as [J1], reproduced here with permission (© IEEE 2015).

Finally, in Section 4.6, we introduced the concept of adding joint constraints between finite-element models, which allows us to adjust a full hybrid dynamic model to account for both subject-dependent changes in shape and changes in pose. We also discussed how to improve the computational efficiency of determining GMM weights by limiting the correspondence search radius, which may be necessary for large models. We applied this articulated registration in deforming a complete model of the arm to fit new muscle fibre and bone data acquired in a dissection and digitization study. This work was presented at CMBBE 2015 (Montreal), where it placed third amongst student research presentations [A2].

The results of a model-based registration can be used to fuse image data in a common reference space, as we have done for the prostate, as well as to create a subject-specific model from a template, as in the articulated arm. The newly registered data can then be incorporated back into the model, allowing for detailed subject-specific simulations useful for estimating functional outcomes, such as those following surgery or treatment.

CHAPTER 5
Detail: Structural Intricacies and Missing Information

For a model to be useful, it needs to capture all relevant details that significantly impact the function to be studied. For example, muscles are known to contract along the direction of their internal fascicles, and this information has a direct impact on generated forces and motion. Thus, in applications that rely on the contraction of muscles, we will need to include some type of architectural description.
Similarly, tendinous tissue acts as a conduit for force transmission between muscles and bones, which can be exploited by multipennate muscles to gain mechanical advantage. In applications that rely on strong force transmission, these properties may be important to include. When modelling speech or swallowing, any artificial holes in the model will allow the air or fluid to escape, and the model will fail to adequately predict function. Thus, we may need to ensure there is a watertight lining for the air or fluid to flow through. The challenge, then, is to include all these relevant details in a practical way, even if we do not necessarily have all the required information. These are the topics discussed in this chapter: how to efficiently incorporate muscle fibre, tendon, and surface detail into hybrid biomechanical models.

In Section 5.1, we discuss how to extract muscle architectural information from a dense dataset acquired through a dissection and digitization study. We use these as muscle-specific templates which can be adapted to subject-specific geometries. We also demonstrate the importance of such fields when analyzing the contraction behaviour of muscle tissue.

Thin tendinous structures, such as the sheets of aponeuroses in the masseter, can pose modelling challenges when constructing traditional finite element meshes, necessitating a high resolution to adequately resolve them. This can lead to impractical simulation times. In Section 5.2, we examine how to efficiently include the properties of these thin sheets within a muscle, and their importance – in conjunction with the detailed fibre architecture – for force transmission in the masseter.
We also demonstrate how we can map the muscle to another surface, allowing us to transfer the tendon properties when data may not be available.

Finally, in Section 5.3, we address the issue of covering model gaps: regions where there is either a missing tissue component or where we wish to ensure an airtight seal to allow computation of air or fluid flow. For this we introduce a unified skinning technique, allowing a closed surface geometry to be driven by the dynamics of other multibody or deformable components. We briefly provide examples where this technique has been useful in the context of simulating speech and swallowing.

5.1 Detailed Muscle Fibre Architecture in Finite Element Models

Finite element models of muscle are able to capture both complex muscle geometry and internal fibre architecture. However, due to an inability to see fascicles with conventional imaging, most current models assume simple fibre distributions. These are often registered from a minimal set of contrived templates. To improve realism, we propose to use detailed fibre fields, digitized from a human cadaver, for defining the internal muscle structure in our models. We present a workflow for registering these detailed fibre fields to subject-specific muscle geometries, and for extracting orientation information for use with a muscle-specific constitutive law. The importance of this detail is emphasized with two examples of muscle groups found in the forearm: the extensor carpi radialis longus and the flexor digitorum superficialis. We show that simplified fibre fields can lead to a 10-20% difference in predicted muscle force, and equally significant differences in contracted geometries.

5.1.1 Finite Element Muscles

Finite element (FE) muscle models allow for a detailed analysis of internal tissue deformation, contact conditions, and force transmission [21].
They differ from regular FE models in that they additionally incorporate a 3D arrangement of muscle fibres and a constitutive law that accounts for their ability to contract. Fibre architecture has been shown to significantly affect muscle speed and force output [141]. Since contraction occurs in the direction of the fibres, the specific architecture also affects a muscle's geometric shape as it is activated. For these reasons, we believe it is important to include realistic fibre fields when creating models to study muscle physiology.

Unfortunately, muscle fibre architecture is difficult to measure accurately in vivo using conventional imaging techniques. There have been some attempts to extract fibre structure using Diffusion Tensor Imaging [133], but the data is extremely noisy, so the methods rely heavily on assumptions of smoothness and uniformity. Fernandez et al. [71] extracted muscle direction information from DTI of the human triceps surae muscle in the lower limb. To reduce noise, they acquired images using 20 directions instead of the minimum 6 required to reproduce a symmetric tensor field. Other imaging approaches include identifying muscle fascicle origin and insertion sites, such as with 3D ultrasound [3], and generating a fibre field between them. Lines of action are typically assumed to run linearly between these points of attachment. These models may fail to reproduce the twisting or interleaving intricacies throughout the muscle volume that might impact its true behaviour.

Some of the first work to incorporate real muscle fibre information into a volumetric model was done by Nielsen et al. [180]. They carefully dissected away layers of a canine heart, measuring the surface geometry and fibre angles at a sampling of points using a mounted probe. From this point-set, they reconstructed a volumetric mesh with an accompanying interpolated fibre field. This model was later used to simulate heart contraction dynamics [177].
Similar procedures have been performed for other non-human muscles, such as the cat medial gastrocnemius [130]. In these studies, fibre directions were measured at a relatively sparse set of points and came from the same source as the muscle geometries, so there was no need for registration.

Blemker and Delp [21] led the way in human FE muscle modelling, developing models to simulate the human hip that included fibres registered from a set of four simplified templates. The fibre templates were cube-shaped, with smooth distributions representing four generic muscle types. These were then manually deformed to fit the target geometry. The fibre orientations were included in the constitutive model [22], which added the ability of the muscle to contract along the direction of the fibres. Webb et al. [261] used a similar process for muscles in the shoulder. Röhrle and Pullan [205] also used a fibre template registration technique, applied to a cubic Hermite mesh of the masseter, to study forces generated during mastication. The reason for using these simple templates was that there was no method for obtaining true fibre distributions for the muscles of interest.

Lu et al. [154] developed an alternative method that combines the finite element method with a non-uniform rational B-spline (NURBS) solid representation. Fibre directions are computed using tangent information within the NURBS solid. One of the benefits of their method is that the FE equations are applied directly to the NURBS description, so there is no need for volumetric mesh generation. Similar to the template mesh approach, the resulting fibre fields are smooth and follow the curvature of the muscle.

There have been recent developments in techniques for measuring highly detailed muscle fibre architectures in