UBC Theses and Dissertations
A feasibility study of template-based subject-specific modelling and simulation of upper-airway complex
Tang, Keyi (2017)


A Feasibility Study of Template-Based Subject-Specific Modelling and Simulation of Upper-Airway Complex

by

Keyi Tang

B.Eng., University of Electronic Science and Technology of China, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Electrical & Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

May 2017

© Keyi Tang 2017

Abstract

The upper-airway complex is involved in a number of life-sustaining functions, such as swallowing, speech, breathing and chewing. Disorders associated with these functions can dramatically reduce the quality of life of the sufferers. Biomechanical modelling is a useful tool that can bridge the gap between human knowledge and medical data. When tailored to individual patients, biomechanical models can augment the imaging data to enable computer-assisted diagnosis and treatment planning.

This thesis introduces a model-registration framework for creating subject-specific models of the upper-airway complex based on 3D medical images. Our framework adapts a state-of-the-art comprehensive biomechanical model of the head and neck, which represents the generic upper-airway anatomy and function. By morphing this functional template to subject-specific data, we create upper-airway models for particular individuals. In order to preserve the functionality of the comprehensive model, we introduce a multi-structure registration technique, which can maintain the spatial relationship between the template components and preserve the regularity of the underlying mesh structures. The functional information, such as the muscle attachment positions, joint positions and biomechanical properties, is updated to stay relevant to the subject-specific model geometry. We demonstrate the functionality of our subject-specific models in biomechanical simulations.

Two illustrative case studies are presented. First, we apply our modelling methods to simulating the normal swallowing motion of a particular subject based on the kinematics (of the airway boundary, jaw and hyoid) extracted from dynamic 3D CT images. The results suggest that our model tracks the oropharyngeal motion well, but has limited ability to reproduce the hyolaryngeal movements of normal swallowing. Second, we create two speaker-specific models based on 3D MR images, and perform personalized speech simulations of the utterance /@-gis/. The models reproduce the speech motion of the tongue and jaw recorded in tagged and cine MRI data with sub-voxel tracking error, and predict the muscular coordination patterns of the speech motion.

This study demonstrates the feasibility of using template-based subject-specific modelling methods to facilitate personalized analysis of upper-airway functions. The proposed model-registration framework provides a foundation for developing a systematic and advanced subject-specific modelling platform.

Lay Summary

After surgical treatment and radiotherapy, oral-cancer patients may have difficulty with certain life-sustaining activities, such as swallowing, breathing and speech. Biomechanical models provide a means to analyze the cause and effect of these upper-airway dysfunctions, and to simulate the surgical changes in bone and muscle structures for prediction of treatment outcomes. These clinically-relevant applications require the models to include as much information as possible from a specific subject. However, creating biomechanical models for the coupled upper-airway system relies heavily on expert interaction.
The slow process of model creation prevents us from simulating large numbers of individual cases. In order to ease the modelling effort, this study explores the use of registration methods for model creation: morph a predefined template model to match given subject data. Personalized swallowing and speech simulations are performed to demonstrate the potential of the template-based subject-specific modelling methods for clinically-relevant analysis of upper-airway functions.

Preface

The research presented herein was approved by the UBC Clinical Research Ethics Board (certificate numbers H16-00016 and H16-01546).

Most of the contributions and ideas described in Chapter 3 have been presented previously in the publication [P1]. I was the primary author and main contributor to the design, implementation and testing of the methods developed in [P1], under the supervision of Dr. Sidney Fels. Dr. Negar M. Harandi assisted with the analysis of the results, and gave editorial feedback on the paper. Dr. Yoko Inamoto provided the CT data. Dr. Maureen Stone and Dr. Jonghye Woo provided the MRI data. The OPAL project provided the template upper-airway model.

Chapter 4 has not yet been published. I developed the subject-specific upper-airway model, segmented the bone surfaces, proposed and implemented the swallowing simulation methods, and analyzed the simulation results. Dr. Yoko Inamoto provided the CT data. Dr. Andrew Ho provided the segmentation of the moving airway boundary. Dr. John Lloyd provided the source code of the forward-dynamics simulation. Dr. Ian Stavness provided the source code of the inverse solver.

Chapter 5 has been partially published in [P2]. I developed the speaker-specific models, proposed and implemented the speech simulation methods, and analyzed the simulation results, under the supervision of Dr. Sidney Fels. Dr. Negar M. Harandi was the primary author of the publication [P2], and assisted with the analysis of the results. Dr. Maureen Stone and Dr. Jonghye Woo provided the MRI data. Dr. John Lloyd provided the source code of the forward-dynamics simulation. Dr. Ian Stavness provided the source code of the inverse solver.

Journal Manuscripts in Review

[P1] Tang K, Harandi NM, Fels S. 2017. Subject-specific modelling of upper-airway complex. Computer Methods in Biomechanics and Biomedical Engineering (CMBBE). Submitted.

Peer-Reviewed Conference Papers in Review

[P2] Tang K, Harandi NM, Woo J, Fakhri EG, Stone M, Fels S. 2017. Speaker-specific Biomechanical Model-based Investigation of a Simple Speech Task based on Tagged-MRI. Interspeech. Submitted.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
Dedication
1 Introduction
  1.1 Contributions
  1.2 Outlines
2 Background and Previous Work
  2.1 Data Acquisition and Measurement
    2.1.1 Cadaver Studies
    2.1.2 Fluoroscopy and Computed Tomography
    2.1.3 Magnetic Resonance Imaging
    2.1.4 Elastography
    2.1.5 Electromyography
  2.2 Biomechanical Modelling of Upper-Airway Complex
    2.2.1 Functional Reference ANatomical Knowledge
    2.2.2 Combined Multi-Body Finite-Element Simulation
  2.3 Subject-Specific Modelling
  2.4 Biomedical Applications
    2.4.1 Intralaryngeal Prosthesis Implantation
    2.4.2 Intensity Modulated Radiation Therapy
    2.4.3 Hemiglossectomy and Hemimandibulectomy
    2.4.4 Maxillofacial Surgery
  2.5 Discussion and Conclusion
3 Subject-Specific Modelling of Upper-Airway Complex
  3.1 Multi-Structure Registration
    3.1.1 Registration Framework
    3.1.2 Correspondence Computation
    3.1.3 As-Similar-As-Possible Deformation
    3.1.4 Structure-Preserving Free-Form Deformation
    3.1.5 Experiment
  3.2 Subject-Specific Modelling
    3.2.1 Medical Image Data
    3.2.2 Template Model
    3.2.3 Modelling Framework
    3.2.4 FE Quality Control
    3.2.5 Functionality Transfer
    3.2.6 Experiments
  3.3 Discussion and Conclusion
4 Data-Driven Swallowing Simulation
  4.1 Kinematic Data
  4.2 Inverse Simulation
  4.3 Results
  4.4 Discussion and Conclusion
5 Data-Driven Speech Simulation
  5.1 Medical Dataset
  5.2 Speaker-Specific Modelling
    5.2.1 Inverse Simulation
  5.3 Speech Simulation
    5.3.1 Definition of the Trajectories
    5.3.2 Results
  5.4 Discussion and Conclusion
6 Conclusion
  6.1 Summary
  6.2 Future Directions
  6.3 Concluding Remarks
Bibliography
Appendix
A Muscles and Ligaments in FRANK
  A.1 Muscles
  A.2 Ligaments

List of Tables

2.1 Summary of FRANK components
2.2 A list of deformable (FEM) components in FRANK
3.1 Summary of segmentation
3.2 Summary of tongue FE mesh
3.3 Summary of pharynx FE mesh
3.4 Subdivision of FRANK
3.5 Registration errors within the subject-specific models
3.6 Summary of FE mesh qualities
4.1 The peak tracking error during the oral transport phase
4.2 The peak tracking error during the pharyngeal phase
5.1 Speaker information
5.2 Summary of segmentation
5.3 Registration errors within the speaker-specific models
5.4 Tracking errors of the template, Speaker A and Speaker B
5.5 Tongue tracking error (mm) over 40 control points
5.6 Jaw tracking error (mm) over 20 control points
A.1 Jaw and hyoid muscles in FRANK
A.2 Face muscles in FRANK
A.3 Tongue muscles in FRANK
A.4 Soft palate muscles in FRANK
A.5 Pharynx muscles in FRANK
A.6 Larynx muscles in FRANK
A.7 List of all ligaments in FRANK

List of Figures

1.1 Mid-sagittal view of the human upper-airway
2.1 Digitized fibres from a cadaveric forearm
2.2 Midsagittal and axial MSCT images of swallowing
2.3 Real-time MRI captured during swallowing
2.4 A FE tongue model developed from a DTI data set
2.5 Illustration of FRANK
2.6 Illustration of all rigid components in FRANK
2.7 Illustration of all deformable components in FRANK
2.8 Diagrams of two subject-specific modelling categories
2.9 Illustration of mesh locking
2.10 Virtual intralaryngeal prosthesis implantation
3.1 Illustration of the mechanism of multi-level FFD
3.2 Proposed multi-structure registration framework
3.3 Illustration of the multi-structure registration pipeline
3.4 Iterative closest point (ICP) correspondence
3.5 Notation for triangular mesh
3.6 Illustration of the registration of the synthetic data
3.7 Registration error measured over 300 iterations
3.8 Mesh quality (η) measured over 300 iterations
3.9 Medical images used in this study
3.10 Proposed framework for subject-specific modelling and simulation
3.11 Illustration of human intervention
3.12 Example of element quality distribution of a FE mesh
3.13 Illustration of the quality-controlling mechanism
3.14 Segmentation of the CT and MRI images
3.15 Registration results of the tongue model
3.16 Registration results of the pharynx model
3.17 Comparison of the resulting element quality by our methods and MMRep
3.18 Histograms of the element qualities
3.19 Illustration of poor-quality elements
3.20 Subject-specific models of upper-airway complex
3.21 Histograms of the element qualities of three models
3.22 Comparison of the element quality of the resulting models and the template models
3.23 View of speech postures of three models
3.24 A comparison of muscle activations
4.1 Videofluoroscopy of deglutition
4.2 Illustration of swallowing modes reported in the literature
4.3 Illustration of segmentation and CT images
4.4 Illustration of tracking markers and target positions
4.5 Methods of determining the target positions
4.6 Tracking error of the swallowing simulation
4.7 Mid-sagittal views of the inverse simulation of swallowing
4.8 Oblique views of the inverse simulation of swallowing
4.9 Top views of the inverse simulation of swallowing
4.10 Activation levels of the extrinsic and intrinsic tongue muscles
4.11 Activation levels of the suprahyoid, mastication and palate muscles
4.12 Activation levels of the laryngeal and pharyngeal muscles
4.13 The model fails to reproduce the target hyoid posture at t = 0.9 s
4.14 The model fails to raise the corniculate cartilage at t = 0.9 s
5.1 Illustration of two previously reported models for speech simulation
5.2 Mid-sagittal slice of cine MR images of the speakers
5.3 Speaker-specific models of upper-airway complex
5.4 Comparison of the element quality of the speaker-specific models and the template models
5.5 Illustration of the inverse simulation
5.6 Illustration of the end postures and the inverse-predicted muscle activations
5.7 Illustration of results of the speech simulations
5.8 Estimated muscle activations
5.9 Illustration of erroneous tracking using tagged MRI
6.1 Current subject-specific modelling framework
6.2 Target subject-specific modelling framework
List of Abbreviations

1D      1-Dimensional
2D      2-Dimensional
3D      3-Dimensional
ANP     Average Nodal Pressure
CBCT    Cone Beam Computed Tomography
CT      Computed Tomography
DoF     Degrees of Freedom
DTI     Diffusion Tensor Imaging
EMG     Electromyography
FB      Fibre Bundle
FE      Finite Element
FEM     Finite Element Model
FOV     Field Of View
FPD     Flat Panel Detector
FRANK   Functional Reference ANatomical Knowledge
HMPS    Hamiltonian Moving Particle Semi-implicit
HP      Harmonic Phase
MR      Magnetic Resonance
MRI     Magnetic Resonance Imaging
MSCT    Multi-Slice Computed Tomography
OSA     Obstructive Sleep Apnea
PCSA    Physiological Cross Section Area
SNR     Signal-to-Noise Ratio
SWE     Shear Wave Elastography
TMJ     Temporomandibular Joint
TR      Repetition Time

Acknowledgements

I would like to express my deepest appreciation to my supervisor, Dr. Sidney Fels, for his support, guidance and mentorship. Without his encouragement and help in my graduate studies as well as my personal life, this thesis would not have been possible.

I also received tremendous support from a number of fine researchers throughout my master's studies. Thanks must go to Dr. Ian Stavness for bringing me into this research topic, in which I had no background. Many thanks to Dr. Negar M. Harandi for sharing her knowledge with me during many discussions and a rewarding collaboration on the publication. Dr. Peter Anderson deserves special thanks for providing assistance with upper-airway modelling. I would like to thank Dr. John Lloyd for his great support with ArtiSynth. Also, I would like to thank Dr. Yoko Inamoto, Dr. Maureen Stone and Dr. Jonghye Woo for generous data sharing.

Thanks also go to my colleagues at the Human Communication Technologies (HCT) Laboratory for their support of my research and lab life. Special thanks go to Andrew Ho and Antonio Sánchez for many useful conversations and assistance with the project.

I want to say a heart-felt thank you to my friends and family for their love and support throughout this journey. Special thanks are owed to my family for passing their life-long motivation and inspiration on to me. I am also grateful to my girlfriend, Mengyu, for her daily companionship and encouragement.

Dedication

To Lei

Chapter 1
Introduction

The human upper-airway complex is composed of rigid structures including the jaw and hyoid bone, highly deformable muscle-activated tissues such as the tongue, soft palate, pharynx and larynx, an intricate arrangement of many muscles and ligaments, and various constraint situations. It is involved in a wide range of life-sustaining functions, such as breathing, swallowing, and speech production. A number of associated dysfunctions impose adverse effects on the quality of sufferers' lives. Dysphagia, or swallowing disorders, is a serious concern for stroke patients, as aspiration-related pneumonia leads to 40,000 deaths per year in North America [97]. Obstructive Sleep Apnea (OSA) is a serious disorder involving pauses in breathing during sleep, causing arousal and daytime sleepiness [99], and afflicting 12 million people in the United States [136]. Articulation, fluency, and voice disorders afflict more than 7.5 million people in the United States [58].

Due to the complexity of the head and neck anatomy, the inaccessibility of many biomechanical and physiological parameters (e.g. muscular forces and activations) presents significant barriers for traditional data collection methods to provide sufficient insight into normal and abnormal upper-airway functions.
The recent advance of computational modelling and simulation techniques greatly facilitates and furthers our understanding of how the body functions. Biological simulations enable the transition from visualizing dynamic movements and measuring physiological parameters, to predicting motions, estimating unobservable variables and exploring motor control mechanisms.

Figure 1.1: Mid-sagittal view of the human upper-airway. © Elsevier (2003), Thibodeau and Patton [144], adapted with permission.

As large inter-subject variation exists in upper-airway anatomy and physiology, generic computational models – which are suitable for exploring general biomechanical and physiological principles – are not necessarily representative of specific patients. Subject-specific modelling aims to represent the geometrical, mechanical and physiological information of particular subjects; hence it enables personalized simulations best suited for clinical usage. Subject-specific modelling and simulation of the upper-airway system have a wide range of viable biomedical applications. For example, subject-specific upper-airway models can be used to predict immeasurable biomechanical quantities that may correlate with dysfunctions, such as OSA, dysphagia and speech pathologies, and therefore enhance diagnosis protocols. Subject-specific modelling and simulation can also aid in the planning of treatment and surgical interventions. Personalized biomechanical models can be used to evaluate alternative treatment paths, such as different surgical procedures, or to tailor a particular treatment to a specific patient, such as customizing dosages in radiation therapy. Thus, surgical procedures can be iteratively improved and tailored to a specific patient with little cost and risk. In addition, subject-specific models can also help to guide patient rehabilitation. Given a model of a specific patient's reconstruction, simulation of different muscle activation patterns may illuminate new motor strategies to compensate for the altered musculoskeletal structure [136].

Subject-specific modelling of the head and neck has been applied in computer-assisted surgeries, such as maxillofacial surgery [23, 49, 108, 153] and jaw reconstruction surgery [156]. However, current subject-specific applications are constrained by the limited functional fidelity of biomechanical models. Fully reproducing upper-airway functions, such as swallowing and speech production, requires coordination of multiple motor components inside the oral cavity, nasopharynx, oropharynx and laryngopharynx. Creating a comprehensive model that couples multiple upper-airway organs relies heavily on expert interaction. The slow process of model creation prevents us from simulating large numbers of individual cases. The present thesis investigates the feasibility of applying registration techniques to facilitate subject-specific modelling and simulation of the upper-airway complex.

1.1 Contributions

This thesis aims to develop biomechanical modelling tools that allow personalized analysis of the various life-sustaining functions and dysfunctions associated with the human upper-airway system. We first present a review of data acquisition techniques, state-of-the-art upper-airway biomechanical models and subject-specific modelling methods. Then, we identify the challenges facing standard approaches for creating comprehensive upper-airway models for specific subjects.
Next, we explore the use of model-registration methods to ease the modelling effort: our methods seek to register a predefined functional upper-airway template to the subject data. We demonstrate the feasibility of our methods by creating two subject-specific models of the upper-airway complex based on two medical data sets. Furthermore, we apply our subject-specific modelling methods to simulating two coupled upper-airway functions: swallowing and speech. Finally, future directions of subject-specific modelling are proposed. The main contributions of the present thesis are summarized as follows.

Developed a model registration framework for subject-specific modelling of the upper-airway complex.

• We identified a model-registration strategy for creating comprehensive upper-airway models for particular subjects: register a functional template to the partial segmentation of the anatomical structures extracted from 3D medical images. In order to preserve the functionality of the registered model, three types of regularity need to be maintained:
  1. Inter-Component Regularity: Maintain the spatial relationship (including connectivity, topology, relative posture and size) between model components.
  2. Intra-Component Regularity: Preserve the regularity of the underlying discretization structures of the template model during registration.
  3. Functional Regularity: Keep the functional information (including coupling attachments between components, muscle attachments and biomechanical properties) similar to the template but relevant to the new model geometry.
• We created a novel multi-structure registration technique, which can preserve both the inter- and intra-component regularity.
• Based on the multi-structure registration technique, we developed a registration workflow for creating subject-specific upper-airway models from medical images.
• We demonstrated the feasibility of our template-based subject-specific modelling methods by creating two comprehensive upper-airway models for particular subjects, and tested their functionality in a set of biomechanical simulations.

Demonstrated the potential of the proposed subject-specific modelling methods for personalized analysis of swallowing biomechanics. We enabled a personalized simulation of realistic normal swallowing using one of the developed models. Our model tracked the oropharyngeal motion well, while having limited ability to reproduce the hyolaryngeal movements of normal swallowing.

Demonstrated the potential of the proposed subject-specific modelling methods for personalized analysis of speech production. Using the proposed model-registration methods, we created two speaker-specific models and enabled personalized speech simulations of the utterance /@-gis/. The models reproduced the speech motion of the tongue and jaw based on speaker-specific tagged and cine MRI data, and predicted the corresponding muscular coordination patterns, which agreed well with speech-expert knowledge.

1.2 Outlines

The rest of this thesis is organized as follows. Chapter 2 reviews the state-of-the-art data acquisition and measurement tools, head-and-neck models and subject-specific modelling methods, and lists a few closely related biomedical applications of subject-specific upper-airway models. Chapter 3 introduces the subject-specific modelling methods and demonstrates two subject-specific models of the upper-airway complex.
In Chapter 4, we enable a swallowing simulation using a developed model based on a dynamic CT recording of the corresponding subject. In Chapter 5, we create two speaker-specific models and enable personalized speech simulations based on the cine and tagged MR recordings of the speakers. Chapter 6 summarizes the thesis contributions and describes directions for future work. Appendix A summarizes all of the muscles and ligaments included in the template upper-airway complex model (FRANK).

Chapter 2
Background and Previous Work

Subject-specific models would be useful for assessing biomechanical dysfunction, comparing treatment options when more than one possibility exists, and predicting clinical outcomes without risking patients. This chapter reviews the tools and techniques used to create and validate subject-specific models of the upper-airway system, and then lists a few potential biomedical applications of the subject-specific models.

Current advancement of in vivo data acquisition techniques has enabled observation of anatomical structure, recording of physiological data and measurement of tissue properties of the upper-airway complex. Such observational data has been widely applied in the field of biomechanical modelling. However, these techniques are challenged by low spatial resolution, long acquisition times and limited measurement capability. Section 2.1 provides a review of the data acquisition techniques closely related to biomechanical modelling; their associated limitations and challenges are discussed.

As computational modelling and simulation greatly facilitate analysis of the biomechanics and motor control of the head and neck, many biomechanical models of upper-airway sub-structures have been created. Holistic models, which incorporate multiple functional units (organs), further the investigation of complex human behaviours, such as swallowing and speech production. Section 2.2 reviews previously reported biomechanical models and provides a detailed description of the state-of-the-art functional reference model of the upper-airway complex.

Although computational models are suitable for exploring general biomechanical and physiological principles in the upper-airway system, they are not necessarily representative of specific patients. Subject-specific modelling enables individualized simulations best suited for clinical purposes. Section 2.3 reviews subject-specific modelling methods and identifies the related limitations and issues.

The ultimate goal of subject-specific modelling is to apply computational modelling and simulation techniques to clinical diagnosis and treatment. Section 2.4 reviews a few biomedical applications of biomechanical modelling and simulation closely related to our efforts.

Figure 2.1: Digitized fibres from a cadaveric forearm using a MicroScribe™ 3DX Digitizer. (a) Delineated FBs highlighted in black on an embalmed cadaveric specimen. (b) Digitized fibres from the specimen. © Taylor & Francis (2014), Li et al. [84].

2.1 Data Acquisition and Measurement

2.1.1 Cadaver Studies

Cadaveric study is the most straightforward process for assessing organ geometric data and mechanical properties. Based on cadavers, The Visible Human Project [20] creates complete, anatomically detailed, three-dimensional representations of the normal male and female human bodies. It has supported many projects in the fields of medical image processing and biomechanics. Gerard et al. [44] use a cadaver to measure the mechanical properties of the tongue. Li et al.
[84] measure the morphologies of muscle fibre bundles (FBs) on a cadaveric forearm, using a MicroScribe™ 3DX Digitizer (0.3 mm accuracy; Immersion Corporation, San Jose, CA, USA); the cadaveric specimen and the digitized fibres are illustrated in Figure 2.1. However, cadaver studies fail to provide individualized information for living humans. Since medical imaging modalities have become more accurate and standardised, and allow in vivo measurements, they tend to supplant cadaver studies in the field of biomechanics.

2.1.2 Radiography, Fluoroscopy and Computed Tomography

Radiography is the first modality for in vivo anatomical imaging of internal structure. Based on measurements of the intensity of X-rays traversing the body, radiography provides a static superimposed 2D representation of all internal structures. Similar to radiography, fluoroscopy uses X-rays to produce real-time 2D images of interior tissues. Stavness et al. [138] use lateral videofluoroscopy to record normal swallowing in three healthy subjects. Since fluoroscopic images do not provide 3D information, this data can only depict motions that are visible in the sagittal view.

Figure 2.2: Midsagittal and axial MSCT images of swallowing. © Springer (2010), Fujii et al. [37], adapted with permission.

Computed Tomography (CT) combines multiple X-ray projections to reconstruct 3D images of tissues. CT can provide 3D information, but its acquisition time is much longer than fluoroscopy. To reduce the acquisition time and improve the temporal resolution of dynamic CT scans, Multi-Slice CT (MSCT) scanners have recently been equipped with multiple arrays of X-ray detectors that are able to reconstruct a 3D volume from a single rotation. Fujii et al. [37] use a 320-detector-row multi-slice computed tomography (320-MSCT) scanner for detailed morphological analysis of swallowing (shown in Figure 2.2). A single-phase 3D image covers the area of the oral cavity, pharynx, larynx and upper esophagus and is captured in 0.35 s. The imaging process is repeated for 29 phases at intervals of 0.1 second, to generate 3D images from the oral to the early esophageal stages of swallowing in one volunteer. Inamoto et al. [66] use a similar single-phase imaging protocol to investigate the effects of age, gender and height on the anatomy of the pharynx and the larynx in 54 healthy volunteers.

Cone Beam CT (CBCT) is another emerging CT imaging technique that uses a cone-beam acquisition geometry and a Flat Panel Detector (FPD) to provide relatively low-dose imaging with high isotropic spatial resolution that can be acquired with a single gantry revolution [105]. However, compared with conventional MSCT, CBCT has relatively poor soft-tissue contrast. Grauer et al. [53] use CBCT records of 62 nongrowing subjects to evaluate pharyngeal airway volume and shape. Recently, Glupker et al. [50] use CBCT to measure airway volume changes between open and closed jaw positions for 60 patients with temporomandibular joint disorders.

As X-rays are absorbed by dense tissue and pass through air, CT can usually depict the skeleton and airway accurately. The high dosage of X-ray exposure is the main drawback of medical CT, which hinders its use on healthy volunteers.

2.1.3 Magnetic Resonance Imaging

Hydrogen atomic nuclei can absorb and re-emit energy in an external magnetic field. Magnetic Resonance Imaging (MRI) aims at differentiating between body tissues by measuring the energy released by hydrogen atoms when the protons return to their initial state.
Tissues react differently depending on the proton density and the duration over which the protons resume their initial state (relaxation time). By varying the parameters of the pulse sequence, MRI can produce different contrast between tissues based on their relaxation properties. The MR image shows good soft-tissue contrast and it does not produce harmful ionizing radiation, which makes it well-suited for clinical research involving volunteers.

For years, MRI has been limited by its long acquisition time due to the inherent trade-off between Signal-to-Noise Ratio (SNR), spatial resolution and temporal resolution. High-resolution MR volumes require a long acquisition time, commonly leading to involuntary movement and introducing motion artifacts. Many methods have been proposed to reduce the acquisition time. The simplest modification is to minimize TR by increasing gradient strength. However, this method is limited by engineering cost and human physiology [79]. Echo train imaging is another complementary approach, which acquires more than one phase encode line per TR. However, such methods compromise not only contrast but also resolution, and in some cases lead to image distortion [79]. Super-resolution imaging techniques have been introduced to generate high spatial resolution MR images in a relatively short time by combining information from a number of images. Woo et al. [158] report two super-resolution MR volumes of the tongue with an isotropic spatial resolution of 0.94 mm and a FOV of 240 mm × 240 mm × 240 mm. More recently, Woo et al. [157] use twenty super-resolution MRI volumes to build a high-resolution atlas of the vocal tract. Parallel imaging uses multiple receiver coils to augment the time-consuming Fourier encoding, which reduces the acquisition time significantly without compromising image contrast. Under-sampling of k-space is another family of acceleration methods; non-Cartesian sampling trajectories (e.g. radial or spiral) are applied to reduce sampling aliases. By combining these acceleration techniques, state-of-the-art MRI scanners have been successfully applied to capture tissue motions within certain spatial and temporal resolutions. Kim et al. [73] produce dynamic 3D MRI recordings of upper-airway obstruction during natural sleep in eight volunteers. The raw data is captured in real time (2.6 fps for the short scan and 1.7 fps for the long scan). Lingala et al. [86] report an MRI system for the study of dynamic vocal tract shaping during speech production. Their MRI recordings achieve spatio-temporal resolutions of 2.4 mm × 2.4 mm every 12 ms for single-slice imaging and every 36 ms for three-slice imaging.

Figure 2.3: Real-time MRI captured during swallowing. © Wikimedia Commons (2011), Uecker et al. [148].

Although MRI can provide high contrast between soft tissues, it fails to provide enough contrast to distinguish material points within the soft tissue itself. In order to capture motion information, MR tagging techniques use a special pulse sequence to create temporary features in soft tissues. Based on the assumption that the Harmonic Phase (HP) value of a fixed material point is time-invariant, the motion of the material points throughout time can be tracked. Xing et al. [160] and Woo et al. [159] use tagged MRI for analysis of 3D tongue motion during speech.

Figure 2.4: Illustration of a FE tongue model developed from images of human lingual myofiber tracts obtained by DTI with tractography.
(a) A 3D sagittal view of lingual myoarchitecture based on in vivo DTI tractography. © Wiley-Liss (2007), Gaige et al. [39], adapted with permission. (b) The tongue FE mesh overlaid on the DTI image slice. © American Physiological Society (2010), Mijailovich et al. [104].

Fluid flow attenuates MR signal intensity in the direction of the magnetic gradient. Since the self-diffusion of water is restricted by tissue geometry (e.g. fibre direction), the tissue geometry can be measured with MRI by applying different gradient directions. Diffusion tensor MRI (DTI) aims to measure the restricted diffusion of water in tissues in order to reveal the microscopic details of biological structures. By combining DTI with tractography, the paths of muscle fibres in tissues can be reconstructed. Heemskerk et al. [60] successfully apply 3D DTI to determine the overall muscle structure, fibre length, pennation angle and PCSA of the mouse. Gaige et al. [39] report the complete 3D myoarchitecture of the human tongue and the geometry of the intrinsic and extrinsic myofiber populations, obtained in vivo from DTI tractography. The fibre structures extracted from DTI images are employed by Mijailovich et al. [104] to drive a finite-element (FE) model of lingual deformation during swallowing. DTI provides a non-invasive way to obtain subject-specific fibre directions in upper-airway tissues. However, this imaging technique is still limited by its long acquisition time, insufficient spatial resolution and low SNR.

2.1.4 Elastography

Elastography aims to estimate tissue mechanical properties by imaging. Strain elastography measures the mechanical properties of tissues by imaging their deformation under slight compression. Shear Wave Elastography (SWE) achieves the measurement by observing the velocity of mechanically excited shear wave propagation within the tissue of interest. Cheng et al. [26] employ SWE to investigate the viscoelasticity of the tongue and soft palate. Elastography provides a non-invasive way to measure tissue properties. However, as soft tissues exhibit non-linear stress-strain behaviour, responses at larger deformations cannot be inferred from elastography alone [155], which limits its application to biomechanical modelling.

2.1.5 Electromyography

Motor neurons transmit electrical signals that cause muscles to contract. Electromyography (EMG) involves measuring muscle activation indirectly by picking up the electric charge produced by an action potential just as it reaches a muscle [46]. Two techniques are currently available for electromyography: surface EMG and intramuscular EMG. Surface EMG picks up voltage signals on the surface of the skin, which is non-invasive. However, it can only measure the activation of superficial muscles. Moreover, it may pick up signals from different muscles, which will cause confusion and inaccuracy. In contrast, intramuscular EMG directly inserts hooked-wire electrodes into the relevant muscle, leading to extremely precise and reliable results [46]. Obviously, hooked-wire electrodes can also cause some discomfort and therefore require more involved ethics approval for research use.

EMG recording has been widely applied to investigate muscle activations during head-and-neck associated activities, such as speech production [11, 41, 98, 147] and swallowing [33, 68, 101, 149]. However, EMG suffers from numerous issues, including cross-talk between adjacent channels, the complexity of the anatomy and the discomfort of volunteers.
Inaddition, the relationship between EMG signals and muscle forces is notstraightforward. Because these issues remain challenging, nowadays, inversemodelling has become a popular alternative tool for the measurement ofmuscle activations.122.2. Biomechanical Modelling of Upper-Airway Complex2.2 Biomechanical Modelling of Upper-AirwayComplexBiomechanical modelling of the human upper-airway complex has receiveda growing interest since it facilitates the analysis of complex human be-haviours, such as speech production [17, 42, 43, 57, 96, 116, 117, 131, 154],swallowing [63, 72, 106, 134, 146] and mastication [56]. However, the biome-chanical models created in these studies only incorporate a part of upper-airway complex, such as a single tongue model [43, 116] or a single facialmodel [96, 131]. Although Gerard et al. [42] add surface represented jaw,hyoid, hard and soft palate into the their Finite Element (FE) tongue modelto shape oral cavity; these components are modelled as non-dynamic struc-tures(i.e. fixed in space, only contacts with the tongue model are enabled).Based on Gerard’s model, Buchaillard et al. [17] then add non-dynamic pha-ryngeal and laryngeal walls and characterize the hyoid bone as a dynamicrigid-body and connect it with other solid structures using spring-like mus-cles. This model can provide high fidelity for reproducing the motions hap-pening inside oral cavity, but it has limited functionality for other organs,such as soft palate, temporomandibular joint (TMJ), pharynx and larynx.Kikuchi et al. [72] develop an upper-airway complex model for swallow-ing simulation using Hamiltonian Moving Particle Semi-implicit (HMPS)method. Their model incorporates the surfaces of tongue, hard palate, softpalate, larynx, pharynx and esophagus. However, the motion of their modelis in response to, rather than muscle contraction, manually defined bound-ary conditions; hence this model lacks the ability to reflect motor-controlmechanisms of real human.In order to support the study of complex human behaviours in differentlevels of fidelity, a Functional Reference ANatomical Knowledge (FRANK)biomechanical model of the head and neck [7] has been implemented in theArtiSynth biomechanical simulation toolkit [93]. Multiple anatomical mod-els that have been involved in different studies [7, 17, 24, 45, 107, 110, 137]are tailored to fit together to generate this generic 3D biomechanical model.FRANK is composed of FE models, rigid bodies and functional structuresincluding point-to-point muscles, joints, inter-component attachments andparametrically-controlled skin meshes. On the one hand, bones and carti-lages are represented as rigid bodes, which has a relatively low computationalcomplexity and avoids challenges such as the need to construct volumetricmeshes. On the other hand, soft tissues are characterized as FE models,which allows simulation of soft tissue deformation. FRANK represents an132.2. Biomechanical Modelling of Upper-Airway Complex(a) mid-sagittal cross-section (b) bones and cartilages (c) soft-tissuesFigure 2.5: FRANK: A template model of the head and neck [7]: (a) Mid-sagittal cross-section. The air-tight airway mesh is shown in cyan. (b) Bones(jaw, maxilla, hyoid) and cartilages (thyroid, cricoid, epiglottis, arytenoid,cuniform) are modelled as rigid-bodies. (c) Soft-tissues (faces, tongue, soft-palate, larynx and pharynx) are modelled as FE deformable-bodies.average human anatomy and function, which provides a biomechanical tem-plate of upper-airway complex. 
For the purposes of this thesis, we give an overview of this state-of-the-art upper-airway complex model in the following sections.

2.2.1 Functional Reference ANatomical Knowledge (FRANK)

The geometries in FRANK are derived from the work of numerous researchers and multiple data sources; most components differ substantially from the original geometry in order to fit into the FRANK framework, using both algorithmic modification and manual adjustments based on related literature and anatomical references [34]. Table 2.1 provides a summary of the model components, including their name, modelling type (either FE model or rigid body), mesh type (either hexahedral-dominant mesh or triangular surface mesh) and references to source publications.

Component            Type   Mesh  Reference
Face                 FE     Hex   [110]
Tongue               FE     Hex   [17]
Jaw, Hyoid, Maxilla  Rigid  Tri.  [137]
Soft Palate          FE     Hex   [24, 45]
Pharynx              FE     Hex   [7]
Larynx               FE     Hex   [107]
Larynx Cartilages    Rigid  Hex   [107]

Table 2.1: Summary of components, including component name, component type (FE model (FE) or rigid body (Rigid)), mesh type (hexahedral-dominant (Hex) or triangular surface mesh (Tri.)) and references to source publications. Larynx cartilages include the thyroid, cricoid, arytenoids, epiglottis and cuneiforms.

Bony Structures

Bony structures are many orders of magnitude stiffer than soft tissue [31], and treating them as rigid bodies can simplify many simulations without significant loss of fidelity. Similarly, FRANK treats cartilage, which is less stiff than bone but still orders of magnitude stiffer than soft tissues, as rigid. Rigid bodies do not require a volumetric mesh but still need high-quality surface meshes for correct contact handling. Bones and cartilage have uniform densities, which factor into the inertia calculation of the rigid body. Figure 2.6 illustrates all the bony structures included in FRANK.

The maxilla and upper teeth represent the skull and are fixed in space to serve as an anchor point [7]. The jaw is connected to the maxilla by the TMJ, which is modelled as three constraint planes. These planes limit the lateral motion of the jaw and constrain it to follow a pre-defined arc when opening and closing [55]. The hyoid attaches to the base of the tongue, the top of the larynx, the jaw, and the cranium through a series of point-to-point muscles.

Figure 2.6: Illustration of all rigid components in FRANK (maxilla, jaw, hyoid, epiglottis, thyroid, cricoid, cuneiform, arytenoid).

Soft Tissues

FRANK represents soft tissues as FE models. Their geometries are illustrated in Figure 2.7.

Figure 2.7: Illustration of all deformable (FEM) components in FRANK (tongue, pharynx, soft palate, larynx, face). Refer to Table 2.2 for density and material properties.

Tongue
The tongue model consists of 946 nodes, 740 hexahedral elements, and 11 pairs of intrinsic and extrinsic muscle bundles (listed in Table A.3) with bilateral symmetry. In a similar approach to that described in [137], the tongue attaches to the hyoid and the jaw via bilateral constraints. Tongue-jaw attachments include the insertion of the genioglossus and geniohyoid onto the mandibular geniotubercle and the insertion of the mylohyoid along the mandibular mylohyoid ridge. Tongue-hyoid attachments include the insertions of the geniohyoid, mylohyoid, and hyoglossus muscles with the hyoid.

Pharynx
The pharynx model extends from the cranial base position to the lower border of the cricoid cartilage.
It attaches to the tongue over a manually defined attachment region and forms the posterior wall of the oropharynx. It also attaches to the thyroid based on proximity. The nodes on the inferior margin of the pharynx are fixed, approximating attachments to the absent organs below it, such as the esophagus. The nodes at the pharyngeal raphe are anchored in space using soft bilateral constraints. Seven pharyngeal muscle groups are embedded in the pharynx model, and some of them are sub-divided into smaller activation bundles (listed in Table A.5). The pharynx-thyroid attachments include the insertions of the middle constrictors, inferior constrictors, salpingopharyngeus and stylopharyngeus with the thyroid.

Soft Palate
The soft palate is attached to the posterior border of the hard palate (the maxilla). The lateral portions of the soft palate geometry conform to the pharyngeal wall, so that the pharynx can attach to it [7]. Five soft palate muscle groups are incorporated into this model (listed in Table A.4). Only the musculus uvulae is strictly intrinsic to the geometry. The other muscles (levator veli palatini, tensor veli palatini, palatoglossus, palatopharyngeus) have extrinsic portions that attach to surrounding components.

Larynx
The epiglottis and laryngeal complex complete the lower part of the airway model of FRANK. The deformable larynx model contains a number of cartilage structures: the thyroid, cricoid, epiglottis, left and right cuneiforms, and left and right arytenoids; these are modelled as rigid bodies embedded within the deformable laryngeal tissue, adding substantial stiffness to the component. The pre-epiglottic portion of the larynx attaches to the hyoid based on proximity [7]. The larynx includes point-to-point muscles connected to the rigid bodies (interarytenoid, lateral cricoarytenoid, posterior cricoarytenoid, and thyrohyoid) and sub-divided FEM-internal muscles (vocalis and muscularis portions of the thyroarytenoid, thyroepiglottic, and anterolateral, anteromedial, and posteriolateral portions of the external thyroarytenoid or ventricularis). Table A.6 lists all the muscles associated with the larynx and their properties.

Face
The face model contains three layers of elements between the superficial and deep surfaces; the thin outermost layer represents the epidermis and dermis, while the intermediate and deep layers represent the hypodermis [35]. The face attaches to the jaw and the maxilla at a set of manually selected locations [139]. Eleven facial muscle groups are included in the model. The muscles and associated properties are listed in Table A.2.

Material Properties
The tongue, larynx and face FE components use a fifth-order Mooney-Rivlin tissue material, where the strain energy W is described as

    W = C_{10}(I_1 - 3) + C_{20}(I_1 - 3)^2 + \kappa(\ln J)^2,    (2.1)

where I_1 is the first invariant of the left Cauchy-Green deformation tensor, C_{10} and C_{20} are the Mooney-Rivlin material parameters, and the term \kappa(\ln J)^2 reinforces the incompressibility.
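Equation (2.1) can be evaluated directly from a deformation gradient. The stand-alone Python sketch below is included for illustration only (it is not FRANK or ArtiSynth code); it assumes the tongue parameters of Table 2.2 and a small, nearly incompressible deformation:

```python
import numpy as np

def mooney_rivlin_energy(F, c10, c20, kappa):
    """Strain-energy density of Eq. (2.1) for a deformation gradient F.

    A minimal sketch of the 2-parameter Mooney-Rivlin form used by the
    tongue, larynx and face components; the simulator's internal
    implementation (e.g. its deviatoric/volumetric split) may differ.
    """
    F = np.asarray(F, dtype=float)
    J = np.linalg.det(F)                  # volume ratio
    B = F @ F.T                           # left Cauchy-Green tensor
    I1 = np.trace(B)                      # first invariant
    return c10 * (I1 - 3.0) + c20 * (I1 - 3.0) ** 2 + kappa * np.log(J) ** 2

# Example: ~2% uniaxial stretch with tongue parameters from Table 2.2
# (c10 = 1037 Pa, c20 = 486 Pa, kappa = 10 * c10).
F = np.diag([1.02, 0.99, 0.99])
print(mooney_rivlin_energy(F, c10=1037.0, c20=486.0, kappa=10 * 1037.0))
```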
The soft palate and pharynx adopt a linear material:

    \sigma = D\epsilon,    (2.2)

where \sigma is the stress vector, \epsilon is the strain vector, and D is the material stiffness matrix:

    D = \frac{E(1-\nu)}{(1+\nu)(1-2\nu)}
        \begin{bmatrix}
        1 & \frac{\nu}{1-\nu} & \frac{\nu}{1-\nu} & 0 & 0 & 0 \\
        \frac{\nu}{1-\nu} & 1 & \frac{\nu}{1-\nu} & 0 & 0 & 0 \\
        \frac{\nu}{1-\nu} & \frac{\nu}{1-\nu} & 1 & 0 & 0 & 0 \\
        0 & 0 & 0 & \frac{1-2\nu}{2(1-\nu)} & 0 & 0 \\
        0 & 0 & 0 & 0 & \frac{1-2\nu}{2(1-\nu)} & 0 \\
        0 & 0 & 0 & 0 & 0 & \frac{1-2\nu}{2(1-\nu)}
        \end{bmatrix}.    (2.3)

E is the Young's modulus and \nu is Poisson's ratio. The densities and material properties of the soft tissues are summarized in Table 2.2.

Name         ρ [kg/m³]  Material
tongue       1040       M.R.: c10 = 1037, c20 = 486, c01 = c11 = c02 = 0 Pa, κ = 10·c10
soft palate  1040       Lin.: E = 500 Pa, ν = 0.4995
pharynx      1040       Lin.: E = 1500 Pa, ν = 0.49
larynx       1040       M.R.: c01 = 2500, c20 = 1175, c10 = c11 = c02 = 0 Pa, κ = 10·c01
face         1040       M.R.: c10 = 2500, c20 = 1175, c01 = c11 = c02 = 0 Pa, κ = 10·c10

Table 2.2: A list of the deformable (FEM) components in FRANK, including density and material properties. A Mooney-Rivlin (M.R.) material is defined by a 5-parameter model and bulk modulus κ, and a linear (Lin.) material is defined by Young's modulus E and Poisson ratio ν.

Muscle Model

The muscles of the upper-airway complex are approximated as point-to-point Hill-type actuators [61]. The contraction force depends on the muscle's activation level, a, the muscle's length, l, and the speed of muscle shortening, ∂l/∂t. The force equation typically consists of two components, passive and active:

    f_{muscle}(a, l, \dot{l}) = f_{passive}(l) + a f_{active}(l, \dot{l}).    (2.4)

The passive force function f_{passive}(l) and active force function f_{active}(l, \dot{l}) are defined by muscle-specific parameters such as the Physiological Cross-Sectional Area (PCSA), tendon ratio R_t, optimal muscle length L_{opt} and maximal muscle length L_{max}. FRANK adopts the piecewise force functions defined in the literature [115]:

    f_{passive}(l) =
    \begin{cases}
    F_{max} & l \geq L_{max} \\
    F_{max} \cdot \frac{l - L_{opt} R_t}{L_{max} - L_{opt}} & L_{opt} < l < L_{max} \\
    0 & l \leq L_{opt}
    \end{cases}    (2.5)

    f_{active}(l, \dot{l}) =
    \begin{cases}
    \frac{1}{2} F_{max} \left[ 1 + \cos(2\pi l_n) \right] & 0.5 < l_n < 1.5 \\
    0 & \text{otherwise}
    \end{cases}    (2.6)

where l_n is the normalized fibre length, l_n = (l - L_{opt} R_t) / (L_{opt}(1 - R_t)), and F_{max} is the maximal muscle force, defined as F_{max} = PCSA · 40 N/cm². The point-to-point muscles attach to rigid bodies and fixed points in space; they also pass through FE models and exert local forces on them.
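To make the muscle model concrete, the sketch below evaluates Eqs. (2.4)-(2.6) for a single actuator at one instant. The parameters are hypothetical and the velocity argument is omitted (Eq. (2.6) depends only on the normalized length), so this is an illustrative reading of the equations rather than the ArtiSynth implementation:

```python
import math

def hill_muscle_force(a, l, f_max, l_opt, l_max, r_t):
    """Quasi-static Hill-type force of Eqs. (2.4)-(2.6); sketch only."""
    # Passive component, Eq. (2.5): linear ramp between L_opt and L_max.
    if l >= l_max:
        f_passive = f_max
    elif l > l_opt:
        f_passive = f_max * (l - l_opt * r_t) / (l_max - l_opt)
    else:
        f_passive = 0.0

    # Active component, Eq. (2.6): cosine bump around the optimal length.
    l_n = (l - l_opt * r_t) / (l_opt * (1.0 - r_t))   # normalized fibre length
    if 0.5 < l_n < 1.5:
        f_active = 0.5 * f_max * (1.0 + math.cos(2.0 * math.pi * l_n))
    else:
        f_active = 0.0

    return f_passive + a * f_active                    # Eq. (2.4)

# Hypothetical muscle: PCSA = 1 cm^2 (F_max = 40 N), optimal length 3 cm,
# no tendon, at its optimal length, with 50% activation.
print(hill_muscle_force(a=0.5, l=0.03, f_max=40.0,
                        l_opt=0.03, l_max=0.045, r_t=0.0))
```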
Airway

FRANK models the upper-airway mucosa layer as a deformable water-tight mesh (as shown in Figure 2.5), which covers and attaches to the bony and soft tissues using a geometric skinning method [140]. The upper-airway mucosa layer deforms according to a distance-weighted scheme along with the other airway components. The position of each mucosa-layer vertex v_s is calculated as a weighted sum of contributions from each master component:

    v_s = v_{s0} + \sum_{i=1}^{M} w_i f_i(q_m, q_{m0}, v_{s0}),    (2.7)

where v_{s0} is the initial position of the mucosa-layer vertex, q_{m0} is the rest state of the i-th master, w_i is the skinning weight, and f_i is the corresponding blending function. To provide two-way coupling between the skinned mesh and the articulators, the forces acting on the mucosa layer are also propagated back to their dynamic masters, allowing fluid-solid interaction [7].

2.2.2 Combined Multi-Body Finite-Element Simulation

ArtiSynth achieves full coupling between multibody and FE components by combining the dynamics of all components into a single Lagrangian system with composite positions q, velocities v, forces f_{system}(a, v, q), and a composite mass matrix M. By applying Newton's second law, we have

    f_{system}(a, v, q) = \frac{d(Mv)}{dt} = M\dot{v} + \dot{M}v.    (2.8)

The system forces consist of the active forces f_{active}(a, v, q) and passive forces f_{passive}(v, q):

    f_{system}(a, v, q) = f_{active}(a, v, q) + f_{passive}(v, q),
    f_{active}(a, v, q) = \Lambda(v, q)\, a,    (2.9)

where \Lambda denotes a nonlinear function that relates the system positions q and velocities v to the active muscle forces.

FRANK components attach to each other by several types of bilateral constraints, B(q). The skull (presented as partial geometry including the maxilla and upper teeth) is anchored in space. The jaw is connected to the maxilla by the TMJ. FE components are attached to other FE components or rigid bodies by nodal attachments. Muscles are approximated as point-to-point Hill-type actuators that may attach to a rigid body or pass through a FE body. The air-tight surface wraps over, and attaches to, the FE models and rigid bodies to create a parametric upper-airway mucosa layer. The contacts between FRANK components are modelled as unilateral constraints, U(q). These constraints are linearized on velocities, as

    B(q)v = 0,  U(q)v \geq 0.    (2.10)

Forward-dynamics simulation involves solving the above system for the motion in response to forces arising from muscle activations. Inverse dynamics, on the other hand, seeks to estimate the muscle activations a that produce a given set of target velocities v^* by solving a quadratic problem:

    \min_{a} \; \frac{w_m}{2}\|v^* - v_{i+1}(a)\|^2 + \frac{w_a}{2}\|a\|^2 + \frac{w_d}{2}\|\dot{a}\|^2
    \text{subject to } 0 \leq a \leq 1,    (2.11)

where v^* is the target velocity trajectory of the tracking points and v_{i+1} is the velocity vector of the tracking points at the next time step, which is a linear function of the muscle activations a. The second term in Equation 2.11 is an l2-regularization term. The third term is a damping term; \dot{a} denotes the time-derivative of the activations a. The weights w_m, w_a and w_d are used to trade off between the cost terms. As a result, at each time step of the inverse simulation, Equation 2.11 is solved to provide muscle activations that advance the forward-dynamics system defined by Equations 2.8 and 2.10. Due to the redundancy of biomechanical systems, one target motion can be produced by multiple combinations of activations.
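One step of the tracking problem in Eq. (2.11) can be sketched as a bounded least-squares solve, assuming the next-step velocity is affine in the activations (v_{i+1}(a) = v_0 + H a) and approximating \dot{a} by a finite difference. This is an illustrative stand-in, not ArtiSynth's inverse controller; the matrix H and all numbers below are made up for the example:

```python
import numpy as np
from scipy.optimize import lsq_linear

def solve_activations(H, v0, v_target, a_prev, h, wm=1.0, wa=1e-3, wd=1e-4):
    """One time step of a tracking QP in the spirit of Eq. (2.11).

    Stacks the motion, l2-regularization and damping terms into a single
    least-squares problem and enforces 0 <= a <= 1 with box bounds.
    """
    n = H.shape[1]
    A = np.vstack([np.sqrt(wm) * H,
                   np.sqrt(wa) * np.eye(n),
                   np.sqrt(wd) / h * np.eye(n)])
    b = np.concatenate([np.sqrt(wm) * (v_target - v0),
                        np.zeros(n),
                        np.sqrt(wd) / h * a_prev])
    return lsq_linear(A, b, bounds=(0.0, 1.0)).x   # activations in [0, 1]

# Toy example: 2 tracked velocity components driven by 3 redundant muscles.
H = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.8, 1.2]])
a = solve_activations(H, v0=np.zeros(2), v_target=np.array([0.2, 0.1]),
                      a_prev=np.zeros(3), h=0.01)
print(a)
```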
However, this approach suffers from sev-eral limitations. Firstly, it requires complete geometry for the target organ,which may be unavailable due to inconsistent image quality. Secondly, au-tomated FE meshing, especially for Hexahedral or Hexahedral-dominantmeshing, remains to be a challenging problem. The manual mesh genera-tion requires significant time and operator effort to complete even a singlemesh. Thirdly, direct construction of biomechanical model needs redesign-ing functional feature, such as muscle definition or coupling attachments, foreach subject model. Because of these issues, this approach can be extremelytime-consuming and labour-intensive.The second category is registration. Instead of direct construction frommedical data, registration aims at finding a deformation that morphs a tem-plate model to a subject dataset. This approach takes advantage of theprior knowledge about the average organ shape, the morphological variabil-ity and the feasible organ functionality in the population, thus reducing theambiguity introduced by medical data and easing the re-designing efforts.Registration has been extensive studies in the filed of imaging processingand computer graphics. With respect to the purpose of this thesis, we givea brief overview of registration based subject-specific modelling method.Articulated Modelling Articulated models use a skeleton as a basis formodelling human motion. These models assume that bones undergo largerigid transformations and local nonrigid surface deformation occurs nearjoints. Template based articulated subject-specific modelling has been ex-tensively studied by computer graphics communities for realistic biomechanics-based animation. These models are represented as either polygonal meshesor point-cloud. Lewis et al. [82] introduce an articulated model that thedisplacements of its vertices are generated by a weighted set of (usuallylinear) influences from neighboring joints. Allen et al. [4] propose a tem-plate articulated model that is represented as a posable subdivision surface.Anguelov et al. [8] introduce a statistical human shape template model thatspans variation in both subject shapes and poses; based on a limited setof available markers specifying the target subjects, the statistical template232.3. Subject-Specific Modellingmodel evolves along the degrees of freedom (pose and shape deformationsubspace) to obtain the best fit. Corazza et al. [29] expand the statisticalhuman shape space described in [8] by incorporating a space of embed-ded kinematic models (anatomically feasible joint centers locations). Thesubject-specific model generated with this method is capable of accuratemotion tracking with Markerless Motion Capture (MMC) systems. In orderto create subject-specific model from medical image segmentation, Gilleset al. [47] incorporate feasible joint limits (angular limits and translationlimits) into their registration framework, to avoid unrealistic joint trans-formation that may be generated due to noise and local minimum. Theirframework treats both local and global deformations in successive regulariza-tion steps: Smooth elastic deformations are represented by an displacementfield between the reference and current configuration of the template; globaland discontinuous displacements are estimated through a projection onto astatistical shape model.FE Modelling FE modelling, in the past three decades, has provided con-siderable understanding to the area of musculoskeletal biomechanics. 
Nu-merical models based on the FE method become very popular because it isable to address the complex geometries, the anisotropic material propertiesand the specific boundary conditions associated with living tissues [18]. Theaccuracy and efficiency of FE simulations (i.e. the solution to the partial dif-ferential equations) is highly predisposed to the quality of the finite elementmesh [122]. Since the 4-noded tetrahedral mesh can be generated automati-cally if the information about organ geometry is available as closed surfaces,linear tetrahedral mesh is the most popular choice for subject-specific mod-elling. However, 4-noded tetrahedral mesh suffers from artificial stiffening,well-known as volumetric locking, when applied in modelling of incompress-ible (or nearly incompressible) continua [65], such as tongue and brain; themesh-locking effect is illustrated in Figure 2.9. One approach to addressthis issue is to improve the tetrahedral elements to prevent locking. Forexample, Average Nodal Pressure (ANP) prevents volumetric locking bydefining nodal volumes and evaluating average nodal pressures in terms ofthese volumes [14, 71]. ANP provides much better results for nearly incom-pressible materials than the standard tetrahedral element with only smallincrease in the computational cost. Another approach is to adopt hexahedraor hexahedra-dominant mesh. Hexahedra mesh not only avoids volumetriclocking but also reduces the computational complexity of FE analysis, mak-ing it preferable for modelling incompressible tissues.242.3. Subject-Specific ModellingFigure 2.9: Illustration of the mesh-locking effect by incompressible materialin a 2D case. Node A is the only free node. To maintain the area of thetriangle element I, node A can only move in the horizontal direction; tomaintain the area of triangle element II, node A can only move in the verticaldirection. Therefore, maintaining the areas of the two triangle elements leadsto zero displacements.As aforementioned in this section, FE model (mesh) can be generateddirectly from organ geometry (surface) for each subject, known as mesh-ing. This method is challenged by two facts. Firstly, automatic meshingtechniques cannot identify distinct mesh features. For example, the 3Dmesh of the face model in [23] consists of two distinct layers of elements:the outer layer represents the dermis tissues, while the inner layer modelsthe hypodermis tissues. Secondly, automatic meshing for producing high-quality hexahedral FE mesh for complicated organ shapes remains to bean ill-posed problem. The semi-automated meshing, however, still needs anexcessive amount of manual effort to achieve satisfactory results [18, 155].Several studies have applied registration method to generate subject-specific FE models. Couteau et al. [30] propose a Mesh-Mathing (M-M) al-gorithm, which firstly computed the volumetric function T (octree-splines)that transforms external nodes of the template FE mesh onto the targetsurface, and then apply T to all internal nodes, leading to a new 3D mesh.Based on Couteau’s MM method, Bucki et al. [18] add a constraint on spacedistortion, which ensures the non-folding property at every point in space.Grosland et al. [54] morph a template hexahedral mesh (with good quality)to a target surface using FE method. Multiple levels of mesh refinement isutilized in their registration framework, i.e. the template mesh initializedwith a low resolution and end with a fine resolution. Wang et al. 
[152] intro-duce a statistical template based approach to automate subject-specific FE252.3. Subject-Specific Modellingmodelling. They first construct a statistical atlas from a shape population,including the statistical shape model and the FE model of the mean shape;then, based on correspondence established between the template and a givensubject shape, the template shape evolves along the degrees of freedom(shape variation of the population) to obtain the best fit; finally, using Free-Form Deformation (FFD), they morph the internal nodes of the FE modelof mean shape to conform with the new geometry. Campbell et al. [22] applya similar approach to automate the subject-specific FE modelling of lumbarspine using a statistical shape model. Although the registration methods,mentioned above, start from good-quality template meshes, they may pro-duce excessive spatial distortion when reposition the internal FE nodes andresult in poor-quality elements. Several mesh untangling and quality im-provement techniques have been proposed by previous studies [36, 76, 77].However, they have no constraints on reallocating surface nodes, which maycause loss of the registration accuracy.To generate good-quality subject-specific FE meshes, Luboz et al. [95]add an extra mesh-repair step to correct invalid elements after Mesh-Matching [30].They repair the registered mesh using iterative approach: In each iteration,for the irregular elements, their nodes that have negative Jacobian value |J|are displaced in the gradient direction of |J|. In order to maintain the reg-istration accuracy, the maximal node displacements (distance between theinitial and final position) are constrained. Following this Mesh-Match-and-Repair (MMRep) ideal, Bucki et al. [18] achieve mesh repair by a two-foldprocess: 1. recover the regularity of inverted elements (|J| < 0); 2. improvethe qualities of the elements to an acceptable level. In the second process,they use Jacobian Ratio (JR) as the measure of the overall distortion of anelement. JR is defined as |Jen|/|Jemax|, where |Jen| is the Jacobian value atnode n in element e, and |Jemax| = maxn∈e |Jen|. They set 1/30 as the minimumJR value and 5mm as the maximal nodal displacement in the repair step.However, Buchi’s subject-specific FE modeling methods suffer from two lim-itations. First, MMRep only maintains a minimal mesh quality. When largedeformation involves in biomechanical simulation, the resulting FE modelmay cause instability. Second, MMRep uses nodal test to decide the elementregularity, i.e. an element is valid if and only if the Jacobian values at everynodes of it are positive. However, this conjecture is false for the hexhedralelement [75], which may cause the mesh repair to fail when MM produceexcessive spatial distortion.This state-of-art FE-mesh registration technique MMRep has been ap-plied to many subject-specific modelling cases. For example, Chabanas262.3. Subject-Specific Modellinget al. [23] generat patient-specific FE models of the face by registering ageneric face model to skin and skull surfaces segmented from CT scans.Their models are used to predict facial soft-tissue deformations resultingfrom bone repositioning in maxillofacial surgery. Harandi et al. [57] applythe MMRep method to generate a subject-specific model of oropharynx byregistering FRANK components (tongue, jaw and hyoid) to a volumetriccine-MRI dataset captured during speech. MMRep provides a fast subject-specific FE mesh generation method. 
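To make the element-regularity measures above concrete, the following minimal Python sketch evaluates the nodal Jacobian test and the Jacobian Ratio for a single 8-node hexahedron. The trilinear node ordering is an assumed convention (not taken from the cited works), and, as noted above, positive corner Jacobians are necessary but not sufficient for the validity of a trilinear hexahedral element.

```python
import numpy as np

# Natural coordinates of the 8 corners of a trilinear hexahedron
# (the node ordering is a common convention and an assumption here).
CORNERS = np.array([[-1,-1,-1],[ 1,-1,-1],[ 1, 1,-1],[-1, 1,-1],
                    [-1,-1, 1],[ 1,-1, 1],[ 1, 1, 1],[-1, 1, 1]], dtype=float)

def corner_jacobians(x):
    """x: (8,3) nodal coordinates. Returns det(J) of the trilinear map at each corner."""
    dets = []
    for xi, eta, zeta in CORNERS:
        # derivatives of the trilinear shape functions evaluated at this corner
        dN = 0.125 * np.stack([
            CORNERS[:, 0]*(1 + CORNERS[:, 1]*eta)*(1 + CORNERS[:, 2]*zeta),
            CORNERS[:, 1]*(1 + CORNERS[:, 0]*xi )*(1 + CORNERS[:, 2]*zeta),
            CORNERS[:, 2]*(1 + CORNERS[:, 0]*xi )*(1 + CORNERS[:, 1]*eta )], axis=1)
        J = x.T @ dN                     # 3x3 Jacobian matrix at the corner
        dets.append(np.linalg.det(J))
    return np.array(dets)

def jacobian_ratio(x):
    """Nodal test plus Jacobian Ratio: JR = min_n det(J_n) / max_n det(J_n)."""
    d = corner_jacobians(x)
    if np.any(d <= 0.0):                 # nodal test flags an inverted element
        return 0.0
    return d.min() / d.max()

# unit cube: all corner Jacobians are equal, so JR = 1 (up to round-off)
cube = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                 [0,0,1],[1,0,1],[1,1,1],[0,1,1]], dtype=float)
print(jacobian_ratio(cube))
```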
However, none of them focuses onregistration of hybrid models that couple multiple rigid bodies and FE com-ponents.Multi-structure Modelling A number of musculoskeletal models withmultiple anatomical structures, such as bones, skins, muscles and tendons,have been proposed to enable realistic biomechanical simulations [7, 10, 80,117, 137, 142]. In order to create fictional characters with precise internalanatomy for realistic animation, Ali-Hamadi et al. [3] propose an anatomytransfer method, which transfers a template anatomical model to an arbi-trary target character defined by its boundary representation (skin). Theyfirst compute the registration of the template and the target skin. Then thetemplate bones are transferred to the target character. Finally, the bonelayer, along with the target skin eroded using the fat thickness information,are then used to define a volume where we map the internal anatomy ofthe template model to the target character using harmonic (Laplacian) de-formation. This anatomy-transfer method can quickly generate anatomicalmodels for a wide range of target characters. However, it has two unac-ceptable drawbacks in terms of subject-specific biomechanical modelling.Firstly, in most cases, the correspondence information extracted from thesubject medical data is not the boundary of the unknown anatomical struc-tures; hence, both interpolation and extrapolation are needed in order totransfer the anatomy. Besides, this method fails to preserve mesh quality oftemplate models; it is very likely to generate invalid elements in FE modelswhen large deformation is needed. Currently, creating subject-specific mod-els with multiple anatomical structures attracts much less attention thansubject-specific modelling of skeletons and isolated organs. There remainsa need for an efficient method that can map multi-structure biomechanicalmodels to given subject data and maintain their functionality.272.4. Biomedical ApplicationsFigure 2.10: Virtual implantation of intralaryngeal prosthesis on a 3D recon-struction from a CT scan of a patient. c©Wiley (2016), Raguin et al. [123],adapted with permission.2.4 Biomedical ApplicationsSubject specific modelling can be put into the practice to assist in managinga wide range of different medical situations, such as analyzing dysfunctionalcases, comparing treatment options when more than one possibility exists,and postoperative prediction. Here we review several potential biomedicalapplications that are most closely related to our efforts.2.4.1 Intralaryngeal Prosthesis ImplantationIntralaryngeal prosthesis proves to be the optimal solution for all those pa-tients who have dysphagia due to an impairment of the pharyngeal stage:Due to a deficiency in the brain or nervous system, the epiglottis is notable to prevent food and liquids from penetrating into the pulmonary tract.Raguin et al. [123] apply virtual implantation on 3D reconstruction froma CT scan sequence of the patient in pre-implantation step to determinethe size of the prosthesis (Figure 2.10). We expect subject-specific mod-els of upper-airway complex to assist with pre-implantation planning andcustomizing prosthesis by predicting the postoperative biomechanics of theassociate behaviours such as swallowing and breathing.282.4. 
Biomedical Applications2.4.2 Intensity Modulated Radiation TherapyIntensity-modulated radiation therapy (IMRT) is an advanced mode of high-precision radiotherapy that uses computer-controlled linear accelerators todeliver precise radiation doses to a malignant tumor or specific areas withinthe tumor. Due to high radiation doses, patients with head and neck cancerhave risk of suffering from swallowing disorder (or dysphagia) following theIMRT treatment. Swallowing-sparing intensity-modulated radiation ther-apy (sw-IMRT) tends to shape doses of radiation to avoid salivary glandsand oropharyngeal structures thought to be essential to swallowing [113].Song et al. [133] apply a mathematical model of trabecular bone to iden-tify patients with high risk for IMRT treatment-related bone fracture. Weexpect to apply subject-specific models of upper-airway complex to assessswallowing muscle function of specific subjects, and for understanding of therelationship between the swallowing dysfunction risk factor and the radiationdose delivered by an IMRT plan, which can provide valuable information forIMRT treatment planning.2.4.3 Hemiglossectomy and HemimandibulectomyTreatment of oral cancer commonly involves surgical resection of canceroustissue, in addition to radiation therapy and chemotherapy. Depending onthe size and location of the lesion, tissue resection can involve the portionsof the mandible (hemimandibulectomy), tongue (hemiglossectomy), floor ofmouth, and associated muscles [136].Biomechanical tongue models have been used to simulate the effect ofhemiglossectomy [16, 38]. These studies modified the structure of a tonguemodel to mimic tongue resection and reconstruction with free-flap soft-tissuegrafts. The reported simulations deal primarily with the effect of stiffening asub-region of the tongue, representing a legion or reconstruction, on tonguemovements.Vascularized osteocutaneous, osteomyocutaneous and alloplastic graftsare commonly used to restore mandibular continuity after hemimandibulec-tomy [1, 59, 102, 119, 127]. Whether or not the jaw is reconstructed, Hemi-mandibulectomy can significantly alters jaw biomechanics and deficienciesin mastication, speech and other orofacial functions are often observed [56].Numerical simulation methods have been applied to investigate the influ-ence of jaw reconstruction on mandible movement and bite force [56, 136];Stavness [136] use inverse simulation to find muscle activation strategy thatcan compensate for functional deficits after hemimandibulectomy.292.5. Discussion and ConclusionSubject-specific modelling can provide anatomical and physiological in-formation for a specific patient with oral cancer. It enables physicians totake account of the inter-subject variability of oropharyngeal biomechan-ics. We expect subject-specific models of upper-airway complex to predictthe surgical influence on human behaviours, such as speech, swallowing andmastication, and to assist with biomedical treatment and rehabilitation plan-ning.2.4.4 Maxillofacial SurgeryOrthognathic surgery is addressed for patients suffering from maxillofacialdysmorphosis of the lower part of the face, i.e. from disequilibrium betweenthe mandible, the upper jaw and the face [23]. Subject specific modellingmethod has been applied to predict aesthetic outcomes of maxillofaical surg-eries [23, 49, 108, 153]. The reported studies modify skull structure to mimica surgical procedure, and then use passive FE models of the facial tissue topredict the resulting impact on the face surface. 
This approach can be ex-panded to incorporate active deformation (deformation in respond to musclecontraction), thereby allowing prediction of different post-operative facialexpressions.2.5 Discussion and ConclusionCadaveric study is the most straightforward process for assessing organ ge-ometric data and mechanical properties. In recent years, a wide range ofin vivo medical imaging techniques have been applied to visualize the com-plex anatomical structures of bones, soft tissues and muscles of the headand neck. As a natural extension of observational analysis, biomechani-cal modelling combines the geometric and mechanical information within amathematical representation thereby enabling functional analysis. Manybiomechanical models have been proposed in the literature for the jaw,tongue, face, and other upper-airway sub-components. By coupling thesesub-components, a state-of-art holistic model of upper-airway complex, i.e.FRANK (Section 2.2.1), has been generated, representing the average hu-man anatomy and function. Subject-specific modelling enables the transi-tion from generic understanding of biomechanical phenomena to addressingbiomechanics of a particular individual. Creating a biomechanical modelrelies heavily on expert interaction. The slow process of model creationprevents us from simulating large numbers of individual cases. To ease theefforts of subject-specific modelling, registration methods tends to morph302.5. Discussion and Conclusiona predefined template model to align with certain subject data. However,few published methods focuses on registration of holistic models of the cou-pled upper-airway system, such as FRANK. A patient-specific model of theupper-airway system can enable motor-control simulations of complex hu-man behaviours, such as swallowing and speech production. Thus, numerousbiomedical applications will become feasible, including analyzing dysfunc-tion and planning treatment for patients.This chapter has presented an overview of biomechanical modelling ofthe upper-airway system, subject-specific modelling methods and the relatedbiomedical applications. We have identified areas of research that requirefurther investigation. The state-of-art biomechanical model of upper-airwaycomplex are generic and irrelevant to medical data and measurement of anyparticular subjects. Currently available subject-specific modelling methodsdo not apply to hybrid and modularized models. The improvement of thefunctional resolution of the subject-specific model enables the simulations ofcomplex human behaviours and can be applied to the potential treatment. Inthe following chapters we describe our contributions to these open researchproblems.31Chapter 3Subject-Specific Modelling ofUpper-Airway ComplexSubject-specific modelling enables individualized simulations best suited forclinical purposes. Subject-specific models may be directly reconstructedfrom medical image data. However, this approach proves to be time-consumingand labour-intensive in case of complex models. Registration methods aimat reducing the ambiguity introduced by medical data and easing the re-designing efforts: a deformation field is sought that morphs a pre-designedtemplate model to a target dataset. In previous chapter, we reviewed afunctional template of upper-airway complex: FRANK. 
In this chapter, weintroduce a registration method for transferring this holistic model from thegeneric space into a specific subject space.Model registration for constructing subject-specific upper-airway modelshas two main challenges. First, complete medical data for all componentsinvolved in FRANK is usually hard to obtain for a single subject; hence themodel geometry partially is unknown in most cases. To address this issue,model registration needs to minimize the morphological deviation from theaverage anatomy (i.e. the template) when enforcing the correspondenceconstraints.The second challenge of registration methods is preserving the function-ality of the resulting model in terms of two aspects: 1. the numerical sta-bility and accuracy; 2. the correct motor control behaviours. As the meshquality will significantly influence the numerical accuracy and stability ofFE models, maintaining mesh quality is one of the most important require-ments for the registration. Besides, in order to obtain correct motor controlbehaviours for a hybrid and modularized model, such as FRANK, the cou-pling constraints, such as the connectivity between the FE components, thelocation of joints, the positions of muscle attachments need to be consistentwith the new model geometry.We summarize the requirements mentioned above as three types of regu-larity: 1. inter-component regularity: maintaining the spatial relation-ship, including connectivity, topology, relative posture and size, between323.1. Multi-Structure Registrationthe model components; 2. intra-component regularity: preserving theshape regularity of the underlying discretization structures of every sub-components; 3. functional regularity: keeping the functional information(including coupling attachments between components, muscle attachmentsand biomechanical properties) similar to the template but relevant to thenew model geometry.In following sections, we describe a subject-specific upper-airway com-plex modelling method that aims at morphing the holistic template to thesubject data and maintaining the three types of regularity. This methodis based on a novel multi-structure registration technique, which is ableto maintain both the inter- and intra-component regularity, and to accom-modate different geometric discretization. We first introduce the multi-structure registration technique in Section 3.1. Then, the detailed descrip-tion of our proposed subject-specific modelling framework follows in Sec-tion 3.2. This framework is evaluated by registering FRANK onto twovolumetric medical image dataset of the human head and neck. As a re-sult, we obtain two subject-specific models of upper-airway complex anddemonstrate their functionality in biomechancial simulation of simple speechmotions. Finally, the discussion and conclusion are given in Section 3.3.3.1 Multi-Structure RegistrationEfforts to register a complete holistic model, such as FRANK, into the sub-ject domain mostly suffer from excessive inter- and intra-component dis-tortion, specially when there is a large morphological difference or sparsecorrespondences between the template and the subject data.Free-form deformation (FFD) [52, 129] is frequently used for non-rigidregistration [2, 18, 125, 126, 128, 163]. Meshes or images are embeddedinto a deformable virtual grid, and morphed alongside, while the grid ver-tices move. 
Through regularizing the grid deformation, FFD minimizesthe distance between the correspondence pairs, while smoothly interpolat-ing in the regions where no explicit correspondence exists. Tensor prod-uct B-spline volume is the most popular FFD model for registration dueto its low computational complexity. To model a large range of deforma-tions, multi-resolution B-spline FFDs are commonly used in previous stud-ies [2, 18, 125, 126, 128, 163]; FFD grids are progressively refined to sequen-tially reduce the registration error. This approach maintains an adjustableregistration accuracy, and is often sufficient to avoid spatial folding, as il-lustrated in Figure 3.1. However, multi-resolution B-spline FFD does not333.1. Multi-Structure RegistrationFigure 3.1: Top: Low FFD resolutions limit the registration accuracy; highFFD resolutions may develop spatial folding (see arrow). Bottom: Multi-resolution FFD can maintian an adjustable registration accuracy, withoutdeveloping any spatial folding. c©Springer (2001), Schnabel et al. [128],adapted with permission.guarantee a homeomorphism (i.e. continuous, invertible transformations).To address this issue, Edwards et al. [32] model the non-folding propertyas a soft constraint. Choi and Lee [27], however, derive sufficient condi-tions for FFD injectivity which are represented in terms of control pointdisplacements. Rueckert et al. [126] blend these injectivity conditions intotheir composite-FFD framework in order to achieve diffeomorphic image reg-istration. Homeomorphic FFD maintains the spatial relationship betweenembedded components. However, the overall regularity of the grid defor-mation cannot guarantee the regularity within each embedded componentitself (i.e. FFD is not shape-aware). When large deformation is requiredfor registration, quality of the embedded meshes is very likely to be under-mined. Besides, the deformation in the interpolated region can be arbitrary,i.e. the deformed morphologies may vary for different initial settings of thevirtual grid when the correspondence is sparse.Local Transformation Model (LTM) is another popular deformation modelfor non-rigid registration [5, 6, 64, 81, 83, 85, 112, 143, 161]. The LTM map-ping function is defined as F (pi) = Tipi + ti, where pi is the i-th vertexposition; Ti and ti are the local linear transformation and the translationrespectively. Smuth et al. [143] and Huang et al. [64] define the local trans-formations Ti as the rotations, Ti = Ri, leading to As-Rigid-As-Possible343.1. Multi-Structure Registration(ARAP) deformations. ARAP deformations are length-preserving; hencethey can effectively avoid mesh distortion in registration. However, theserigid local transformation models are usually over-constrained for registra-tion; they are incapable of handling the correspondences with different sizesor those which undergo large local stretching [162]. To increase the flexi-bility of the deformations, several As-Similar-As-Possible (ASAP) mappingmethods are proposed in previous studies [81, 112, 161] for registration.The local transformations Ti are defined as a combination of scaling androtation, Ti = siRi. ASAP can effectively avoid mesh distortion and isflexible enough to allow both local and global scaling. As-Affine-As-Possible(AAAP) deformations allow more freedom to capture fine details in reg-istration [5, 6, 83, 85]. Their local transformations are defined as affinetransformations, Ti = Ai. 
Since AAAP deformations have large degreesof freedom, they are very likely to make the registration under-constrained,particularly when the correspondence is sparse. Therefore, additional con-straints are often applied to regularize the registration system. For example,Allen et al. [5] and Amberg et al. [6] add smooth constraints into their reg-istration frameworks to ensure adjacent affine transformations are similar.LTMs are useful for avoiding distortion within a single component. Howeverthey cannot preserve the spatial relationship between different components.Noticing that FFD and LTM can compensate the shortcomings of eachother, we choose to combine a B-Spline FFD and an ASAP mapping into asingle transformation model: Structure-Preserving Free-Form Deformation(SPFFD). SPFFD permits two exciting deformation properties: 1. the em-bedded shapes are mapped to a target configuration by a homeomorphism,i.e. the mapping function is continuous and topology-preserving; 2. for eachshape, the mapping function is a similarity transformation (nearly), whichavoids excessive distortion while being flexible enough to allow local andglobal scaling. Moreover, the proposed SPFFD can accommodate differ-ent geometric discretization (e.g. surface mesh, volumetric mesh). We firstdemonstrate the overall framework of the proposed multi-structure registra-tion in Section 3.1.1. The we introduce the generalized ASAP deformationin Section 3.1.3. The full description for SPFFD is given in Section 3.1.4.Finally, we demonstrate the performance of our proposed registration frame-work using a synthetic data set; the experiments and results are demon-strated in Section 3.1.5.353.1. Multi-Structure RegistrationFigure 3.2: Proposed multi-structure registration framework.3.1.1 Registration FrameworkOur multi-structure registration pipeline is illustrated using a simple 2D di-dactic case in Figure 3.3. Given a set of Source Components (SC) (rectanglesin Figure 3.3(a)), we assume some of them have point-to-point correspon-dences with certain target surfaces (T) (ellipse in Figure 3.3(a)); we referto these components as Reference Components (RC) (red rectangle in Fig-ure 3.3(a)). As shown in Figure 3.2, we register the source components tothe target surfaces in two sequential steps:Step 1. Correspondences Establishment: The first step is to estab-lish the correspondences between the reference components and the targetsurfaces, we refer to them as C(RC, T ) (Figure 3.3(c)). We find C(RC, T ) byregistering the surface of RC to the target surfaces using a modified extrinsicIterative Closet Point (ICP) method, which is described in Section 3.1.2. Byminimizing the correspondence energy Ecorr, we maximize the closest-pointshape similarity between the reference components and the target surfaces.To allow enough flexibility for registration of two shapes with differentmorphology, and to preserve the fine details of the template, we employ anAs-Similar-As-Possible (ASAP) mapping. The ASAP mapping is obtainedby minimizing a deformation energy, EASAP =∑nElocal(n), where Elocal islocal energy that measures the difference between deformation gradients andthe corresponding similarity transformations. Details of our ASAP mappingare introduced in Section 3.1.3.Thus we established the correspondence C(RC, T ) by minimizing thefollowing energy:ECE = Ecorr + wASAPEASAP . (3.1)363.1. Multi-Structure RegistrationFigure 3.3: Illustration of multi-structure registration pipeline. 
(a) the ini-tial setting: all the rectangles are source components; the red rectanglerepresents the reference component, and other two rectangles are expectedto deform along with the red. The ellipse by dash line represents the tar-get surface. (b)-(c) establish the correspondences between the red rectangleand the ellipse by computing the registration between them. (d)-(f) all therectangles are embedded into a SPFFD grid; by moving the SPFFD controlpoints (black dots), the distances between the correspondences are mini-mized while making minimal morphological change on the rectangles. Inorder to convexify the objective function and prevent overfitting, a coarse-to-fine strategy is applied: SPFFD starts from a coarse deformation grid,and then gradually increases the grid resolution.373.1. Multi-Structure RegistrationWe initialize the registration with high ASAP weights wASAP = 1.0. Ifthe relative correspondence energy did not change significantly between theiteration j and j + 1 (i.e. |Ejcorr − Ej+1corr|/Ejcorr < 0.1), we additionallyrelax the regularization weights wj+1ASAP = wjASAP − 0.01, until wASAP <0.02. The adaptation of weights initially favors global rigid alignment andsubsequently lowers the stiffness of the surface of RC to allow increasingdeformation as the optimization progresses. Hand tuning is not necessaryin the correspondences-establishment step.Step 2. Deformation Transfer: In the second step, we apply the pro-posed Structure-Preserving Free-Form Deformation (SPFFD) to generate ahomeomorphic, shape-aware mapping function, which transfers the defor-mation of the reference components to all the source components throughan energy minimization (from Figure 3.3(d) to Figure 3.3(f)). The knowncorrespondences, C(RC, T ), act as constraints for our deformation trans-fer process. Since such constraints are over-deterministic, we enforce themsoftly by minimizing a correspondence energy Ecorr′ , which is fomulatedin Section 3.1.2. SPFFD maintains the inter- and intra-component shaperegularity by minimizing a smooth energy Esmooth and a shape-preservingenergy Eshape respectively. The details of our proposed SPFFD are given inSection 3.1.4.Sum the individual energy terms from above to form the full objectivefunction of the optimization:EDF = Ecorr′ + wsmoothEsmooth + wshapeEshape. (3.2)We keep the two regularization weights (the smooth weight wsmooth andshape-preserving weight wshape) at the same level, i.e. wsmooth = wshape =wreg. The regularization weight wreg controls the flexibility of the sourcecomponents, and can be tuned to allow appropriate levels of deformationfor different applications.3.1.2 Correspondence ComputationFor discrete-shape registration, correspondences are defined for each ver-tex as the displacement that maximizes a certain similarity measure [47].Extrinsic similarity measures are based on the current configuration of thesurfaces in the Euclidean space. Closest point criterion is the most popu-lar extrinsic similarity measure for discrete-shape registration for its sim-plicity [15]. Iterative Closest Point (ICP) algorithm [12] seeks to register383.1. Multi-Structure Registrationtwo shapes by finding the best transformation that minimizes the extrinsicdistance between them. Since the error function of the closest point reg-istration framework is non-linear and non-smooth, ICP algorithm is verylikely to converge at certain local minima, particularly for non-rigid regis-tration. 
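One practical guard against such local minima in our correspondence-establishment step is the gradual relaxation of the ASAP weight described above. The sketch below mirrors that schedule only; the actual energy minimization is not shown (the dummy energy values stand in for the true correspondence energies), so this is an illustration of the bookkeeping, not of the optimization itself.

```python
def relax_asap_weight(e_prev, e_curr, w_asap, rel_tol=0.1, step=0.01):
    """ASAP-weight schedule: when the relative change of the correspondence
    energy between two iterations falls below rel_tol, lower the weight."""
    if e_prev > 0 and abs(e_prev - e_curr) / e_prev < rel_tol:
        w_asap -= step
    return w_asap

# schematic outer loop: start stiff (w_ASAP = 1.0) to favour global rigid
# alignment, then soften the regularization until w_ASAP < 0.02
w_asap, e_prev = 1.0, None
for e_curr in [10.0, 9.5, 9.2, 9.1, 9.05]:   # dummy correspondence energies
    if e_prev is not None:
        w_asap = relax_asap_weight(e_prev, e_curr, w_asap)
    e_prev = e_curr
    if w_asap < 0.02:
        break
print(w_asap)
```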
Intrinsic methods, on the other hand, use distance metrics embedded in lower dimensions as the shape-similarity measures [15, 64, 74, 87, 100]. These intrinsic properties of shapes are quasi-invariant under object pose and deformation [15]. However, intrinsic similarities are sensitive to topological noise, which makes them inappropriate for partial registration. Probabilistic correspondences are another class of shape-similarity measurements [28, 51, 69, 90, 91, 109, 145]. Instead of assuming a one-to-one (binary) correspondence, one-to-many relaxations are applied to allow for fuzzy correspondences [69]. Probabilistic similarity measures are less sensitive to missing correspondences and outliers. In addition, from an optimization point of view, probabilistic registration frameworks can replace a non-linear, non-smooth error function with a convex smooth approximation [69]. These properties make them valuable when the template and target shapes are complex, noisy or partially overlapping. However, probabilistic similarity measures often lead to considerably larger computational cost.

For simplicity of implementation and relatively low computational cost, we establish the correspondences between the surfaces of the RC and the target surfaces using a pairwise ICP method. At each iteration, the template surfaces are projected onto the corresponding target surfaces and vice versa, until a satisfactory error distance is obtained (Figure 3.4(a)). In order to avoid local minima, we assign a larger standard deviation of distances in the tangent direction compared to the normal direction, as shown in Figure 3.4(b). The correspondence energy is then defined as:
$$E_{corr} = \sum_c (\mathbf{v}_c - \mathbf{t}_c)^T \mathrm{Cov}_c^{-1} (\mathbf{v}_c - \mathbf{t}_c), \qquad (3.3)$$
where $c$ is the index of a correspondence pair, $\mathbf{v}_c$ is a vertex on the surface of the RC and $\mathbf{t}_c$ its target position, and $\mathrm{Cov}_c$ is the $3\times 3$ covariance matrix for the $c$-th correspondence, which is calculated as:
$$\mathrm{Cov}_c = \mathbf{R}_c \begin{bmatrix}\epsilon & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix} \mathbf{R}_c^T, \qquad (3.4)$$
where $\epsilon$ is a small constant representing the covariance along the surface normal. $\mathbf{R}_c$ is a $3\times 3$ orthogonal matrix; the first column of $\mathbf{R}_c$ is the normal vector of the target surface at position $\mathbf{t}_c$, and the other two columns contain the two basis vectors of the tangent plane.

Figure 3.4: Iterative closest point (ICP) correspondence. (a) The template surface is projected onto the target surface and vice versa. (b) In order to avoid local minima, a larger standard deviation of distances is assigned in the tangent direction compared to the normal direction.

In the deformation-transfer step, the correspondence energy $E_{corr'}$ is similar to Equation 3.3:
$$E_{corr'} = \sum_{c\in C(RC,T)} (\mathbf{v}_c - \mathbf{t}_c)^T \mathrm{Cov}_c^{-1} (\mathbf{v}_c - \mathbf{t}_c). \qquad (3.5)$$
Different from Equation 3.3, the target positions $\mathbf{t}_c$ are fixed in this equation.

3.1.3 As-Similar-As-Possible Deformation

Yamazaki et al. [161] proposed an ASAP mapping method for triangle meshes that has a good convergence rate and relatively low computational cost. We generalize Yamazaki's method to accommodate different geometric discretizations. Suppose $U_n$ is a certain local unit on the 3D mesh $M$. The global deformation energy $E_{ASAP}$ of mesh $M$ is the weighted sum of local energies $E_{ASAP}(n)$ measured at the local units $U_n$:
$$E_{ASAP} = \sum_n w_n E_{ASAP}(n), \qquad (3.6)$$
where $w_n$ are the weights of the local energies. Suppose $\mathbf{v}_i$ is the position of the $i$-th vertex of mesh $M$ and $\mathbf{v}'_i$ is its deformed position. Each local unit $U_n$ is associated with a set of neighbouring vertices $\mathcal{N}(n)$. Our ASAP mapping restricts the local linear transformation
for $U_n$ to a similarity transformation. We define the local energy for $U_n$ as:
$$E_{ASAP}(n) = \left(\frac{1}{s_n}\tilde{\mathbf{R}}_n^T\mathbf{x}'_n - \mathbf{x}_n\right)^T \mathbf{K}_n \left(\frac{1}{s_n}\tilde{\mathbf{R}}_n^T\mathbf{x}'_n - \mathbf{x}_n\right) = \mathbf{x}_n^T\mathbf{K}_n\mathbf{x}_n - \frac{2}{s_n}\mathbf{x}'^T_n\tilde{\mathbf{R}}_n\mathbf{K}_n\mathbf{x}_n + \frac{1}{s_n^2}\mathbf{x}'^T_n\tilde{\mathbf{R}}_n\mathbf{K}_n\tilde{\mathbf{R}}_n^T\mathbf{x}'_n, \qquad (3.7)$$
where $\mathbf{x}_n$ is the concatenated vector of the vertices $\mathbf{v}_i \in \mathcal{N}(n)$; $\mathbf{K}_n$ is a local stiffness matrix associated with unit $U_n$; $\mathbf{R}_n$ and $s_n$ are the local rotation matrix and the scaling factor at unit $U_n$; a tilde denotes the Kronecker product, $\tilde{\mathbf{R}}_n = \mathbf{I}_{3\times 3}\otimes\mathbf{R}_n$, where $\mathbf{I}_{3\times 3}$ is an identity matrix.

Following Sorkine and Alexa [135], we minimize $E_{ASAP}$ by iteratively alternating between local and global phases:

• Local Phase: Fix the vertices, and compute the best-fit local rotations and scaling factors. We determine the local rotations by:
$$\mathbf{R}_n = \arg\min_{\mathbf{R}_n} E_{ASAP}(n). \qquad (3.8)$$
If the local stiffness $\mathbf{K}_n$ is rotation-invariant (i.e. $\mathbf{K}_n = \tilde{\mathbf{R}}_n\mathbf{K}_n\tilde{\mathbf{R}}_n^T$), Equation 3.8 can be expanded as:
$$\mathbf{R}_n = \arg\max_{\mathbf{R}_n} \mathbf{x}'^T_n\tilde{\mathbf{R}}_n\mathbf{K}_n\mathbf{x}_n. \qquad (3.9)$$
In contrast, if $\mathbf{K}_n$ in Equation 3.7 is not rotation-invariant, i.e. $\mathbf{K}_n \ne \tilde{\mathbf{R}}_n\mathbf{K}_n\tilde{\mathbf{R}}_n^T$, we assume that the incremental local rotation $\delta\mathbf{R}_n$ is negligible in each iteration, so that $[\tilde{\mathbf{R}}_n]_{t+1}\mathbf{K}_n[\tilde{\mathbf{R}}_n]^T_{t+1} \approx [\tilde{\mathbf{R}}_n]_{t}\mathbf{K}_n[\tilde{\mathbf{R}}_n]^T_{t}$, where $[\mathbf{R}_n]_t$ is the local rotation at the $t$-th iteration. Then we can determine the optimal $\delta\mathbf{R}_n$ by:
$$\delta\mathbf{R}_n = \arg\max_{\delta\mathbf{R}_n} \mathbf{x}'^T_n\,\delta\tilde{\mathbf{R}}_n[\tilde{\mathbf{R}}_n]_t\mathbf{K}_n\mathbf{x}_n. \qquad (3.10)$$
We optimize Equations 3.9 and 3.10 using the Singular Value Decomposition (SVD) method proposed by [9]. To update the local scaling factors, we use the updated $\mathbf{R}_n$ values and enforce that the partial derivative of Equation 3.7 w.r.t. $s_n$ vanishes; this leads to:
$$s_n = \arg\min_{s_n} E_{ASAP}(n) = \frac{\mathbf{x}'^T_n\tilde{\mathbf{R}}_n\mathbf{K}_n\tilde{\mathbf{R}}_n^T\mathbf{x}'_n}{\mathbf{x}'^T_n\tilde{\mathbf{R}}_n\mathbf{K}_n\mathbf{x}_n}. \qquad (3.11)$$

• Global Phase: Fix the local rotations and scaling factors, and update the vertices by minimizing the deformation energy $E_{ASAP}$ in Equation 3.6.

Figure 3.5: Notation for the triangular mesh.

Since our model involves both volumetric and triangle meshes, we formulate, without loss of generality, the ASAP mapping for these two discretizations as follows.

Triangle Mesh  Given a triangle mesh $S_t$, we interpret the unit $U_i$ as the one-ring neighbourhood $\mathcal{N}(i)$, the set of vertices connected to the $i$-th vertex, as shown in Figure 3.5. Analogous to [135], we define the local energy as:
$$E_{ASAP}(i) = \sum_{j\in\mathcal{N}(i)} w_{ij}^2 \left\|\frac{1}{s_i}\mathbf{R}_i^T(\mathbf{v}'_i - \mathbf{v}'_j) - (\mathbf{v}_i - \mathbf{v}_j)\right\|^2, \qquad (3.12)$$
which can be written compactly in the form of Equation 3.7. Noting that $\mathbf{K}_i = \tilde{\mathbf{R}}_i\mathbf{K}_i\tilde{\mathbf{R}}_i^T$, the local rotation $\mathbf{R}_i$ is determined using Equation 3.8. To avoid discretization bias, we adopt the cotangent weighting formula [103, 121]:
$$w_{ij} = \begin{cases} \frac{1}{2}(\cot\alpha_{ij} + \cot\beta_{ij}) & \{ij\}\ \text{is an interior edge};\\ \frac{1}{2}\cot\alpha_{ij} & \{ij\}\ \text{is a boundary edge};\\ 0 & \text{otherwise}, \end{cases} \qquad (3.13)$$
where $\alpha_{ij}$ and $\beta_{ij}$ are the angles opposite the mesh edge $(i,j)$. To compute the global deformation energy, we set the weights in Equation 3.6 as $w_i = \frac{1}{A_i}$, where $A_i$ is the Voronoi area of the cell at center vertex $\mathbf{v}_i$ [103].

Volumetric Mesh  Given a volumetric mesh $S_v$, using the Finite Element (FE) method we interpret the unit $U_e$ as the $e$-th element; the nodes of $U_e$ are denoted by $\mathcal{N}(e)$. We update the global phase by minimizing the mechanical strain energy $E$:
$$E = \int_\Omega \boldsymbol{\sigma} : \boldsymbol{\epsilon}\, dV, \qquad (3.14)$$
where $\boldsymbol{\sigma}$ and $\boldsymbol{\epsilon}$ are the stress and strain tensors, respectively. The local stiffness can be written as:
$$\mathbf{K}_e = \int_{\Omega_e} \mathbf{B}_e^T\mathbf{D}_e\mathbf{B}_e\, dV, \qquad (3.15)$$
where $\mathbf{D}_e$ is a constitutive matrix and $\mathbf{B}_e$ is a differential operator that maps the nodal-displacement vector to the strain vector. This local stiffness matrix is not rotation-invariant; therefore, we determine the local rotation using Equation 3.10.
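As a compact summary of the local phase, the sketch below computes the best-fit rotation of Equation 3.9 via SVD and the closed-form scaling of Equation 3.11 for a single unit in the rotation-invariant case. It operates on explicit edge vectors and caller-supplied per-edge weights (e.g. the squared cotangent weights of Equation 3.13); it is a minimal illustration under those assumptions rather than the implementation used in this thesis.

```python
import numpy as np

def asap_local_step(edges_rest, edges_def, w):
    """Local phase for one unit U_n in the rotation-invariant case.
    edges_rest/edges_def: (k,3) rest and deformed edge vectors (v_i - v_j)
    and (v'_i - v'_j); w: (k,) per-edge weights supplied by the caller."""
    # best-fit rotation (Eq. 3.9) via SVD of the weighted edge covariance
    S = (edges_rest * w[:, None]).T @ edges_def          # 3x3 covariance
    U, _, Vt = np.linalg.svd(S)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                             # guard against reflections
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    # closed-form scaling update (Eq. 3.11)
    num = float(np.sum(w * np.einsum("ij,ij->i", edges_def, edges_def)))
    den = float(np.sum(w * np.einsum("ij,ij->i", edges_def, edges_rest @ R.T)))
    return R, num / den

# a unit whose edges were rotated 90 degrees about z and uniformly scaled by 2
rest = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
deformed = 2.0 * rest @ Rz.T
R, s = asap_local_step(rest, deformed, np.ones(3))
print(np.round(R, 3), round(s, 3))                       # recovers Rz and s = 2
```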
3.1.4 Structure-Preserving Free-Form Deformation

We embed the template shape into a B-spline FFD virtual grid as in [126]. We then deform the shape by manipulating a lattice of control points in a continuous deformation field. The deformed position $\mathbf{v}'$ of an arbitrary embedded point $\mathbf{v}$ is determined by:
$$\mathbf{v}' = \sum_{i=0}^{n}\sum_{j=0}^{m}\sum_{k=0}^{l} N_{i,p}(u)\,N_{j,q}(v)\,N_{k,r}(w)\,\mathbf{p}_{ijk}, \qquad (3.16)$$
where $\mathbf{p}_{ijk}$ is the control point with lattice index $i,j,k$; $n$, $m$, $l$ are the total numbers of control points in each direction; and $N_{x,y}(\cdot)$ denotes the $x$-th basis function (of degree $y$) of the 1D B-spline. The parametric coordinates $u, v, w$ of an arbitrary point are found using the Newton-Raphson method, which searches within the bounds determined by the knot span to which $\mathbf{v}$ belongs. In order to reduce the computational cost, we choose linear basis functions. In compact form, Equation 3.16 can be written as:
$$\mathbf{v}' = \mathbf{B}\mathbf{p}, \qquad (3.17)$$
where $\mathbf{B}$ is a sparse matrix (local support property [120]) that contains all the basis functions, and $\mathbf{p}$ is the concatenated vector of the control points.

To avoid undesired distortion, we regularize the FFD grid deformation using its first-order derivatives, which define a global smoothness constraint:
$$E_{smooth} = \int_\Omega \|\nabla d_x\|^2 + \|\nabla d_y\|^2 + \|\nabla d_z\|^2\, dV, \qquad (3.18)$$
where $\nabla$ is the gradient operator and $d_x$, $d_y$, $d_z$ are the displacements in the three canonical orthogonal directions. Using the variational principle, $E_{smooth}$ can be minimized by changing the positions of the FFD control points.

The global smoothness constraint in Equation 3.18 is not shape-aware; hence it cannot maintain the regularity of the individual embedded mesh structures. We introduce shape-aware local rigidity by coupling our ASAP energy (Equation 3.6) with the FFD (Equation 3.17):
$$E_{shape}(M) = \sum_n\left(\frac{1}{s_n}\tilde{\mathbf{R}}_n^T\hat{\mathbf{B}}_n\mathbf{p} - \mathbf{x}_n\right)^T\mathbf{K}_n\left(\frac{1}{s_n}\tilde{\mathbf{R}}_n^T\hat{\mathbf{B}}_n\mathbf{p} - \mathbf{x}_n\right) = \mathbf{p}^T\Bigl[\sum_n \frac{1}{s_n^2}\hat{\mathbf{B}}_n^T\hat{\mathbf{K}}_n\hat{\mathbf{B}}_n\Bigr]\mathbf{p} - 2\,\mathbf{p}^T\Bigl[\sum_n \frac{1}{s_n}\hat{\mathbf{B}}_n^T\hat{\mathbf{K}}_n(\tilde{\mathbf{R}}_n\mathbf{x}_n)\Bigr] + \sum_n \mathbf{x}_n^T\mathbf{K}_n\mathbf{x}_n, \qquad (3.19)$$
where $\hat{\mathbf{B}}_n$ is the concatenated matrix of the basis functions $\mathbf{B}$ (in Equation 3.17), so that $\mathbf{x}'_n = \hat{\mathbf{B}}_n\mathbf{p}$, and $\hat{\mathbf{K}}_n$ is the warped stiffness matrix, $\hat{\mathbf{K}}_n = \tilde{\mathbf{R}}_n\mathbf{K}_n\tilde{\mathbf{R}}_n^T$. The last term is a constant. Equation 3.19 gives the deformation energy of the embedded objects, which is controlled by the positions of the FFD control points. Therefore, the shape-preserving energy $E_{shape}(M)$ contributes a local stiffness for the mesh structure (or component) $M$, which makes the FFD volume shape-aware. The overall shape-preserving energy $E_{shape}$ in Equation 3.2 is formulated as the weighted sum of the energies of the individual mesh components, i.e. $E_{shape} = \sum_M w_M E_{shape}(M)$. For the reference components that have complete target geometry, we set the weights $w_M$ to 1.0; for the components that do not have target information, we set the weights $w_M$ to 3.0 to enforce stronger morphological constraints.

Similar to the composite FFD proposed by [126], a coarse-to-fine strategy is applied to avoid over-fitting: the FFD starts from a coarse grid, and gradually increases its resolution until a satisfactory error distance is obtained. At each level, when no significant energy decrease is obtained by moving the control points, the FFD grid is re-meshed with additional degrees of freedom (as shown in Figure 3.3(e) and (f)). The pose of the FFD grid in world coordinates is initialized to align with the three principal directions of the embedded objects.
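For intuition about the deformation model itself, the following sketch evaluates Equation 3.16 with the linear basis functions adopted here. For simplicity of the example it assumes a regular, axis-aligned control lattice, so that the parametric coordinates are available in closed form rather than via Newton-Raphson; the function and variable names are illustrative only.

```python
import numpy as np

def ffd_deform(point, ctrl, origin, spacing):
    """Evaluate Eq. 3.16 with linear (degree-1) B-spline basis functions on a
    regular, axis-aligned lattice (an assumption made for this sketch).
    ctrl: (nx, ny, nz, 3) control-point positions; point: (3,) embedded point."""
    u = (np.asarray(point, dtype=float) - origin) / spacing   # lattice coordinates
    i0 = np.clip(np.floor(u).astype(int), 0, np.array(ctrl.shape[:3]) - 2)
    t = u - i0                                                # local coords in [0,1]^3
    out = np.zeros(3)
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((1 - t[0]) if di == 0 else t[0]) \
                  * ((1 - t[1]) if dj == 0 else t[1]) \
                  * ((1 - t[2]) if dk == 0 else t[2])         # product of linear bases
                out += w * ctrl[i0[0] + di, i0[1] + dj, i0[2] + dk]
    return out

# identity lattice: control points sit at their own grid positions -> no deformation
nx, ny, nz, h = 4, 4, 4, 1.0
grid = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                            indexing="ij"), axis=-1).astype(float) * h
print(ffd_deform([1.3, 2.1, 0.7], grid, origin=np.zeros(3), spacing=h))
# -> [1.3 2.1 0.7]; displacing control points then moves embedded points smoothly
```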
The initial state of the global smooth energy (Equation3.18) is restored when the FFD grid goes to the next level, while accumulat-ing the shape energy (Equation 3.19) of each embedded object. This strat-egy allows large deformations while maintaining the regularity of embeddedshapes. In order to avoid spatial folding, we set the maximal displacement443.1. Multi-Structure Registrationof each control point as one third of the FFD grid spacing [27]; this leads toa homeomorphism: bijective, continuous, non-folding mapping.3.1.5 ExperimentTo demonstrate the performance of our registration method, we arranged asimple hybrid registration problem that involves both volumetric and surfacemeshes with sparse correspondences. The initial setting is illustrated inFigure 3.6(a). The red box surface is (the only) reference component in thisexperiment, and the blue sphere is its target surface. All the componentsplaced at the bottom, including two volumetric meshes and two surfacemeshes, are expected to deform along with the red box.As described in Section 3.1.1, we first established the correspondencebetween the red box and the blue sphere. Then, our proposed structure-preserving FFD (based on linear B-spline volumes) was used to deform allthe source components enforcing the correspondence constraints. The finalresult is illustrated in Figure 3.6(b). We compared our method with thediffeomorphic FFD (based on cubic B-spline volumes) proposed by [126].The resulting configuration by their method is shown in the Figure 3.6(c). Asit can be seen in the figure, both methods preserved the spatial relationshipbetween source components; our method maintained the shape regularity ofthe meshes, while the diffeomorphic FFD caused excessive mesh distortion.Figure 3.7 and 3.8 show the registration error and mesh quality overthe course of 300 iterations. The registration error is calculated as thesurface distance between the reference component (red box) and its targetsurface (blue sphere). The mean values of the registration error is similarbetween the two methods. Our method shows larger maximum error, since itcompromises some correspondence constraints to preserve shape regularity.We calculate the mesh quality (for both volumetric meshes and surfacemeshes) using a shape-measure metric: mean ratio (η) [70, 89]. Shape-measure metrics are invariant under translation, rotation, reflection anduniform scaling of the element; they attain the maximum (1.0) when theelement is in ideal configuration, and minimum (0.0) when the element isdegenerated [76]. For both the volumetric and surface meshes, our methodshows considerably better mesh quality. The grid resolution increases withthe number of iterations which, in turn, provides higher degrees of freedomto both methods. Our method is able to recover the mesh quality aftersufficient iterations (i.e. around 50 for ηmean, and 175 for ηmin). However,since the diffeomorphic FFD is not shape-aware, it fails to recover the meshquality.453.1. Multi-Structure Registration(a) Initial setting(b) Proposed method(c) DiffeomorphicFigure 3.6: Illustration of the registration performance on the syntheticdata: (a) the initial setting; the red box and the blue sphere are the refer-ence and target surfaces. Four other meshes – two volumetric FE meshesand two triangular surface meshes (yellow) – are expected to deform alongwith the red box. (b) result of our proposed method; both the inter- andintra-component regularities are preserved. 
(c) result of the diffeomorphicregistration [126]; the method preserves the spatial relationship between thesource components, but generates excessive mesh distortion, e.g. the ele-ments located at the pointed region are excessively warped, which does nothappen in the result by our method.463.1. Multi-Structure Registration(a) Registration error (b) Registration errorFigure 3.7: Registration error (surface distance) measured over the coursesof 300 iterations. Our results (in solid lines) are compared with the dif-feomorphic registration by Rueckert et al. [126] (in dashed lines): (a) theaverage registration error (Errormean); and (b) maximum registration error(Errormax).(a) Mesh quality (mean value) (b) Meah quality (minimum value)Figure 3.8: Mesh quality (η) measured over the courses of 300 iterations.Our results (in solid lines) are compared with the diffeomorphic registrationby Rueckert et al. [126] (in dashed lines): (a) the average mesh quality(ηmean); and (b) the minimum mesh quality (ηmin).473.2. Subject-Specific Modelling3.2 Subject-Specific ModellingFigure 3.10 shows the framework for subject-specific modelling of upper-airway complex as we move from the medical images, and the hybrid tem-plate model, to creating and simulating our subject-specific models. Ourmedical data has been briefly introduced in Chapter 2; more details are givenin Section 3.2.1. We revisit the comprehensive template model (FRANK)and briefly outline its major features in Section 3.2.2. As shown in thefigure, we morph the template into the subject-space in three steps: Corre-spondence establishment, anatomy and functionality transfer. The detailsof this framework are described in Section 3.2.33.2.1 Medical Image DataWe build subject-specific models based on two volumetric medical images,shown in Figure 3.9.CT Dataset : Volumetric images of a male subject during a swallowwere captured in a single shot using a 320-detector-row CT scanner (ToshibaMedical Systems, Otawara, Japan) [67]. The spatial resolution of the imagesis 0.5mm × 0.5mm × 0.5mm, and the temporal resolution is 10 Hz. Thisdataset has 21 time frames in total. The subject was seated on the chair in asemi-reclining position at an angle of about 45◦. We morphed the FRANKtemplate into the last frame of the dynamic CT scans, where the subjectand the FRANK are in the similar posture, as shown in Figures 3.9(a)–(c).MR Imaging Dataset : An atlas image of the human head-and-neckwas constructed using 3D MR images captured on 20 normal subjects inthe supine position [157]. All MRI scanning was performed on a Siemens3.0T Tim Trio system (Siemens Healthcare, Inc., Malvern, PA, USA). AT2-weighted Turbo Spin Echo sequence with an echo train length of 12and TE/TR of 62ms/2500ms was used. The dimensions of the 3D MRimages are 255 × 255 × z (where z ranges from 10 to 24), with 0.94mm ×0.94mm in-plane resolution and 3mm slice thickness. The resulting atlas isan isotropic volume with spatial resolution of 0.94mm. Detailed descriptionof the involved image processing techniques can be found in [157]. The MRatlas image is shown in Figures 3.9(d)–(f).483.2. 
Subject-Specific Modelling(a) CT – Mid-sagittal (b) CT – Axial (c) CT – Coronal(d) MRI – Mid-sagittal (e) MRI – Axial (f) MRI – CoronalFigure 3.9: Medical images used in this study: CT (a–c) and MRI (d–f).3.2.2 Template ModelIn order to simulate complex motions coordinating multiple upper-airway or-gans, we adopt the Functional Reference ANatomical Knowledge (FRANK)as the template model for subject-specific modelling of upper-airway com-plex. As shown in Figure 2.5, FRANK couples 5 FE models (tongue, softpalate, pharynx, larynx and face), 3 rigid-body-represented bones (jaw, hy-oid and maxilla), 7 rigid-body-represented laryngeal cartilages and a numberof spring-like structures (muscles and ligaments) in a modularized approach.FRANK components attach with each other by several types of bilateralconstraints. The maxilla is anchored in space. The jaw is connected withthe maxilla by the temporomandibular joint that is modelled as three con-straint planes; these planes limit the lateral motion of the jaw and constrainit to follow a pre-defined arc when opening and closing. FE components areattached to other FE components or rigid bodies using nodal attachments.Muscles are approximated as point-to-point Hill-type actuators that may at-493.2. Subject-Specific Modellingtach to a rigid body or pass through a FE body. The air-tight surface wrapsover, and attaches to the FE models and rigid-bodies to create a parametricupper-airway mucosa layer. The details of FRANK have been introduced inSection 2.2.13.2.3 Modelling FrameworkSubject-specific modelling by registration strives to find a deformation fieldwhich morphs a biomechanical template model to a target dataset, whilepreserving the morphology and functionality of the model. Here, the tem-plate model is hybrid, i.e. it consists of components with different geometricdiscretization; and the target is represented in the form of surfaces, pointclouds, or landmarks extracted from medical images.It is often hard to obtain complete correspondence information betweenthe medical images and the template. We refer to all the rigid bodies andFE models of the template model as the Source Components (SC). Someof these components (such as the jaw and hyoid in CT registration) haveexplicit correspondences to the image (i.e. target positions); these compo-nents are referred to as the Reference Components (RC). List of RC and SCcomponents is included in Table 3.2.6.This method is a three-step sequential procedure, as shown in Figure3.10. The first two steps are similar to the multi-structure registration pro-cedure proposed in Section 3.1. The last step is to update the auxiliarycomponents, such as various bilateral constraints and physical properties,and keep them relevant to the new geometry.Step 1. Correspondences Establishment: The first step is to establishthe correspondences between the template model and the subject data, werefer to them as C(RC,Seg). We find C(RC,Seg) using the correspondenceestablishment method introduced in Section 3.1.1.Step 2. Anatomy Transfer: In the second step, using the deformation-transfer method introduced in Section 3.1.1, we generate a smooth, home-omorphic, shape-aware mapping, which transfers the geometry of the tem-plate organs from generic space to subject space based on the correspondenceconstraints C(RC,Seg). 
Since the mapping function is topology preserving,we subdivide the hybrid model into subgroups if topology changes are neededduring registration; in each subgroup, the mapping function preserves thespatial relationship between the organs and maintains their regularity.503.2. Subject-Specific ModellingFigure 3.10: Proposed framework for subject-specific modelling and simula-tion of the upper-airway complex.To maintain the functionality of the model, we add an element-quality-controlling energy into the anatomy transfer process to prevent the poor-quality elements (of the template FE meshes) from further degrading, asdescribed in Section 3.2.4.Our anatomy transfer method is able to maintain the spatial relation-ship between adjacent components of the template model. However, whenthere exists a large morphological difference between the template and sub-ject, the resulting geometry can mismatch with the subject data (see Figure3.11(a)). Disparity in morphology can influence the functional behavioursof the subject-specific model by inducing false muscle line directions or jointlocations, which may affect the validity of the clinically relevant simulation.To maintain the registration accuracy, we allow landmarks (picked manu-ally) to be added in order to guide and correct the deformation. Figure3.11(b) illustrates the registration result where the nose and the lower partof the pharynx are corrected by adding landmarks that guide the anatomytransfer process.Step 3. Functionality Transfer: In the final step, we use an approach totransfer the functional information of the template model, such as the mor-phology of muscles or the joint locations, to the registered subject-specificmeshes. Our functionality transfer methods, as described in Section 3.2.5,513.2. Subject-Specific Modelling(a) Without Landmarks (b) With LandmarksFigure 3.11: The registration result where the nose and the lower part of thepharynx are corrected by adding landmarks that guide the anatomy transferprocess.are automated, and do not require additional manual effort.3.2.4 FE Quality ControlOur shape-aware FFD can maintain the overall mesh quality during anatomytransfer. However, when poor quality sub-domains exist in the template FEmeshes, their quality can degrade even more so after registration. This sit-uation may result in invalid elements, as shown in Figure 3.2.4. Severalmesh untangling and quality improvement techniques have been previouslyproposed [36, 76, 77]; but they impose no constraints on reallocation of thenodes, which may cause loss of the registration accuracy. [19] proposeda Jacobian based relaxation procedure to recover element regularity andimprove element quality after registration. However, this Mesh-Match-and-Repair (MMRep) algorithm only maintains a minimum mesh quality whichmay not be sufficient to maintain FE stability under large deformation dur-ing simulations. Moreover, the nodal test based on Jacobian metric, used byMMRep may fail to detect inverted elements [75] and thus cause the repairto fail.523.2. Subject-Specific ModellingFigure 3.12: Example of element quality distribution of a FE mesh beforeand after registration. The dashed line represents the mesh quality dis-tribution in rest configuration. After registration, the mesh quality levelis decreased by the resulting deformation. 
A wide distribution producesinvalid elements (round dot dash line); a narrow distribution (solid line),maintains the quality of every elements above an acceptable level.(a) (b)Figure 3.13: Illustration of quality-controlling mechanism. (a). for eachelement, we first find its ideal configuration by transforming the referenceconfiguration to make the best fit with the current configuration; then a forcefe is applied to drive the element deforming to its ideal configuration. (b).The magnitudes of the forces fe and weights we are Gaussian-distributionfunctions of the element quality. The standard deviation of the weight func-tions σw is set to 0.25. The standard deviation of the force function σf isset to λ/2, where λ is a value so that ten percent of elements have qualityvalues smaller than it; we limit the maximum value of λ as 0.5.In order to avoid local irregularity, we aim at preserving and improving533.2. Subject-Specific Modellingthe element qualities in every step of our registration. A force-based element-quality constraint is added into the anatomy transfer process, which mapsthe actual shapes of the poor quality elements to their ideal configurations.As shown in Figure 3.13, a body-force fe is applied to the e-th element. Wedefine the mesh-quality-controlling energy Equality as the summation of themechanical potential energy (based on the principle of virtual work) [78] ofeach element:Equality =∑ewe∫Ωeσe : e − fe · ue dV (3.20)where∫Ωeσe : edV is the mechanical strain energy of a single element; ueis the displacement filed, and fe is the body-force per unit volume, which isdefined as:fe =fesIeVR(xIe − xe)xIe = sIeRIexRe + tIe,(3.21)where VR is the volume of the reference element; xRe is a point in the refer-ence element frame, and sIe, RIe, tIe are scaling factor, rotation, translationrespectively, which transform the reference element to its target configura-tion (i.e. ideal element). The target configuration of the e-th element isdetermined by minimizing∫Ωe‖ fe ‖2dV . The magnitudes of the weights weand forces fe are defined as Gaussian-distribution functions (Figure 3.13(b))of the element-quality values; the poor quality elements are assigned largeweights and strength to prevent them from further degrading, thus leadingto a uniform quality distribution across the FE mesh (solid line in Figure3.2.4). We use the mean-ratio value [70, 88] as the mesh-quality metric.3.2.5 Functionality TransferAs aforementioned in this chapter, in order preserve the functionality ofthe registered model, the functional information, such as the position ofthe muscle attachments, the locations of joints and the physical properties,should stay relevant to the new model geometry.Muscle attachments: In biomechanical models, muscles and ligamentsare either embedded into FE components (soft-tissues) or attached to therigid bodies (bones, cartilages). In the first case, after the FE mesh ismorphed into the subject space, we transform the embedded muscles usingthe FE interpolation functions (i.e. shape functions). In the second case,543.2. Subject-Specific Modellingwe apply a surface-oriented FFD [132] to transform the muscle attachmentpoints. 
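The first case amounts to re-evaluating the element shape functions at the natural coordinates stored when the muscle points were embedded in the template. A minimal sketch is given below, assuming trilinear hexahedral elements and a particular corner ordering; both the ordering and the function names are illustrative assumptions, not necessarily the conventions of the template model.

```python
import numpy as np

# Corner signs of the reference hexahedron in natural coordinates (xi, eta, zeta) in [-1, 1]^3.
HEX_CORNERS = np.array([[-1, -1, -1], [ 1, -1, -1], [ 1,  1, -1], [-1,  1, -1],
                        [-1, -1,  1], [ 1, -1,  1], [ 1,  1,  1], [-1,  1,  1]], dtype=float)

def trilinear_shape_functions(xi, eta, zeta):
    """Standard trilinear shape functions N_k of an 8-node hexahedron."""
    return 0.125 * (1 + HEX_CORNERS[:, 0] * xi) * \
                   (1 + HEX_CORNERS[:, 1] * eta) * \
                   (1 + HEX_CORNERS[:, 2] * zeta)

def transfer_embedded_points(natural_coords, element_of_point, deformed_nodes, elements):
    """Re-evaluate embedded muscle points on the registered (deformed) FE mesh.

    natural_coords   : (M, 3) stored (xi, eta, zeta) of each embedded point
    element_of_point : (M,) index of the hexahedral element containing each point
    deformed_nodes   : (N, 3) registered nodal positions
    elements         : (E, 8) hexahedral connectivity
    """
    new_positions = np.empty((len(natural_coords), 3))
    for m, (xi, eta, zeta) in enumerate(natural_coords):
        nodes = deformed_nodes[elements[element_of_point[m]]]   # 8 x 3 deformed corners
        N = trilinear_shape_functions(xi, eta, zeta)            # 8 shape-function values
        new_positions[m] = N @ nodes                            # interpolated world position
    return new_positions
```

Returning to the second case, the surface-oriented FFD is applied as follows.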
The transformation preserves the barycentric coordinates in thesurrounding polymesh faces; thus, the locations of the muscle attachmentsare determined by the local geometry of their associated rigid bodies.Component attachments: The coupling attachments – which areused to connect adjacent components – need to be reconstructed to main-tain the functionality of the subject-specific model. Based on the typesof parametrization of organs within the hybrid model, we summarize thecoupling attachments into three categories, as follow:• Rigid body – Rigid body: The pose of the joints between therigid-body pairs are transformed (rotated and translated) to fit withthe deformed geometry; rotations and translations are defined basedon the rigid transformations of the vertices that surround the joints.• FE model – FE model: The proposed anatomy–transfer methodpreserves the embedding topology (i.e. the boundary connections be-tween the FE models are maintained); therefore the FE-FE attach-ments are transferred to the deformed geometry without any addi-tional re-designing efforts.• FE model – Rigid body: The locations of the node-attachments areupdated to the deformed positions of the corresponding nodes directly.Physical properties: We translate the center of mass of the rigid-bodies using the centroid displacements of their associated meshes. Weassume that the densities of organs are consistent between different subjects;therefore, the mass of each component is re-calculated according to theirvolumes. Based on the assumption that the template model and the subjectdata are in the same posture, we proportionally scale the morphologicalparameters (i.e. rest length and the optimal fascicle length) of the musclesand the ligaments according to their length change.3.2.6 ExperimentsWe evaluate the proposed functionality-preserving non-rigid registration methodon two 3D medical Images (CT and MRI) of the human head and neck. Thetemplate model of upper-airway complex (FRANK) has been introduced inSection 2.2.1. We extracted the segmentation for RC from the CT and MRIimages. The segmentation information is summarized in Table 3.2.6. Thesegmentation results are shown in Figure 3.14.553.2. Subject-Specific ModellingOrgans (RC) Dataset Method Geometry ResultBones CT Auto CompleteTongue CT Manual CompleteSoft-palate CT Manual PartialEpiglottis CT Manual PartialAirway CT Auto -Bones MRI Manual PartialTongue MRI Manual CompleteAirway MRI Auto -Table 3.1: Summary of segmentation. Bones include jaw, maxilla and hyoid.In section 3.2.6, we compare the performance of our method with a state-of-art FE mesh registration technique, Mesh-Match-and-Repair[18], in tworegistration scenarios with complete and partial correspondences. In Section3.2.6, we present our subject-specific models of upper-airway complex.Registration of the FE modelsMesh-quality of FE models is essential for biomechanical simulation; badmesh-quality will cause inaccuracy and instability [21]. 
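Throughout this chapter, element quality is reported with the mean-ratio metric η. For reference, the sketch below shows one common corner-based form of this metric for hexahedral elements, which evaluates to 1 for a perfect cube and 0 when any corner Jacobian is degenerate or inverted; the exact normalization in [70, 88] may differ, and the corner ordering is an assumption carried over from the sketch above rather than the convention of the template meshes.

```python
import numpy as np

# Local indices of the three edge neighbours of each hexahedron corner
# (corner ordering as in the trilinear element sketch above).
CORNER_NEIGHBOURS = [(1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
                     (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3)]

def mean_ratio_hex(corners):
    """Corner-averaged mean-ratio quality of an 8-node hexahedron.

    'corners' is an 8 x 3 array of vertex positions. Returns a value in [0, 1]."""
    qualities = []
    for k, (a, b, c) in enumerate(CORNER_NEIGHBOURS):
        # Columns are the three edge vectors emanating from corner k.
        A = np.column_stack([corners[a] - corners[k],
                             corners[b] - corners[k],
                             corners[c] - corners[k]])
        det = np.linalg.det(A)
        if det <= 0.0:                      # degenerate or inverted corner
            return 0.0
        qualities.append(3.0 * det ** (2.0 / 3.0) / np.sum(A ** 2))
    return float(np.mean(qualities))
```

The Gaussian weights and forces of Figure 3.13(b) are functions of exactly this kind of per-element quality value.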
We validate the pro-posed FE mesh registration method (SPFFD) by comparing with MMRepalgorithm in two different cases:• Complete organ geometry can be retrieved from the medical data: thetemplate tongue mesh is registered to the tongue surfaces segmentedfrom the CT and MRI dataset.• Partial or adjacent organ geometry is available: the airway surface inthe template model was used as the reference component and is mor-phed onto the airway segmentation (from the CT and MRI dataset).As the source component, the template pharynx FE mesh is deformedalong with the template airway surface.We used the same correspondence-search algorithm (single-direction ICP)in the evaluation process. We rigidly aligned the FE model and the targetsurface as the initial setting for both our proposed algorithm and MMRep.Then, the two registration methods were applied to generate subject-specificFE meshes. We set the stop criteria as 0.45mm mean surface distance, whichis smaller than the voxel-size in both images.563.2. Subject-Specific Modelling(a) Bones (CT) (b) Soft-tissues and Airway(CT)(c) Bones (MRI) (d) Tongue and Airway(MRI)Figure 3.14: Segmentation of the CT and MRI images. Some segmentationsurfaces are set to be transparent to illustrate the covered components. (a).Segmentation of bones from the CT image, including complete geometryof jaw, maxilla and hyoid. (b). Segmentation of soft-tissues and airwayboundary from the CT image, including complete geometry of tongue andsoft-palate, partial geometry of epiglottis. (c). Segmentation of bones fromthe MRI image, including partial geometry of jaw, maxilla and hyoid. (d).Segmentation of complete geometry of tongue and airway from the MRIimage.573.2. Subject-Specific ModellingThe registration results are shown in Figures 3.15 and 3.16. In Fig-ure 3.2.6, we illustrate the element-quality histograms of the deformed FEmeshes. Our method produced more uniform element qualities (narrowerdistribution) over the resulting mesh. Table 3.2 and 3.3 summarizes theperformance of the two registration procedures for tongue and pharynx re-spectively. When the two methods achieved similar registration accuracy(mean value of surface distance), our method has smaller standard deviation.Notably, for both image datasets, the average and minimum mean-ratio val-ues of the resulting models from our method are considerably larger thanthe MMRep results (Figure 3.17). Compared to the template tongue andpharynx model, the average and the minimum mean-ratio values (ηmean)have increased with our method. Although the MMRep method holds theJacobian-Ratio values (JR) of the resulting meshes above a threshold (0.03),it fails to repair the elements with small mean-ratio value (η < 0.1). Wecompare the resulting pharynx geometry with the template model (as shownin Figure 3.16(a) – 3.16(f)); when the available correspondence is sparse, ourresults have smaller morphological change from the template than than theMMRep results.583.2. Subject-Specific Modelling(a) CT – SPFFD (b) CT – MMRep(c) MRI – SPFFD (d) MRI – MMRepFigure 3.15: Registration results of tongue model.FE Mesh Dmean ± σ(mm) ηmean(%) ηmin(%) JRmin(%)Template - 48.72 0.00 0.54CT – SPFFD 0.33± 0.35 58.60 26.96 6.25CT – MMRep 0.44± 0.67 44.85 0.00 3.32MRI – SPFFD 0.36± 0.36 57.93 19.58 2.73MRI – MMRep 0.39± 0.65 46.82 0.00 3.35Table 3.2: Summary of tongue FE mesh. Dmean and σ are the mean surfacedistance and the standard deviation. ηmean and ηmin are the mean andminimum mean-ratio value [70]. JRmin is the minimum Jacobian ratio value.593.2. 
Subject-Specific Modelling(a) Template (b) CT – SPFFD (c) CT – MMRep(d) Template (e) MRI – SPFFD (f) MRI – MMRepFigure 3.16: Registration results of pharynx model. The template and theregistered pharynx aligned with the subject airway meshes are shown. Whenavailable correspondence is sparse, morphological deviation (from the tem-plate geometry) of our results is smaller than deviation of MMRep results.FE Mesh ηmean(%) ηmin(%) JRmin(%)Template 79.05 35.71 0.34CT – SPFFD 80.00 50.59 38.62CT – MMRep 73.29 34.03 30.11MRI – SPFFD 79.45 49.74 33.80MRI – MMRep 74.03 25.57 16.14Table 3.3: Summary of pharynx FE mesh. ηmean and ηmin are the meanand minimum mean-ratio value [70]. JRmin is the minimum Jacobian ratiovalue.603.2. Subject-Specific Modelling(a) Tongue Models(b) Pharynx ModelsFigure 3.17: Comparison of the mesh quality by our methods and MMRep.The bars with solid outlines and dash lines illustrate the mean mean-ratiovalues (ηmean) and minimum mean-ratio values (ηmin) of the resulting FEmodels respectively. Figure (a) compares the element qualities of the tonguemodels by two methods; the mean-ratio values (ηmean and ηmin) have beenlisted in Table 3.2. Note that the minimum mean-ratio values by our methodare 0.0. Figure (b) compares the element qualities of the pharynx modelsby two methods; the mean-ratio values (ηmean and ηmin) have been listed inTable 3.3.61(a) TongueCT – SPFFD (b) TongueCT – MMRep (c) TongueMRI – SPFFD (d) TongueMRI – MMRep(e) PharynxCT – SPFFD (f) PharynxCT – MMRep (g) PharynxMRI – SPFFD (h) PharynxMRI – MMRepFigure 3.18: Histograms of the element qualities (mean ratio) over tongue and pharynx FE meshes. TongueCTand TongueMRI are the results from CT and MRI respectively; PharynxCT and PharynxMRI are the results fromCT and MRI respectively.623.2. Subject-Specific ModellingGroups RC SCCT1 tongue, soft-palate, airway tongue, soft-palate, pharynxCT2 airway, epiglottis, pharynx larynx, laryngeal cartilagesCT3 jaw, maxilla, hyoid, tongue, thyroid jaw, maxilla, hyoid, faceMRI1 tongue, airway tongue, soft-palate, pharynxMRI2 tongue, airway, pharynx larynx, laryngeal cartilageMRI3 jaw, maxilla, hyoid, tongue, thyroid jaw, maxilla, hyoid, faceTable 3.4: Subdivision of FRANK.Registration of FRANKFor each dataset, we divided the FRANK model into three sub-groups andperformed the registration to the segmented surfaces in each subgroup (fromFigure 3.14) separately. The RC and SC of each group are illustrated inTable 3.2.6. Because the face and larynx models do not connect to other FEmodels, this sub-division scheme does not affect the functionality transferprocess described in Section 3.2.5.The registration results, Model 1 (from CT dataset) and Model 2 (fromMRI dataset) are illustrated in Figure 3.20. Registration errors are shownin Table 3.2.6. For each RC, our method achieved submillimeter registrationaccuracy (mean surface distance). Larger registration errors were localizedaround absent or ill-defined features, such as the temporal process in the CTimage (which is absent in the template model), shown in Figure 3.14(a)),the vocal fold which is invisible in the MRI image and the lateral boundarybetween the tongue and pharynx. 
In these areas, the closest point correspon-dences were inconsistent with the structure-preserving constraints (definedby Equation 3.19); as a result, the local rigidity prevented the points in theseareas from converging to bad positions, thus leaving the larger registrationerrors.The histograms of the mesh quality of the template and the resulting FEmodels are shown in Figure 3.21. The mesh-quality information is also sum-marized in Table 3.2.6. Our method showed similar performance on the twodatasets. After registration, the average element qualities of the resultingFE models were similar to the values of the template(Figure 3.22(a)). Thenumbers of elements whose mean-ratio values were smaller than 0.1 were re-duced(Figure 3.22(b)). For tongue and pharynx, the minimum element qual-ities were noticeably increased. However, our methods are unable to improvethe poor-quality elements located at the anatomical sub-regions that havehigh-curvature or fine geometric details. For example, Figure 3.19 illustrates633.2. Subject-Specific ModellingFigure 3.19: Illustration of poor-quality elements in larynx and face. Thetop row shows the mean-ratio values and their corresponding colors. Arrowspoint out the poor-quality sub-regions.poor-quality elements in template larynx model. Since our method main-tained and improved the mesh qualities compared the template model, theresulting subject-specific models demonstrated comparable stability duringbiomechanical simulation.Since the CT and MRI dataset were captured in different postures (asdescribed in Section 3.2.1), the rest configurations of the tongue are notice-ably different in Model 1 and Model 2. During the scanning process, thesubject in the CT data was lying backwards at an angle of 45◦. In contrast,the MRI dataset was captured in the supine position; hence the tongue de-formed towards the hard and soft palate due to the effect of gravity. As ourfunctionality-transfer methods assume the subjects in the medical data arein stress-free configurations, the external-loading influence on the biologicalstructures are neglected.For the purpose of examining the functionality of the two subject-specificmodels, we performed forward and inverse simulations on three simple speechpostures and compared the results with similar simulations on the templatemodel.643.2. Subject-Specific Modelling(a) Model 1 – Mid-sagittal (b) Model 2 – Mid-sagittal(c) Model 1 – Sagittal (d) Model 1 – Axial (e) Model 1 – Coronal(f) Model 2 – Sagittal (g) Model 2 – Axial (h) Model 2 – CoronalFigure 3.20: Subject-specific models of upper-airways complex653.2. Subject-Specific Modelling(a) Tongue – Template (b) Tongue – Model 1 (c) Tongue – Model 2(d) Pharynx – Template (e) Pharynx – Model 1 (f) Pharynx – Model 2(g) Soft-Palate – Template (h) Soft-Palate – Model 1 (i) Soft-Palate – Model 2(j) Larynx – Template (k) Larynx – Model 1 (l) Larynx – Model 2(m) Face – Template (n) Face – Model 1 (o) Face – Model 2Figure 3.21: Histograms of the element qualities (mean ratio) overFE meshes within the three models of upper-airway complex. Figures(a)(d)(g)(j)(m) show the mesh qualities of FRANK model. 
Model 1 andModel 2 are the registration results of the CT and MRI dataset respectively.66Images Metric Jaw Maxilla Hyoid Tongue Soft-palate Epiglottis AirwayCTDmean(mm) 0.30 0.41 0.16 0.74 0.55 0.67 0.86Dmax(mm) 2.14 6.29 0.76 4.67 3.59 2.74 3.81σ(mm) 0.27 0.47 0.13 0.68 0.56 0.49 0.79MRIDmean(mm) 0.39 0.44 0.29 0.62 - - 0.93Dmax(mm) 2.33 2.38 1.40 3.65 - - 5.02σ(mm) 0.34 0.35 0.25 0.62 - - 0.89Table 3.5: Registration errors within the two subject-specific models of upper-airway complex. Dmean, Dmax andσ are the mean surface distance, the maximum surface distances and the standard deviation.Metric Tongue Pharynx Soft-palate Larynx FaceTemplateηmean(%) 48.72 79.05 71.14 63.22 53.88ηmin(%) 0 35.71 0 0 0#(η < 0.1) 20 0 48 121 77Model 1 (from CT)ηmean(%) 55.35 76.90 68.40 61.83 55.72ηmin(%) 20.10 40.07 0 0 0#(η < 0.1) 0 0 10 100 42Model 2 (from MRI)ηmean(%) 55.96 78.26 71.56 62.84 60.63ηmin(%) 29.29 42.30 0 0 0#(η < 0.1) 0 0 10 100 42Table 3.6: Summary of FE mesh qualities. ηmean and ηmax are the mean and maximum mean-ratio value [70].#(η < 0.1) is the number of elements whose mean-ratios value are smaller than 0.1.673.2.Subject-SpecificModelling(a)(b)Figure 3.22: Comparison of the element quality of the resulting models and the template models. Figure (a)illustrates the average element qualities (ηmean) of the each FE components, which have been listed in Table 3.2.6.Figure (b) illustrates the number of elements whose mean-ratio values are smaller than 0.1 (#(η < 0.1)), whichhas been listed in Table 3.2.6.683.2. Subject-Specific ModellingForward Simulation To demonstrate the functionality of the models, weperformed forward simulation using 3 sets of muscle activations for speechproduction – corresponding to the vowels /a/, /i/ and /u/ – which weredefined in [7]. Below we list the muscles involved followed by the percentactivation in parenthesis:• /a/ Formed using tongue and jaw muscles. Tongue: genioglossus mid-dle (0.08), anterior (0.12); hyoglossus (0.15); verticalis (0.05). Jawopening: anterior and posterior digastric, sternohyoid (all at 0.1).• /i/ Formed with tongue, face, and jaw muscles. Tongue: genioglos-sus posterior(0.5), middle (0.02), anterior (0.02); superior longitudinal(0.05); interior longitudinal (0.05); mylohyoid(0.10). Face: risorius(0.05) and zygomaticus (0.05). Jaw closing: temporalis (anterior, mid-dle, posterior), masseter, median pterygoid (all at 0.01).• /u/ Formed using tongue and face muscles. Tongue: styloglossus (0.1).Face: orbicularis oris middle ring (0.35).We fed these activations into the three models, i.e. the template model(FRANK), Model 1 (from CT) and Model 2 (from MRI). Each model wasallowed to settle under gravity for 0.1s, before the muscle activations werelinearly increased to their maximum level at 0.3s. The models settled againwithout further activation changes until 0.4s. The end postures are illus-trated in Figure 3.23. Three models are able to maintain their stability andto produce reasonable motions in response to the muscle activations in theforward-dynamics simulations.Inverse Simulation To demonstrate the inverse simulation capabilities ofthe resulting models, we recreated the posture /a/ by supplying the forward-simulation motion into the inverse solver. The inverse simulation sought tocalculate activations used to attain the posture. We picked 35 trackingpoints on the tongue back surface and 11 tracking points on the jaw. 
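Conceptually, at every time step the inverse solver chooses the activation vector a by trading off target tracking against activation effort and activation change, which can be written schematically as a box-constrained quadratic objective of the form

$$ \min_{0 \le a \le 1} \; w_m \lVert \bar{v}_{\mathrm{target}} - \bar{v}(a) \rVert^2 \;+\; w_a \lVert a \rVert^2 \;+\; w_d \lVert a - a_{\mathrm{prev}} \rVert^2 , $$

where \(\bar{v}(a)\) is the motion of the tracking points predicted by the model for activations \(a\), and \(a_{\mathrm{prev}}\) holds the activations of the previous step. This is a high-level sketch of the trade-off controlled by the weights, not the exact formulation implemented in the solver; the relative magnitudes of the weights determine how aggressively the targets are tracked versus how smooth and economical the predicted activations are.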
Thetracking weight (wm), regularization weight (wa) and damping weight (wd)were set to 1.0, 0.001 and 0.0001 respectively.The resulting postures (from the inverse simulations) were similar towhat is shown in Figure 3.23. The tracking errors – average distances be-tween the tracking points and their corresponding target positions – of thetemplate, CT and MRI models were 0.24 ± 0.09mm, 0.21 ± 0.09mm and0.05± 0.03mm respectively. The predefined forward simulation activationsare compared with the inverse-predicted activations in Figure 3.24. The693.2. Subject-Specific Modelling(a) Template – /a/ (b) Model 1 – /a/ (c) Model 2 – /a/(d) Template – /i/ (e) Model 1 – /i/ (f) Model 2 – /i/(g) Template – /u/ (h) Model 1 – /u/ (i) Model 2 – /u/Figure 3.23: View of end postures of three models, Template (FRANK),Model 1 (from CT dataset) and Model 2 (from MRI dataset)), in vowels(/a/, /i/ and /u/) productions.703.3. Discussion and Conclusionmuscles – whose activation levels were greater than 1% – were consideredas active muscles; other muscles are not shown in Figure 3.24. The threemodels yielded similar predictions of muscle activations for vowel /a/ pro-duction. The active muscles picked by the inverse solver were the same asthe predefined combinations.3.3 Discussion and ConclusionFor the first time, motor-driven subject-specific models of upper-airway com-plex were created. Based on correspondences established between a templatemodel and subject data, we transfer a functional template from the genericspace into a specific subject space. In order to preserve the functionality ofthe template, we seek to maintain three types of regularity:1. Inter-Component Regularity: Maintain the spatial relationship(including connectivity, topology, relative posture and size) betweenmodel components. We apply volumetric B-Splines to interpolate be-tween sparse correspondences so that the model can deform along withthe sub-anatomical-structures whose geometries are available.2. Intra-Component Regularity: Preserve the underlying discretiza-tion structures of the template model during registration. For eachcomponent, we enforce their local transformations to be similarity-invariant.3. Functional Regularity: Keep the functional information (includingcoupling attachments between components, muscle attachments andbiomechanical properties) similar to the template but relevant to thenew model geometry.In order to maintain the inter- and intra-component regularity, we firstpropose a multi-structure registration technique (in Section 3.1), which tendsto superimpose a collection of template meshes (with different types of ge-ometric discretizations) onto certain target dataset (e.g. surfaces or pointcloud), while minimizing their morphological deviation. We performed anexperiment on a synthetic data set and compare the registration perfor-mance with a traditional diffeomorphic registration method [126]. Bothmethods are capable of maintaining the spatial relationship between com-ponents. However, our method demonstrates a better performance in termsof preserving the mesh quality and shape regularity of each individual com-ponent.713.3. Discussion and Conclusion(a) Template (FRANK)(b) Model 1 (from CT dataset)(c) Model 2 (from MRI dataset)Figure 3.24: A comparison of predefined muscle activations (dashed lines)and the inverse-predicted activations (solid lines).723.3. 
Discussion and ConclusionBased on the multi-structure registration technique, we then propose aregistration framework for subject-specific modelling of upper-airway com-plex 3.2. We compared the performance of our methods with a state-of-artFE-model-registration method (MMRep) [18] (in Section 3.2.6). Our ex-periments indicate that our methods outperform MMRep with respect topreserving the FE mesh quality and addressing sparse correspondence onthese particular datasets.We created two subject-specific models of upper-airway complex basedon the CT and MRI dataset (in Section 3.2.6). For every source compo-nents (components that have correspondence), our method achieved sub-millimeter surface-distance-represented optimization accuracy. To demon-strate the functionality of the resulting models, we performed activation-driven simulations to model speech production on three vowels /a/, /i/ and/u/. Our models can maintain their stability during the simulation andproperly respond to the muscle activations and external loadings (such asgravity). The kinematics obtained from the forward simulations of /a/ wereused to drive the inverse simulations. Compared with the template model,our models showed comparable tracking performance and yielded similarmuscle-activation predictions.Our subject-specific registration methods can benefit from several im-provements. Firstly, our free-form-deformation starts from a uniformlyspaced grid at every resolution level. When the registration errors are un-evenly distributed in space, this approach can introduce unnecessary com-putational cost during registration. This problem can be avoided by re-placing FFD with a shape-customized embedded deformation. Secondly,we achieve mesh-quality control during registration, which avoids the loss ofregistration accuracy that happens in the mesh repair step after registration.However, for partial registration, the force-based energy may lead to unre-alistic deformation at positions where geometry is not fully determined bycorrespondence constraints. Future study should explore better mesh qualitypreserving mechanisms. Thirdly, due to the complexity of the organ geom-etry, our template FE model contains poor-quality elements. Redesigningefforts, to improve the initial mesh quality, will as well enhance the numer-ical stability of the registered models. Fourth, our methods can benefitfrom replacing the deterministic template with a statistic atlas, which notonly can represent the average information of human anatomy but also candescribe the inter-subject variations. Replacing the deterministic templatewith a statistic atlas can help with reducing the morphological uncertaintyand increasing our confidence on the registration results when the subject or-gan geometry is partially unknown from the medical data. Wang et al. [152]733.3. Discussion and Conclusionpropose a statistical atlas based subject-specific FE modelling framework.Future work should explore a more generalized framework to accommodatehybrid and modularized biomechanical models, such as FRANK. Fifth, weexpect to move our registration method into the image space in order toreduce or eliminate segmentation efforts.Currently, we assume that our target organ geometries extracted fromthe medical images with similar posture to our template model are in astress-free configuration. Such assumption does not hold in most in vivoconditions, due to the external forces (such as gravity) and tetanic contrac-tions. 
A good approximation of the unloading configuration can help withminimizing the bias of the modelled biomechanics. Vavourakis et al. [151]proposed an inverse analysis method to derive the pressure-free configurationof human aortas, and the gravity-free shape of the female breast. However,their studies only focus on isolated FE models. Further study is needed todetermine the unloading reference configuration of hybrid models (such asFRANK).Finally, our methods do not personalize the elastic properties of soft-tissues. Tissue properties may influence the muscular coordinating patternsof a specific physiological task and are needed for reliable subject-specificmodelling. Measuring the tissue properties in vivo is still an ill-posed prob-lem. Elastography provides a non-invasive way for such measurement. Forexample, Cheng et al. [26] applied magnetic resonance elastography to mea-sure the viscoelastic properities of tongue and soft palate. However, as softtissues exhibit non-linear stress-strain behaviour, tissue properties at largerdeformations cannot be inferred from elastography alone [155]. Future workshould further the investigation of in vivo tissue-property measurement.Our study shows the potential of the proposed registration techniquesfor subject-specific modelling of coupled upper-airway structures. However,careful validation is required for applying subject-specific models to clini-cal usage. From the anatomy view, the consistency between the resultingmodel geometry and subject anatomy is needed to be quantified. From thefunctionality view, the fidelity of the biomechanics and the motor-controlbehaviours of the subject-specific models is also needed to be measured.74Chapter 4Data-Driven SwallowingSimulation: TowardsPostoperative Prediction ofSwallowing BiomechanicsOur primary motivation for developing the subject-specific modelling tools isin their application to biomedicine. Computer-assisted diagnosis and treat-ment of coupled upper-airway functions and dysfunctions requires compre-hensive subject-specific models that coordinate multiple motor systems. Inthe previous chapter we have described a subject-specific modelling methodfor generating holistic models of upper-airway complex for specific individ-uals. In this chapter, we apply one of our developed models to simulatingthe normal swallowing motion, and assess the feasibility of postoperativeprediction of swallowing biomechanics.Swallowing is a rapid and complex sequence of movements which requiresprecise coordination of more than 30 muscles located within the oral cavity,pharynx and larynx [130]. In normal conditions, swallowing is divided intofour primary phases:• Oral Preparatory Phase: food is broken down (via mastication)and a cohesive bolus is formed [94].• Oral Transport Phase: food (bolus) is propelled posteriorly throughthe oral cavity and into the oropharynx [94].• Pharyngeal Phase: food (bolus) transits through the oropharynxand Upper Esophageal Sphincter (UES). Epiglottis is automaticallyclosed to prevent food and liquid to penetrate into the upper airwaysand lungs [130].• Oesophageal Phase: food (bolus) is pushed from the oesophagus tothe stomach.75Chapter 4. Data-Driven Swallowing SimulationSwallowing disorder is manifested in the older patients in the case of stroke,neurodegenerative disease, and dementia [124]. It can also be caused bytreatment of oropharyngeal cancer, such as laryngectomy. 
Swallowing dis-orders are associated with a higher incidence of patient suffering from aspi-ration [123], due to an impairment of the pharyngeal phase: because of adeficiency in the brain or nervous system the epiglottis is not able to preventfood and liquids from penetrating into the pulmonary tract; the accumula-tion of organic matter could lead to an inflammation of the lungs and air-ways [123]. Patients who will not respond or are not eligible for swallowingrehabilitation, the standard treatment is a tracheostomy. In addition, somepatients may also have to be fed via gastrostomy. An alternative optionto the invasive surgical methods for aspiration prevention is intralaryngealprosthesis implantation. For example, a case of a patient with swallowingdisorder (caused by partial laryngectomy) receiving an intralaryngeal pros-thesis is reported in [123]. The deglution videofluoroscopy before and afterintralaryngeal prothesis implantation is shown in Figure 4.1. In order todetermine the size of a such prosthesis in advance, virtual implantation iscreated on a 3D reconstruction from a CT scan of the patient [123], as shownin Figure 2.10. However, no prediction and assessment of the postoperativebolus-transit biomechanics has been accomplished before surgery.In recent years, computational models, as an alternative investigativetool, have been applied to study human swallowing process. Mizunumaet al. [106] and Sonomura et al. [134] apply FE models of human upperairway (as shown in Figure 4.2(a)) to investigate the swallowing of jellyand liquid bolus. Based on manually defined movements of the tongue andpharyngeal wall, they simulate the motions of the rheological models (jellyand fluid bolus) happening during the swallowing. Following Mizunumaet al. and Sonomura et al., Kikuchi et al. [72] use a HMPS model (de-scribed in Section 2.2) to reproduce the swallowing movements recorded inby videofluoroscopy (VF). However, the upper-airway models used in thesestudies do not include bony components (such as hyoid) and muscle drivenstructures. Although their models can assist with the visualization of upper-airway movements, they are unable to reveal the motor control mechanismsof swallowing. In order to investigate the influence of muscle aging on hyoidmotion during swallowing, Tsou et al. [146] build a muscle-driven laryn-gopharynx model based on cadaver data, as shown in Figure 4.2(b). Byapplying the inverse-dynamics simulator to tracking the motion data ex-tracted from VFs, Tsou’s model successfully reproduces the hyoid motionof swallowing within a certain range. More recently, Ho et al. [63] use theoropharyngeal part of FRANK (Section 2.2.1) to simulate swallowing move-76Chapter 4. Data-Driven Swallowing Simulation(a) Before implantation(b) After implantationFigure 4.1: Three successive sequences from videofluoroscopy of deglutitionbefore intralaryngeal prosthesis implantation (a) and after implantation (b).c©Wiley (2016), Raguin et al. [123], adapted with permission. Arrows illus-trate the food bolus penetrating into the trachea of the patient before andafter implantation.77Chapter 4. Data-Driven Swallowing Simulation(a) Model from Sonomura et al. [134] (b) Model from Tsou et al. [146]Figure 4.2: Illustration of two biomechanical models for swallowing sim-ulation reported in previous studies. (a) FE model of upper-airway com-plex from Sonomura et al [134]. 
This model does not include bony compo-nents (such as hyoid) and muscle driven structures; the organ movementsare driven by manually defined boundary conditions and external forces.c©Wiley (2011), Sonomura et al [134], adapted with permission. (b) Cou-pled biomechanics model of laryngopharynx from Tsou et al. [146]. Musclesare modelled as Hill-type actuators embedded in FE meshes; bony struc-tures and cartilages are modelled as rigid bodies. The geometry of thislaryngopharynx model is extracted from cadaveric data.ment; then, the moving organ surfaces are extracted as the boundary con-dition to simulate food bolus motion using Smooth Particle Hydrodynamics(SPH) method. Ho’s methods provide a promising way to simulate high-fidelity human swallowing. However, their study is not based on realisticanatomy and motion trajectories of a specific subject.In this chapter, we apply a subject-specific model of upper-airway com-plex to simulate the real swallowing behaviour recorded in medical data.Our simulation is based on dynamic 3D CT scans, which capture the organmotion in oral cavity, pharynx, larynx and upper esophagus during normalswallowing [37]. We first describe the details of the kinematic data usedto drive the swallowing simulation in Section 4.1. Then, the details of theinverse simulation are given in Section 4.2. The simulation results are pre-sented in Section 4.3. The discussion and conclusion follow in Section 4.4.784.1. Kinematic DataFigure 4.3: Segmentation of airway (cyan), jaw and hyoid bones (yellow)superimposed with the mid-sagittal slices of the sequential CT images (at 6frames).4.1 Kinematic DataThe dynamic CT dataset used in this study has been introduced in Sec-tion 3.2.1. It has 21 time frames in total with the temporal resolution of10 fps. This sequential CT images record the swallowing motion from oraltransit to the early esophageal phases. Based on the last frame, we constructa subject-specific upper-airway model, as shown in Figure 3.20.We aim at reproducing the swallowing motion from oral transport phaseto pharyngeal phase that is described in the CT image sequence. However,the CT images do not provide enough contrast ratio inside the soft-tissuesto depict the motion of internal tissue points. Therefore, we define thetracking markers as the points distributed on component surfaces; the targetpositions of these surface markers are located on the organ boundaries in thesequential CT images. Ho et al. [62] manually segment the moving airwayboundary from the 21 sequential CT images that we use in this study; theairway boundary depicts critical movements of the upper-airway soft-tissues.794.2. Inverse SimulationFigure 4.4: Illustration of tracking markers (yellow) and their target posi-tions (blue).In order to generate target motion trajectory for rigid bodies, includingjaw and hyoid, we extract their surfaces from the CT volumes using theautomatic segmentation tool in ITK-SNAP toolkit [164]. The segmentaiton(airway and bones) and CT images are illustrated in Figure 4.3.4.2 Inverse SimulationWe perform swallowing simulations based on the segmented bone surfacesand Ho’s airway segmentaiton. During the simulation, each frame (0.1s)of the dynamic CT was divided into fifty numerical time steps. We picktracking markers in the model (representing the motion of the model) byuniformly sampling on the segmented airway and bones (jaw and hyoid)surfaces (yellow points in Figure 4.4). 
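As described below (and illustrated in Figure 4.5), the per-frame targets of the rigid-body markers are obtained from a best-fit rigid alignment of the component mesh to its segmentation. A minimal sketch of such an alignment using the SVD-based Kabsch method is given here, under the assumption that point correspondences between the mesh and the segmentation are already available; the function names and inputs are illustrative.

```python
import numpy as np

def best_fit_rigid_transform(source_pts, target_pts):
    """Least-squares rigid transform (R, t) mapping source_pts onto target_pts.

    Both arrays are N x 3 with row-wise correspondence. Uses the SVD-based
    Kabsch method; reflections are corrected so that det(R) = +1."""
    src_centroid = source_pts.mean(axis=0)
    tgt_centroid = target_pts.mean(axis=0)
    H = (source_pts - src_centroid).T @ (target_pts - tgt_centroid)   # 3 x 3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])       # reflection guard
    R = Vt.T @ D @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

def transform_markers(markers, R, t):
    """Apply the fitted rigid transform to the tracking markers of a rigid body."""
    return markers @ R.T + t
```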
Since rigid bodies have much smallerDoF than FE components, the density of tracking points picked on the jawand the hyoid geometry is set as one sixth of the density on the airway.We used ICP method to search their correspondences on the segmentation,and then linearly interpolate between the starting posture and the corre-spondences of each frame to generate the target motions (blue points inFigure 4.4) for every numerical time steps. As illustrated in Figure 4.5, forrigid components, we first find the best-fit rigid transformations that maptheir meshes to the corresponding segmentation; and then apply the rigidtransformations to their associated tracking markers to generate their tar-804.2. Inverse SimulationFigure 4.5: Methods of determining the target positions (blue points) fortracking markers (yellow points). (a) for tracking markers of rigid compo-nents, we first find the best-fit rigid transformation that maps the meshof the rigid component to the corresponding segmentation; and then applythis rigid transformation to the tracking markers to generate their targetpositions. (b) for tracking markers of deformable components, we use theirclosest points on the corresponding segmentation as their target positions.get positions. For tracking markers of deformable components, we use theirclosest points on the corresponding segmentation as their target positions ineach image frame. Since the CT dataset does not capture the oral prepara-tory phase, we change the posture of the model to match with the first CTframe before the swallowing simulation.814.3. Results4.3 ResultsThe data tracking simulation went through 21 frames in total. Figure 4.7-4.9illustrate 8 frames in mid-sagittal, oblique and top views respectively. Themuscle activations levels estimated by the inverse solver during swallowingare illustrated in Figure 4.10-4.12. As we notice the muscle activation pat-terns on both side (left and right) are similar, we show the left side only.The tracking performance for the FRANK components – the tracking errorsand the standard deviations of the Euclidean distance between the targetpositions and the resulting marker positions – are shown in Figure 4.6. Aswe notice the tracking error around the corniculate cartilage is significantlylarger than other part of the larynx, we illustrated their tracking perfor-mance separately.Figure 4.6: Tracking error and the standard deviation of each component duringswallowing simulation, which was computed as the Euclidean distance betweenthe target positions and the resulting marker positions.82(a) t = 0.0s (b) t = 0.3s (c) t = 0.4s (d) t = 0.5s(e) t = 0.7s (f) t = 1.0s (g) t = 1.4s (h) t = 2.0sFigure 4.7: A sequence of video frames showing the mid-sagittal view of the inverse simulation of swallowing. Themuscle activation pattern at these time frames are illustrated in Figure 4.10-4.12 at the vertical dash lines. Arrow1 shows that the model fails to fully reproduce the laryngeal motion: The vocal fold of the model is much lowerthan its position shown in the CT images. Arrow 2 shows that the contact behaviour between tongue base andepiglottis – which is ignored in the simulation – may contribute to epiglottis inversion.83(a) t = 0.0s (b) t = 0.3s (c) t = 0.4s (d) t = 0.5s(e) t = 0.7s (f) t = 1.0s (g) t = 1.4s (h) t = 2.0sFigure 4.8: A sequence of video frames showing the oblique views of the inverse simulation of swallowing. Facemodel is hidden in the images, in order to illustrate the occluded structures. 
The muscle activation pattern atthese time frames are illustrated in Figure 4.10-4.12 at the vertical dash lines. Arrows show that the wavelikemotion of the tongue directs the food bolus backwards and downwards.84(a) t = 0.0s (b) t = 0.3s(c) t = 0.4s (d) t = 0.5s(e) t = 0.7s (f) t = 1.0s(g) t = 1.4s (h) t = 2.0sFigure 4.9: A sequence of video frames showing the top view of the inversesimulation of swallowing. The muscle activation pattern at these time framesare illustrated in Figure 4.10-4.12 at the vertical dash lines. Arrow showsvelopharyngeal closure.854.3. ResultsFigure 4.10: Activation levels of the extrinsic and intrinsic tongue muscles esti-mated by the inverse solver during the swallowing. The vertical dash lines labelthe time of t = 0.3s, 0.4s, 0.5s, 1.0s, 1.4s.864.3. ResultsFigure 4.11: Activation levels of the suprahyoid muscles, mastication musclesand palate muscles estimated by the inverse solver during the swallowing. Thevertical dash lines label the time of t = 0.3s, 0.4s, 0.5s, 1.0s, 1.4s.874.3. ResultsFigure 4.12: Activation levels of the laryngeal muscles and the pharyngeal musclesestimated by the inverse solver during the swallowing. Since only three laryngealmuscles (TV, CPR, IAO) are activated during the simulation, other laryngealmuscles are not shown in the figure. The vertical dash lines label the time oft = 0.3s, 0.4s, 0.5s, 1.0s, 1.4s.884.3. ResultsHyoid Jaw Tongue Pharynx2.67mm± 0.88 0.43mm± 0.08 1.12mm± 0.70 0.86mm± 0.63Soft-palate Larynx Corniculate1.88mm± 1.23 1.35mm± 0.96 1.3781mm± 0.6689Table 4.1: The peak values of the tracking errors of the FRANK componentsduring oral transport phase (from t = 0.0s to t = 0.4s).Oral Transport Phase At the outset of the oral transport phase (att = 0.0s), by contracting SL, GGM and TRANS muscles, the bolus is con-tained between the lingual dorsum and hard palate, as shown in Figure 4.10.Then, by contracting PD and AM, hyoid bone is pulled to move superiorly;in the meanwhile, via coordinating the intrinsic tongue muscles (includingTRANS and SL) and the extrinsic tongue muscles (including AM, GGM,GGP), the tongue blade is pushed upwards and moves in a wavelike mo-tion, which directs the bolus posteriorly towards the oropharynx (as shownin Figure 4.8). The mastication muscles (SP, MP, SM, DM, AT, MT andPT) are activated throughout the oral transport phase (from t = 0.0s tot = 0.4s) to stabilize the jaw and tongue. During oral transport phase,except for the hyoid bone, the tracking errors of other tracking componentsare below 2.0mm (Table 4.3); the hyoid bone has the largest tracking error.Pharyngeal Phase Pharyngeal phase lasts for approximate one second(from t = 0.4s to t = 1.4s). As the oral transport phase ends, the bolusenters the oropharynx and crosses over the area of the anterior faucial pillars.This contact initiates the involuntary trigger of the pharyngeal phase [130].TV and IAO contract to close the laryngeal inlet (from t = 0.4s). Thetongue base is retracted towards the posterior pharyngeal wall. At thesame time, LVP pulls the soft palate upwards and backwards; combinedwith SC contraction, the velopharyngeal closure is formed, which preventsthe food bolus from entering the nasal cavity. During this motion, PP,PGA and PGP involve to stabilize the soft palate. 
As the velopharyngealclosure is formed (from t = 0.5s), the pharyngeal constrictors (SC, MC andIC) are sequentially activated from SC1 downward to IC3 (Figure 4.12),thus producing pharyngeal peristalsis from the oropharynx to the upperesophagus (Figure 4.7(d)-(f)). The movements of the tongue base and thepharyngeal wall are generated to provide the driven force for the pharyngealbolus transit [130]. The contraction of the suprahyoid muscles (AM, PM,894.3. ResultsFigure 4.13: The model fails to reproduce the target hyoid posture at t =0.9s. The yellow points represent the tracking markers; the blue pointsrepresent their corresponding target positions.GH, AD, PD) directs the hyoid bone superiorly and anteriorly (from t =0.4s). The larynx – as it attaches to the hyoid rigid body – passively movesalong with the hyoid bone. The superior and anterior movements of thehyoid bone direct the larynx under the tongue base, thereby preventing thebolus entering the laryngeal inlet, as shown in Figure 4.7(d)-(f).Large tracking errors appear on the larynx and hyoid models during pha-ryngeal phase (Figure 4.6 and Table 4.3). At t = 0.9s, as shown Figure 4.13,the hyoid model fails to reproduce the target posture. In the meanwhile, thelarynx fails to raise the corniculate cartilage. As shown in Figure 4.14, thetracking markers (yellow points) around the corniculate cartilage are sub-stantially lower than their target positions (blue points) at t = 0.9s. As aconsequence, the vocal fold of the larynx model in Figure 4.7(d)-(f) locatesat the position of tracheal inlet of the CT images, leaving the laryngeal inletopen.904.3. ResultsHyoid Jaw Tongue Pharynx7.70mm± 0.95 0.95mm± 0.08 2.83mm± 0.80 4.31mm± 0.37Soft-palate Larynx Corniculate2.32mm± 1.07 2.21mm± 0.91 8.94mm± 0.67Table 4.2: The peak values of the tracking errors of the FRANK componentsduring pharyngeal phase (from t = 0.4s to t = 1.4s).Figure 4.14: The model fails to raise the the corniculate cartilage at t =0.9s, leaving the laryngeal inlet open during the pharyngeal phase. Theyellow points represent the tracking markers; the blue points represent theircorresponding target positions.914.4. Discussion and Conclusion4.4 Discussion and ConclusionAs the results demonstrate, the model was able to achieve normal swallowingthat closely resembles real tissue motion captured during the oral transportphase and the pharyngeal phase. In general, the tracking error in pharyngealphase is greater than that in oral transport phase. Two model componentssuffered from large tracking error: the hyoid bone and the larynx. There areseveral possible explanations for their limited tracking performance. Firstly,the laryngeal complex model is over simplistic and does not provide thefidelity to describe the realistic biomechanics in that region. As an example,we model the laryngeal cartilages as rigid bodies embedded inside the larynxFE model, which makes the larynx model over stiff; in turn, the larynx modellacks the flexibility to allow the elevation of the corniculate cartilage and theinversion of the epiglottis.In addition, the accuracy of the swallowing simulation is not only lim-ited by the biomechanical mode but also the resolution of the image data.The dynamic CT images used in this study do not describe the motion ofa specific tissue point inside the pharyngeal wall. Besides, from t = 0.6s tot = 1.1s, the CT images fail to depict the boundary between the tongue andthe pharyngeal wall. 
These limitations make the vertical motion informationof the pharyngeal wall unavailable. The elevation of the pharynx is essentialfor the elevation of larynx [130]; Pearson et al. [113, 114] suggest the longpharyngeal muscle group (SalP, StyP, PP) are activated during swallowingto assist hyolaryngeal elevation. However, SalP, StyP and PP were not ac-tivated from t = 0.5s to t = 1.0s in the swallowing simulation (Figure 4.12),which may affect the motion of both the hyoid bone and the larynx.Furthermore, since unilateral constraints may lead to oscillating or stick-ing behaviour in the inverse simulation [136], we did not incorporate the con-tact behaviour in the swallowing simulation. The lack of contact may affectthe swallowing biomechanics. For example, during the pharyngeal phase,the contact between the tongue base and the epiglottis may contribute tothe epiglottis inversion (Figure 4.7(c)-(f)).To summarize, our model is capable of reproducing oropharyngeal mo-tion of normal swallowing. However, it shows limited ability to track hy-olaryngeal movements. The superior and anterior movements of the hyoidand the larynx are essential to swallowing for two reasons. Firstly, it willdirect the laryngeal inlet under the tongue base and seal off the airway tocease respiration and to prevent any misdirected food or liquid from en-tering the trachea. Secondly, these movements will create a biomechanicalforce to pull the cricoid cartilage up and away from the posterior pharyn-924.4. Discussion and Conclusiongeal wall, thus opening the esophagus; the opening of the UES will createan additional source of negative pressure (or suction force) in the upperesophagus, which would greatly enhance the efficiency of pharyngeal bolustransit [130]. Therefore, reproducing of bolus transit biomechanics of nor-mal swallowing requires a functional hyolaryngeal complex. Future work isneeded to increase the fidelity of hyolaryngeal complex model in the upper-airway template. Moreover, applying different image modalities, such astagged MRI, to investigate swallowing motion may provide more accuratekinematic information and help with reduction of the ambiguity of tissuemotion.Although the model failed to reproduce the hyolaryngeal motion, theproposed subject-specific modelling and simulation method can be appliedto assist with the prediction and assessment of the postoperative swallow-ing biomechanics before intralaryngeal prosthesis implantation. Intralaryn-geal prosthesis implantation is a treatment option for the patient sufferingfrom aspiration due to an impairment of the pharyngeal phase (or larynx).Such application does not require a fully functional larynx. Our modeldemonstrates the ability to reproduce the swallowing motion in the orophar-ynx and velopharynx. After insertion of a virtual intralaryngeal prosthesis,such as the prosthesis introduced in [123], coupled with certain fluid simula-tion methods, the model can help with estimating post-implantation bolus-transit biomechanics. In future work, efficient fluid-structure interactionmethods for swallowing simulation will be investigated. Besides, as the con-tact behaviours between organs may influence the biomechanics of the swal-lowing, we are investigating methods to incorporate unilateral constrains inthe inverse simulation.93Chapter 5Data-Driven SpeechSimulation: TowardsSpeaker-Specific ArticulatorySynthesisSpeech production is an essential part of daily life for most people. 
Speechsynthesis has long been a vital assistive technology tool which allows envi-ronmental barriers to be removed for people with a wide range of disabilities.The physical patterns carrying linguistic information in speech are gener-ated by movements of the articulatory apparatus, such as jaw and tongue.Hence, a careful analysis of the physical properties of the speech apparatus,including an anatomical, biomechanical and aerodynamic characterizationshould contribute to the understainding of the structures of languages and oftheir evolutions [117]. Articulatory synthesis refers to computational tech-niques for synthesizing speech based on models of the human vocal tract(controlled by speech articulators, such as the tongue and jaw) and the ar-ticulation processes occurring there. This technique targets at simulatingspecific voices, speaking styles, and emotions for an arbitrary speaker [13];it is widely considered as a valuable computational aid for the analysis andassessment of speech disorders of specific individuals.Following Perkell [116], who initiated pioneer works in physiologicalmodelling of tongue, more sophisticated and more realistic orofacial mod-els [17, 42, 43, 57, 96, 117, 131, 154] have been developed to enable com-prehensive understanding of the articulatory characters of the upper-airwayorgans in speech production. For example, Buchaillard et al. [17] use a 3Dbiomechanical model of tongue and oral cavity (shown in Figure 5) to studythe motor control patterns of the tongue during the production of Frenchcardinal vowels. Their model includes two dynamic organs, i.e. a FE tongueand a rigid-body-represented hyboid bone; the other fixed structures thatshape the oral cavity are linked with the tongue and hyoid using point-94Chapter 5. Data-Driven Speech Simulation(a) Model from Buchaillard et al. [17] (b) Model from Harandi et al. [57]Figure 5.1: Illustration of two biomechanical models for speech simu-lation reported in previous studies. (a) biomechanical model of upper-airway complex from Buchaillard et al [17]. This model include twodynamic components: the FE tongue (in magenta) and the rigid-body-represented hyoid bone (in yellow). Other structures shaping the the oralcavity are fixed in space to provide boundary conditions. c©AcousticalSociety of America (2009), Buchaillard et al. [17], adapted with permis-sion. (b) The template biomechanical model of oropharynx reported byHarandi et al. [57]. The vocal tract shape is controlled by the motionsof the tongue and jaw. The Planes – used to extract the area function ofthe vocal tract – are orthogonal to the vocal tract center line and evenlydistributed from the lip (#1) to the epiglottis (#20). c© IEEE (2015),Harandi et al. [57], adapted with permission.to-point muscles, and restrict the tongue motion via contact behaviours.Stavness et al. [137] develop a jaw-tongue-hyoid model using dynamic FEmethod combined with rigid-body dynamics; this model is able simulate co-articulation effects in speech production. Based on Stavness’s jaw-tongue-hyboid model, Harandi et al. [57] introduce a speaker-specific articulatorysynthesis framework: They first register the jaw-tongue-hyoid model to aMR data set (as shown in Figure 5) and simulate the tongue speech motionof the utterance a-geese (/@-gis/) based on tagged MR images; then the areafunction of the deformed vocal tract is extracted from the model as the in-put of an 1D implementation of the Navier-Stockes equations [150]. As theseveral important speech articulators (e.g. 
the face, soft-palate, larynx andpharynx) are missing in Harandi’s model, only inside of the oral cavity thevocal tract is deformable.In recent years, computational orofacial models have been applied to pre-955.1. Medical Datasetdict and assess the impact of head-and-neck surgery on speech production.Surgical treatment of head-and-neck cancer may cause server consequenceson mobility of the articulators and even strong impairments of speech pro-duction. Fujita et al. [38] constructe a FE tongue model based on MRI datato predict and verify the changes in the movements of the tongue with atumor before and after partial glossectomy. Similarly, Buchaillard et al. [16]build an oropharyngeal model based on MR and CT scans of the patientto predict the consequences of the tongue surgery on tongue movements,according to the size and location of the tissue loss.Currently, modelling of coupled articulatory biomechanical system stillattracts much less attention than modelling of isolated articulators. Thevocal tract shape of the speakers can provide vital geometric informationfor articulatory synthesis. In this chapter, we apply the proposed subject-specific modelling methods to create coupled upper-airway systems for twospeakers. Similar to Harandi et al.[57], we simulate the speech movementsbased on the kinematic information of the tongue and jaw extracted fromtagged MR images; the other speech articulators that lack tracking data actas functional regularizers moving along with them (the tongue and jaw).5.1 Medical DatasetOur MRI data captures two normal Caucasian American English speakersin the supine position, who repeated the utterances a-geese (/@-gis/) in timewith a metronome. Cine MRI is able to depict the motion of soft-tissueboundaries, but it usually fails to distinguish internal tissues. In contrast,tagged MR is capable of providing kinematic information of internal tissues,but it blurs organ boundaries. Therefore, we base our simulation on syn-chronized 3D tagged and cine MRI data, which capture the motion of tissuesin the tongue and chin. Both the cine and tagged MRI data were acquiredusing a Siemens 3.0T Time-Trio MRI scanner with a 12-channel head and a4-channel neck coil. The in-plane image resolution is 1.875mm × 1.875mmwith a slice thickness of 6mm. Other sequence parameters include the fol-lowing: TR 36ms, TE 1.47ms, flip angle 6◦, and turbo factor 11. The axial,sagittal and coronal stacks of cine MRI slices are combined to form isotropicsuper-resolution volumes for 26 TFs, using a maximum posterior-MarkovRandom Filed method with an edge-preserving regularization scheme [158].Table 5.1 summarizes the information of each individual speaker. Thephonemes of interest (/@/, /g/, /i/ and /s/) are identified in specific TimeFrames (TFs) in the utterance. Each vowel is identified at the TF before the965.1. Medical Datasettongue begins to move toward the next consonant, i.e. that is the maximumvowel position. Each consonant is identified at the TF when the tonguefirst contacts the palate, i.e. the initial frame, rather than the maximumconsonant position. The mid-sagittal slices of the cine MR images in thechosen TFs are illustrated in Figure 5.2.Speaker Sex Age TF # in /@-gis/index (M/F) (years) @ g i sA F 43 8 10 14 23B M 22 6 10 18 20Table 5.1: Speaker information in this study: Sex, age, and time framesassociated with individual sounds in the utterance a-geese (/@-gis/). 
The corresponding mid-sagittal slices of the cine MR images in these TFs are illustrated in Figure 5.2.

Figure 5.2: Mid-sagittal slices of the cine MR images at the specific TFs of the phonemes of interest (/@/, /g/, /i/ and /s/). The top and bottom rows show speaker A and speaker B, respectively.

Organs (RC)   Speaker   Method   Geometry Result
Jaw           A & B     Manual   Partial
Tongue        A & B     Manual   Complete
Lip & Chin    A & B     Auto     Partial
Airway        A & B     Auto     -

Table 5.2: Summary of segmentation.

Speaker   Metric        Jaw    Lip & Chin   Tongue   Airway
A         Dmean (mm)    0.65   0.69         0.58     1.08
          Dmax (mm)     3.17   3.59         3.89     4.12
          σ (mm)        0.59   0.70         0.63     0.88
B         Dmean (mm)    0.51   0.99         0.61     1.07
          Dmax (mm)     2.33   4.59         3.90     4.05
          σ (mm)        0.45   0.69         0.63     0.83

Table 5.3: Registration errors within the two speaker-specific models of the upper-airway complex. Dmean, Dmax and σ are the mean surface distance, the maximum surface distance and the standard deviation.

5.2 Speaker-Specific Modelling
Using the model registration methods introduced in Chapter 3, we created two speaker-specific upper-airway models based on the first TF of the cine MR data. We extracted the segmentations for the Reference Components (RC) from the cine MR images. The segmentation information is summarized in Table 5.2.

Figure 5.3 illustrates our 3D upper-airway models for speakers A and B. Registration errors are shown in Table 5.3. Note that the mean values of the surface-representation error fall within the range of the image resolution. Larger registration errors are localized around ill-defined features (e.g. the laryngeal inlet) or around poor-quality elements (e.g. the lip and the laryngeal inlet).

Figure 5.4 compares the average quality and the number of poor-quality elements of the FE meshes in the template and our speaker models. Note that our registration method improves the average mesh quality in the tongue and face, while maintaining the quality at about the same level for the pharynx, soft palate and larynx. In addition, the number of elements whose mean-ratio values are smaller than 0.1 was reduced for all the FE meshes.

Figure 5.3: Speaker-specific models of the upper-airway complex (sagittal, axial and coronal views for speakers A and B).

Figure 5.4: Comparison of the element quality of the speaker-specific models and the template models. (a) The average element quality (ηmean) of each FE component. (b) The number of elements whose mean-ratio values are smaller than 0.1 (#(η < 0.1)).
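The registration errors in Table 5.3 and the element-quality counts in Figure 5.4 are simple summary statistics. As a rough illustration (not the evaluation code used in this thesis), the mean, maximum and standard deviation of the surface distance can be approximated by nearest-neighbour distances between dense samples of the registered and target surfaces, and the #(η < 0.1) count is a threshold count over per-element mean-ratio values; the helper names below are hypothetical.

    import numpy as np
    from scipy.spatial import cKDTree

    def surface_distance_stats(registered_pts, target_pts):
        """Mean, maximum and standard deviation of nearest-neighbour distances
        from samples of the registered surface to samples of the target surface
        (both (N, 3) arrays, in mm). Using vertex samples instead of true
        point-to-triangle distances slightly overestimates the error."""
        d, _ = cKDTree(np.asarray(target_pts, float)).query(np.asarray(registered_pts, float))
        return d.mean(), d.max(), d.std()

    def count_poor_elements(mean_ratio_values, threshold=0.1):
        """Number of FE elements whose mean-ratio quality is below a threshold,
        mirroring the #(eta < 0.1) count reported in Figure 5.4."""
        return int(np.sum(np.asarray(mean_ratio_values) < threshold))

    # Synthetic example: a noisy partial copy of a densely sampled unit sphere.
    rng = np.random.default_rng(0)
    target = rng.normal(size=(5000, 3))
    target /= np.linalg.norm(target, axis=1, keepdims=True)
    registered = target[:2000] + rng.normal(scale=0.01, size=(2000, 3))
    print(surface_distance_stats(registered, target))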
5.2.1 Inverse Simulation
Tongue protrusion is an important speech motion in the utterance /@-gis/. To demonstrate the tongue-protrusion ability of our models (including the template, Speaker A and Speaker B), we performed an inverse simulation to produce a tongue-protrusion posture: the models aimed to keep the jaw at the neutral position (i.e. the initial position), while forcing the tongue blade and tongue center to touch the hard palate. As shown in Figure 5.5, we used six tracking markers for the tongue (three at the tongue blade, the other three at the tongue center) and three for the jaw. In the first 0.10s of the simulations, each model was allowed to settle under gravity. After that, the tracking markers of the jaw stayed at their original positions, while the tracking markers of the tongue gradually moved upwards until they met the hard palate at 0.17s. The models then settled again, without any further changes to the target positions, until 0.20s. The tracking weight (wm), regularization weight (wa) and damping weight (wd) were set to 1.0, 0.05 and 0.05, respectively. Since the target motion is bilaterally symmetric, we enforced the left and right muscles to be activated together and with the same intensity. The target muscles used in the inverse simulation are listed below:

1. Muscles of the tongue: genioglossus [GG], hyoglossus [HG], styloglossus [STY], verticalis [VERT], transversus [TRANS], geniohyoid [GH], anterior mylohyoid [MH], and longitudinal (inferior [IL], superior [SL]). GG, VERT and TRANS were further divided into three individual segments (anterior [A], middle [M], posterior [P]).

2. Muscles of the jaw and hyoid: jaw openers, including the digastric (anterior [AD], posterior [PD]) and stylohyoid (SH); jaw closers, including the temporal (anterior [AT], middle [MT], posterior [PT]), masseter (superficial [SM], deep [DM]) and medial pterygoid (MP); other jaw and hyoid muscles, including the pterygoid (superior-lateral [SP], inferior-lateral [IP]).

Figure 5.5: Illustration of the inverse simulation. (a) The yellow points represent the tracking markers; we use six tracking markers for the tongue and three for the jaw. (b) The models aim to keep the jaw at the neutral position while forcing the tongue to touch the hard palate. The dashed lines represent the potential end postures of the jaw and tongue after the inverse simulation.

The end postures and the inverse-predicted muscle activations of the three models are illustrated in Figure 5.6. The muscles whose activation levels were greater than 1% were considered active muscles; other muscles (including GGA, GGM, IL, IP and SP) are not shown in Figure 5.6. As shown in Figure 5.6, the three models used the same muscles to achieve the tongue-protrusion postures. TRANS narrows the tongue and VERT widens and grooves it; when these muscles are activated together, they protrude the tongue. From 0.1s onwards, Speaker A and Speaker B used TRANS in conjunction with VERTM and VERTP to protrude the tongue. However, VERT in the template showed a low level of activity throughout, which suggests the template produced relatively less tongue protrusion. The other muscles of the three models were activated following a similar pattern during the simulations. GGP pulled the tongue root forwards. STY pulled the tongue dorsum backwards to stabilize the tongue; at the same time, this muscle assisted the upward motion of the tongue. HG, GH and MH stabilized and elevated the hyoid to assist the upward and forward motion of the tongue. Jaw openers and closers were activated simultaneously to stabilize the jaw.

The tracking errors – the average distances between the tracking markers and their corresponding target positions – of the template, Speaker A and Speaker B are summarized in Table 5.4.
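The inverse simulations in this chapter balance target tracking against activation effort through the weights wm, wa and wd introduced above. The sketch below is only a simplified per-time-step illustration of such a weighted objective (not the solver used in this thesis), assuming a linearized map H from bilaterally grouped exciter activations to tracking-marker velocities; H, v_target and a_prev are hypothetical placeholders.

    import numpy as np
    from scipy.optimize import lsq_linear

    def solve_activations(H, v_target, a_prev, wm=1.0, wa=0.05, wd=0.05):
        """One time step of a weighted tracking objective: minimize
        wm*||H a - v_target||^2 + wa*||a||^2 + wd*||a - a_prev||^2
        subject to 0 <= a <= 1, where H is a (hypothetical) linearized map
        from exciter activations to tracking-marker velocities."""
        n = H.shape[1]
        A = np.vstack([np.sqrt(wm) * H, np.sqrt(wa) * np.eye(n), np.sqrt(wd) * np.eye(n)])
        b = np.concatenate([np.sqrt(wm) * v_target, np.zeros(n), np.sqrt(wd) * a_prev])
        return lsq_linear(A, b, bounds=(0.0, 1.0)).x

    # Tiny synthetic example: 6 marker-velocity targets, 4 bilaterally grouped exciters.
    rng = np.random.default_rng(1)
    H = rng.normal(size=(6, 4))
    a_true = np.array([0.3, 0.0, 0.7, 0.1])
    a_est = solve_activations(H, H @ a_true, a_prev=np.zeros(4))
    print(np.round(a_est, 2))  # close to a_true (slight shrinkage from the regularizers)

Grouping the left and right muscles into one exciter, as done for the bilaterally symmetric motions here, simply reduces the number of columns of such a map.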
Figure 5.6: The end postures and the inverse-predicted muscle activations of the template, Speaker A and Speaker B.

Note that all three models are capable of protruding the tongue while keeping the jaw at the neutral position: for both the tongue and jaw, the three models achieve sub-millimeter tracking accuracy.

          Template           Speaker A          Speaker B
Tongue    0.37 ± 0.11 (mm)   0.17 ± 0.04 (mm)   0.61 ± 0.37 (mm)
Jaw       0.01 ± 0.00 (mm)   0.02 ± 0.00 (mm)   0.01 ± 0.00 (mm)

Table 5.4: Tracking errors of the template, Speaker A and Speaker B at the end posture.

5.3 Speech Simulation
The models track the soft-tissue motion of the tongue and the motion of the lower jaw, based on the tagged-MRI trajectories available for them. The tracking weight (wm), regularization weight (wa) and damping weight (wd) were set to 1.0, 0.05 and 0.05, respectively. To reduce the computational cost of the inverse problem, we consider the speech motion to be bilaterally symmetric; the left and right muscles were activated together and with the same intensity. The target muscles included in the speech simulation are the same as the muscles listed in Section 5.2.1.

5.3.1 Definition of the Trajectories
We perform speech simulations based on the tissue-point motion estimated from the tagged and cine MR images. The two-dimensional motion of the tongue and lower-chin tissue points was estimated from tagged MR image slices using the HARmonic Phase (HARP) algorithm [111]. We further applied the Enhanced Incompressible Deformation Estimation Algorithm (E-IDEA) to combine the 2D motion data into a 3D deformation field [160]. E-IDEA imposes a smooth, divergence-free vector spline to interpolate the velocity fields across the target organs. In HARP, the displacement field at each TF was calculated with reference to the first TF, when the tags were initially applied. We calculate the displacements between successive TFs – from the nth to the (n+1)th TF – as

    T_{n→n+1} = T_{n→1} ∘ T_{1→n+1},

where T_{i→j} denotes the displacement field from the ith to the jth TF. T_{n→1} is computed by inverting the E-IDEA displacement field T_{1→n} using a simple fixed-point algorithm [25].

To reduce the influence of image noise on the estimated motion, we discarded the tissue points that are close to the surfaces (within a range of 2mm). The displacement vectors were averaged in a spherical region of predefined radius (2mm) around each point of interest. As we noticed that the estimated motions of isolated tissue points at the lower chin were not reliable, we calculated their best-fit rigid transformation as the motion trajectory of the jaw model for our inverse simulations. For each speaker, we tracked about 40 tissue points in the tongue, and the rigid motion of the jaw, which was represented by 20 tracking markers attached to it. A linear interpolation was performed between successive TFs to calculate the intermediate displacements.
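A minimal 1D sketch of the two operations used here, composing displacement fields and inverting T_{1→n} with a fixed-point iteration in the spirit of [25], is given below, with ∘ read as sequential application (first T_{n→1}, then T_{1→n+1}). The real pipeline operates on dense 3D fields; the grid, the analytic test fields and the function names here are illustrative only.

    import numpy as np

    def invert_displacement(grid, u, n_iter=30):
        """Invert a 1D displacement field u (mapping x to x + u(x)) on a regular
        grid with the fixed-point iteration v_{k+1}(y) = -u(y + v_k(y))."""
        v = np.zeros_like(u)
        for _ in range(n_iter):
            v = -np.interp(grid + v, grid, u)
        return v

    def compose_displacements(grid, u_first, u_second):
        """Displacement of applying u_first and then u_second:
        w(x) = u_first(x) + u_second(x + u_first(x))."""
        return u_first + np.interp(grid + u_first, grid, u_second)

    # Displacements of the nth and (n+1)th TFs, both referenced to the first TF.
    x = np.linspace(0.0, 10.0, 201)
    u_1_to_n = 0.5 * np.sin(0.6 * x)            # T_{1->n}
    u_1_to_np1 = 0.7 * np.sin(0.6 * x + 0.2)    # T_{1->n+1}
    u_n_to_1 = invert_displacement(x, u_1_to_n)                  # T_{n->1}
    u_n_to_np1 = compose_displacements(x, u_n_to_1, u_1_to_np1)  # T_{n->n+1}
    # Consistency check: T_{1->n} followed by T_{n->n+1} should recover T_{1->n+1}.
    print(np.max(np.abs(compose_displacements(x, u_1_to_n, u_n_to_np1) - u_1_to_np1)))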
5.3.2 Results
Figure 5.7 shows cross-sectional views of Speaker A and Speaker B at five key time frames. Figure 5.8 illustrates the estimated muscle activations. Motor equivalence is evident in our dataset; considerable similarity is also visible, especially in the amount of activation used by each speaker. Comparing like colors for TRANS and VERT in Figure 5.8 (red is anterior, blue middle, and yellow/green posterior), we can see similarity, in that neither speaker activates TRANS and VERT simultaneously. This indicates a lack of protrusion, despite the forward motion of the tongue from /g/ to /i/ to /s/. The exception is a short pulse before /i/, when Speaker A did activate TRANSA simultaneously with VERT. Looking at the muscles separately, both speakers used a low-level activation pattern for TRANS throughout the speech task, possibly to keep the tongue from spreading too widely. During /gi/ they used VERTP in conjunction with GGP (row 2) to pull the tongue root forward. Interestingly, other than this one instance, GG was not used by either speaker, possibly because, as the largest tongue muscle [141], it is used for larger tongue motions. In this speech task, the jaw was quite closed and only small motions were needed to execute the sounds.

Differences appear as well. Speaker A activated the entire VERT muscle (row 1) for /i/, to keep the tongue surface flat/grooved, while Speaker B achieved that shape with VERTA, but not VERTM. The IL muscle (row 2), on the other hand, shows greater motor differences; Speaker A, but not B, used IL during /i/.

When looking at the jaw and hyoid muscles in row 3, the speakers were again quite similar. The jaw closers (red) showed a low level of activity throughout, since the jaw was quite closed during this speech task. Activation was seen during /g/ and /s/ for the jaw openers and GH, which pulled the hyoid forwards. These activity peaks may relate to the switch from voiceless /g/ to the voiced /i/, or they may elevate or stabilize the tongue root during the consonants. Stabilization and elevation of the tongue root and hyoid would assist the tongue in making stable contact with the palate to precisely produce and direct the airflow for the consonants.

The average tracking error – the mean and standard deviation of the Euclidean distance between the target positions and the corresponding marker positions – at the key TFs is summarized in Table 5.5 and Table 5.6. Note that the error is in the range of the MRI resolution. However, compared with Speaker A, Speaker B has larger tracking errors for both the tongue and jaw.

Figure 5.7: Simulation results for Speaker A (top row) vs. Speaker B (bottom row) overlaid on the mid-sagittal slice of the cine MRI. Meshes of the FE soft-tissue models are shown in white (tongue) and gray (others), while the bony structures are shown in cream.

Figure 5.8: Estimated muscle activations.

Speaker   @           g           i           s
A         0.74±0.31   1.22±0.45   1.27±0.44   0.81±0.29
B         1.33±0.82   1.87±0.80   1.87±0.81   1.72±0.59

Table 5.5: Tongue tracking error (mm) over 40 control points.

Speaker   @           g           i           s
A         0.89±0.37   1.24±0.32   0.94±0.15   1.32±0.14
B         1.23±0.13   1.89±0.25   1.62±0.39   2.04±0.84

Table 5.6: Jaw tracking error (mm) over 20 control points.

5.4 Discussion and Conclusion
This chapter demonstrates the potential of our proposed model-registration technique for creating speaker-specific biomechanical models of the upper-airway complex that facilitate personalized analysis of speech production. Based on the first TF of the cine MR images, we created upper-airway models for two specific speakers. We performed an inverse simulation to test the ability of our models with respect to an important speech movement, i.e. tongue protrusion. As a result, the speaker-specific models are capable of protruding their tongue while keeping their jaw at the neutral position. Based on tagged and cine MRI of a simple speech motion, the speaker-specific models reproduced the speech motion of the utterance /@-gis/.
Our results suggest that the inverse solver tracks well and predicts functional movements effectively in the areas where tracking data exist: the tracking errors are in the range of the MRI resolution. The inverse-estimated muscle activations are in good agreement with the expectations of speech researchers.

We speculate that several factors may have contributed to the larger tracking error measured for Speaker B. First, Speaker B has a slightly larger tongue than Speaker A, with a volume of 103.17cm³ vs. 95.61cm³. This means that, on average, each element in the FE model of Speaker B's tongue needs to account for the deformations of a larger region of tissue. Second, Speaker B shows larger local deformations as well. For example, at the 10th TF (/g/), the average deviation of motion, measured from the displacements of our 40 control points, is about 5.84mm for Speaker B vs. 3.25mm for Speaker A. These factors suggest that Speaker B may need a higher FE tongue resolution to track the data more accurately. Finally, in this study the left and right muscles are activated together as one exciter, since the speech motion is believed to be bilaterally symmetric. However, at the 10th TF (/g/), for example, Speaker B shows an average of 1.82mm of unilateral tongue motion, compared to just 0.15mm for Speaker A. We suspect that such large unilateral motion also contributed to the tracking error.

In this study, we track the motion of the tongue and lower jaw; the other FRANK components act as biomechanical regularizers in the simulations. To allow high-fidelity articulatory synthesis, speech motion data for the other articulators, such as the soft palate or larynx, need to be incorporated into the speech simulation. However, tagged MRI currently still suffers from noise and inaccuracy, especially near edges, due to blurring of the tag patterns caused by the HARP bandpass filter and tag fading [159]. Figure 5.9 shows an example of erroneous tracking that occurs near the tongue surface. This limitation hampers the use of this technique for tracking small and thin organs, such as the soft palate or lips.

Figure 5.9: Illustration of erroneous tracking of tissue points near the tongue surface using tagged MRI. The blue points represent the tissue points computed from the tagged MR data set (in the mid-sagittal slice), which is synchronized with the cine MRI shown in the figure. As demonstrated by the arrows, the computed tissue points fail to track the motions of the tongue blade and tongue root.

To summarize, this study demonstrates the potential of our proposed model-registration technique for creating speaker-specific biomechanical models of the upper-airway complex that facilitate personalized analysis of speech production. Our method regularizes the functional movement using the morphed generic geometry and muscle attachments in areas where motion-tracking data is missing, in order to accommodate multi-modal datasets. Our results suggest that the mesh resolution of the FE models plays an important role in determining how accurately detailed tissue motions can be tracked. Therefore, in order to balance computational cost and tracking accuracy, subject-specific methods need to tailor the mesh resolution of the resulting model to the target anatomical or kinematic details. Finally, we expect to incorporate complete motion data of the upper-airway system into our simulations.
Acquiring accurate tissue motion from medical data remains an important subject of ongoing research.

Chapter 6
Conclusion

6.1 Summary
Subject-specific modelling enables the transition from a generic understanding of biomechanical phenomena to addressing the biomechanics of a particular individual. The goal of this thesis is to demonstrate the feasibility of using in-vivo medical images to create comprehensive upper-airway models for particular subjects and to simulate coupled upper-airway behaviours, such as swallowing and speech. Creating a biomechanical model of the upper-airway complex relies heavily on expert interaction; the slow process of model creation prevents us from simulating a large number of individual cases. To ease the modelling effort, we explore the use of model-registration techniques that register a predefined comprehensive upper-airway model (FRANK) to subject-specific medical data. The contributions of this thesis are summarized below.

Identified a model-registration strategy for creating comprehensive subject-specific models of the upper-airway complex. Model-registration-based subject-specific modelling methods should maintain the numerical stability and accuracy of the comprehensive biomechanical template, and should approximate the motor-control behaviours of the target subject. To resolve the ambiguity and sparsity of the subject data, we regularize the registration by minimizing the morphological deviation from the template. This method aims at preserving three types of regularity:
1. Inter-component regularity: maintain the spatial relationships (including connectivity, topology, relative postures and size) between model components.
2. Intra-component regularity: preserve the underlying discretization structures of every subcomponent.
3. Functional regularity: keep the functional information (including coupling attachments between components, muscle attachments and biomechanical properties) similar to the template but relevant to the new model geometry.

Developed a multi-structure registration technique, which can preserve both the inter- and intra-component regularity. This registration technique generates a homeomorphic mapping function, which superimposes a collection of template meshes (with different types of geometric discretization) onto a given target dataset (surfaces or point clouds), while minimizing the spatial distortion. For each individual mesh component, the mapping function is a similarity transformation, i.e. the local transformations are combinations of scaling and rotation; hence the generated deformation field preserves the underlying discretization structures of the template components.
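As a piece of intuition for the per-component similarity constraint (and not the multi-structure registration algorithm itself), the sketch below fits a single least-squares similarity transform, isotropic scale, rotation and translation, between corresponding point sets; this is a scaled variant of the classic point-set alignment of Arun et al. [9]. In the actual method the similarity behaviour is enforced locally within a structure-preserving deformation, which this single global fit does not capture.

    import numpy as np

    def fit_similarity(src, dst):
        """Least-squares similarity transform (scale s, rotation R, translation t)
        such that dst is approximately s * R @ src + t, for corresponding
        (N, 3) point sets src and dst."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        X, Y = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(Y.T @ X)
        D = np.eye(3)
        D[2, 2] = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / np.sum(X ** 2)
        t = mu_d - s * R @ mu_s
        return s, R, t

    # Example: recover a known similarity transform from exact correspondences.
    rng = np.random.default_rng(2)
    src = rng.normal(size=(50, 3))
    c, w = np.cos(0.3), np.sin(0.3)
    R_true = np.array([[c, -w, 0.0], [w, c, 0.0], [0.0, 0.0, 1.0]])
    dst = 1.2 * src @ R_true.T + np.array([5.0, -2.0, 1.0])
    s, R, t = fit_similarity(src, dst)
    print(round(s, 3), np.allclose(R, R_true), np.round(t, 3))  # 1.2 True [ 5. -2.  1.]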
Developed a model-registration workflow for the generation of subject-specific upper-airway complex models. This workflow is based on our proposed multi-structure registration technique. According to correspondences established between a template model and subject data, we transfer a functional model from the generic space into a specific subject space by enforcing the correspondence constraints and minimizing the morphological deviation from the template. A penalty-based element-quality control method is applied to prevent poor-quality elements (in the template meshes) from degrading further during registration. To preserve the functional regularity, all the functional information, including muscles, coupling attachments between components and biomechanical properties, is updated to stay relevant to the registered subject-specific meshes.

Demonstrated the feasibility of our template-based subject-specific modelling methods by creating two comprehensive upper-airway models for particular subjects, and demonstrated the model functionality in a set of biomechanical simulations. Based on geometries extracted from two medical image volumes, we created two subject-specific models of the upper-airway complex. To demonstrate the functionality of our models, we performed activation-driven simulations to model speech production for three vowels, /a/, /i/ and /u/. Our models can maintain their stability during the simulation and respond properly to the muscle activations and external loadings (such as gravity). The kinematics obtained from the forward simulations of /a/ were used to drive the inverse simulations. Compared with the template model, our models showed comparable tracking performance (smaller tracking errors) and yielded similar muscle-activation predictions.

Demonstrated the potential of the proposed subject-specific modelling methods for personalized analysis of swallowing biomechanics. The sequential segmentations of the moving upper airway, jaw and hyoid bone were used to drive the inverse simulation of normal swallowing using a subject-specific model. The model simulated the swallowing motion from the outset of the oral transit phase to the early stage of the esophageal phase. The model demonstrated the ability to reproduce the tissue motion occurring in the oral cavity, oropharynx and velopharynx. However, it showed limited ability to reproduce the hyolaryngeal motion during the pharyngeal phase.

Demonstrated the potential of the proposed subject-specific modelling methods for personalized analysis of speech production. Using the proposed model-registration methods, we created two speaker-specific models. We performed inverse simulations to test the tongue-protrusion ability of our models. The results show that the speaker-specific models are capable of protruding their tongue while keeping their jaw at the neutral position. We then performed personalized speech simulations of the utterance /@-gis/. The models reproduced the speech motion of the tongue and jaw, based on speaker-specific tagged and cine MRI data, with sub-voxel tracking error; the other upper-airway components acted as functional regularizers, moving with the tongue and jaw in the simulations. The inverse-estimated muscle activations were in good agreement with speech-expert knowledge; these results suggest our methods may facilitate the investigation of speech motor-control mechanisms.

6.2 Future Directions
Subject-specific biomechanical models bridge the gap between human knowledge and the clinical recordings and measurements of a specific individual. This thesis presents a modelling and simulation framework which facilitates personalized analysis of the coupled upper-airway system. This framework is illustrated in Figure 6.1.

Figure 6.1: Current subject-specific modelling framework. The subject data, the biomechanical models and the human knowledge are loosely coupled.

Our methods are based on a comprehensive anatomical template, which is created through the continuous collaboration of a team of interdisciplinary researchers.
Based on the structural and dynamic information extracted from medical data, we map the comprehensive template from the generic space to the subject domain and then perform functional analysis of the corresponding human behaviours. However, this framework requires pre-processing of the subject data, which can be time-consuming and may introduce additional artifacts into the results. Moreover, interaction with biomechanical models currently still requires considerable engineering knowledge, which hinders not only the improvement of the model but also its applications in research and biomedicine.

We expect to continually narrow the gap between the subject data, the biomechanical models, and human knowledge. As shown in Figure 6.2, on the one hand, future work should minimize the pre-processing effort on the subject data; biomechanical models should act as functional anatomical regularizers to reduce the ambiguity and assist the interpretation of medical recordings and measurements; in turn, the medical data can provide the information to calibrate the biomechanical models and drive clinically relevant analysis. On the other hand, the subject-specific modelling platform should allow experts who do not have an engineering background to input human knowledge into the models, and also to retrieve outcomes from them.

We will continue building a subject-specific modelling platform that helps accelerate the iterations in Figure 6.2 and ensures that the human knowledge, the anatomical models and the medical recordings converge towards reality. In the following paragraphs, we would like to highlight a few directions in detail.

Figure 6.2: Target subject-specific modelling framework. The subject data, the biomechanical models and the human knowledge inform each other.

Improve Geometric Representation of the Biomechanical Template
The template model (FRANK) has three limitations in terms of its geometric representation. First, since hexahedral meshes can avoid volumetric locking and have relatively low computational cost, all the deformable components in FRANK are represented as hexahedral or hexahedral-dominant FE models. However, generating a good-quality hexahedral mesh for a given geometry is still an ill-posed problem. Due to the complexity of organ geometry, the template model contains FE components with poor mesh quality. Second, the rigid-body representation is overly simplistic for modelling cartilages. This representation fails to model certain important behaviours, such as epiglottic inversion. Third, there is no systematic method available that allows the functional resolution (e.g. the mesh resolution of the FE models, the DoF of the bony structures) of the template model to be tailored to specific applications and data resolutions.

Future work is needed to find alternative geometric representations for deformable organs. For example, ANP-based tetrahedral FE models could be an alternative representation for organs with complex geometry; cartilages could be modelled as thin-shell structures, which are flexible and have relatively low computational cost. In addition, systematic methods should be explored to adjust the functional resolution of the biomechanical models based on the needs of the users; such an improvement would allow the models to be used in a wide range of application scenarios.

Statistical Atlas
In order to handle missing data, we regularize the registration by minimizing the morphological deviation from the template.
This regularization can effectively avoid undesirable distortions, but it may not be anatomically relevant. Our subject-specific modelling methods could benefit from replacing the deterministic template with a statistical atlas. A statistical atlas can not only represent the average human anatomy but also describe the inter-subject variations. This anatomical prior knowledge can effectively reduce the morphological uncertainty and enable anatomically relevant interpretation of the subject data. Thus, it would increase our confidence in the registration results when the subject organ geometry is partially unknown in the medical data. As an example, Wang et al. [152] propose a statistical-atlas-based subject-specific FE modelling framework. Future work should explore a more generalized framework to accommodate different geometric representations.

Improve the Model Registration Technique
First, the proposed structure-preserving FFD can be further improved by adjusting the resolution of the deformation grid according to the spatial distribution of the registration error. Such an improvement could significantly reduce the computational cost and increase the registration accuracy.

Second, although our method is able to maintain the overall mesh quality of the template, it does not guarantee a minimum quality for individual elements. Mesh-Match-and-Repair (MMRep) [18] uses a post-registration mesh-repair method to improve the quality of bad elements to a minimum value (0.03 JR); however, this minimal value is insufficient for maintaining the numerical stability and accuracy of FE models when large deformations are involved in biomechanical simulations; moreover, this method can fail to untangle the mesh when excessive distortion is generated during the mesh-matching process. In future work, we expect to blend our method into the MMRep framework, which would maintain the overall mesh quality of the template and also guarantee a minimum quality for individual elements. However, extra care needs to be taken to preserve the FE-FE attachments. One solution is to replace the local Gauss-Seidel mesh-repair method with appropriate global mesh-optimization methods [92] that allow displacement constraints at the attachment areas.

Third, it would be useful to replace the point-based correspondence with an intensity-based metric. Establishing correspondence in the image space can help remove additional pre-processing steps (as shown in Figure 6.1 and Figure 6.2), such as segmentation, from the registration workflow.

Fourth, we assume the template and the recorded subject are in the same pose. However, this assumption may not be true in certain cases. In particular, when articulated structures (such as the cervical spine) are involved in the registration, the results can be sensitive to pose differences between the model and the subject. Moreover, our assumption also ignores the muscle contraction of the subject. Future work should couple our registration method with appropriate articulated-registration techniques. An important question for future studies is how to determine the initial muscle-contraction levels of the subject-specific models.

Pre-stressing Configuration
Currently, we assume that the subject organ geometries extracted from the medical images are in stress-free configurations. This assumption does not hold in most in vivo conditions, due to external forces (such as gravity) and tetanic contractions.
Future study is needed to determine the pre-stressed reference configurations of subject-specific models. A good approximation of the pre-stressing configuration can avoid bias in the modelled biomechanics. Vavourakis et al. [151] propose an inverse analysis method to derive the pressure-free configuration of human aortas and the gravity-free shape of the female breast. However, their studies only focus on isolated FE models. Further study is needed to determine the unloaded reference configuration of hybrid models (such as FRANK).

Subject-Specific Muscle Configuration
In this study, we assume the subject has muscular configurations and forces similar to those of the template. To obtain more accurate motor-control patterns for specific subjects, we expect to extract personalized muscular information (e.g. the fibre paths, pennation angles and PCSA) from in-vivo imaging techniques, such as Diffusion Tensor MRI (DTI).

Subject-Specific Tissue Properties
Our methods do not personalize the elastic properties of soft tissues. Tissue properties may influence the muscular coordination patterns of a specific physiological task and are needed for reliable subject-specific modelling. In vivo measurement of tissue properties is still an ill-posed problem. Elastography provides a non-invasive way to perform such measurements. For example, Cheng et al. [26] apply magnetic resonance elastography to measure the viscoelastic properties of the tongue and soft palate. However, as soft tissues exhibit non-linear stress-strain behaviour, tissue properties at larger deformations cannot be inferred from elastography alone [155]. Future work should further the investigation of in vivo tissue-property measurement.

Neural Control
Currently, our knowledge of the upper-airway system, namely the relationship between the motor commands the upper-airway substructures receive and the activity they perform, is still limited. The muscular structures and the organization of motor control are both essential for determining the range of motion that is anatomically and physiologically feasible. Unfortunately, due to anatomical complexity and ethical reasons, careful anatomical data about the compartmental organization of the human upper-airway system are not yet available. Nevertheless, with the advancement of in-vivo imaging techniques, increasing computational power, and further improvement of the model fidelity, inverse modelling will eventually allow inference of the organization of motor control. Future study will continue to increase the realism of the neurophysiological control mechanism.

Multi-Physics Simulation
Human behaviours, such as swallowing, breathing and speech production, involve multiple physical phenomena. Comprehensive analysis of these behaviours requires the coupling of different computational simulation methods to allow the integration of solid mechanics, hydrodynamics, aerodynamics, etc. Currently, most biomechanical studies focus on structural analysis of human bodies; multi-physics simulation methods attract much less attention and research effort, which limits the application of subject-specific biomechanical models. Future study should explore efficient and reliable multi-physics simulation methods.

Human-Model Interaction
Currently, interaction with biomechanical models requires considerable engineering knowledge and programming effort.
It would be beneficial to explore interaction methods that can ease the human interventions in the modelling and simulation processes.

6.3 Concluding Remarks
To conclude, this thesis has presented new modelling and simulation techniques that enable the transition of upper-airway research from a generic understanding of isolated biomechanical phenomena to analyzing the comprehensive biomechanics of a specific individual. We applied these techniques to personalized analysis of complex upper-airway activities, including swallowing and speech production. This work provides a starting point for developing an accurate, efficient, interactive, subject-specific modelling platform, which will eventually assist the diagnosis and treatment of a wide range of upper-airway dysfunctions.

Bibliography

[1] Hidalgo D. A and Pusic A. L. Free-flap mandibular reconstruction: a 10-year follow-up study. Plastic and Reconstructive Surgery, 110(2):438–49; discussion 450, 2002.

[2] Hossam Abdelmunim and Aly A. Farag. Elastic shape registration using an incremental free form deformation approach with the icp algorithm. In 2011 18th IEEE International Conference on Image Processing, pages 729–732, 2011.

[3] Dicko Ali-Hamadi, Tiantian Liu, Benjamin Gilles, Ladislav Kavan, François Faure, Olivier Palombi, and Marie-Paule Cani. Anatomy transfer. ACM Transactions on Graphics (TOG), 32(6):1–8, 2013.

[4] Brett Allen, Brian Curless, and Zoran Popović. Articulated body deformation from range scan data. pages 612–619. ACM, 2002.

[5] Brett Allen, Brian Curless, and Zoran Popović. The space of human body shapes: reconstruction and parameterization from range scans. ACM Transactions on Graphics, 22(3):587, 2003.

[6] B. Amberg, S. Romdhani, and T. Vetter. Optimal step nonrigid icp algorithms for surface registration. pages 1–8. IEEE, 2007.

[7] Peter Anderson, Negar M. Harandi, Scott Moisik, Ian Stavness, and Sidney Fels. A comprehensive 3d biomechanically-driven vocal tract model including inverse dynamics for speech research. In INTERSPEECH, 2015.

[8] Dragomir Anguelov, Praveen Srinivasan, Daphne Koller, Sebastian Thrun, Jim Rodgers, and James Davis. Scape: shape completion and animation of people. ACM Transactions on Graphics (TOG), 24(3):408–416, 2005.

[9] K. S. Arun, T. S. Huang, and S. D. Blostein. Least-squares fitting of two 3-d point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9(5):698–700, 1987.

[10] A. Aubel and D. Thalmann. Interactive modeling of the human musculature. pages 167–255. IEEE, 2001.

[11] Thomas Baer, J. Peter Alfonso, and Kiyoshi Honda. Electromyography of the tongue muscles during vowels in /apvp/ environment. Ann Bull RILP, 22:7–19, 1988.

[12] P. J. Besl and H. D. McKay. A method for registration of 3-d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256, 1992.

[13] P. Birkholz. Modeling consonant-vowel coarticulation for articulatory speech synthesis. PLOS ONE, 8(4):e60603, 2013.

[14] J. Bonet and A. J. Burton. A simple average nodal pressure tetrahedral element for incompressible and nearly incompressible dynamic explicit applications. Communications in Numerical Methods in Engineering, 14(5):437–449, 1998.

[15] Alexander M. Bronstein, Michael M. Bronstein, and Ron Kimmel. Topology-invariant similarity of nonrigid shapes.
International Jour-nal of Computer Vision, 81(3):281–301, 2009.[16] Stphanie Buchaillard, Muriel Brix, Pascal Perrier, and Yohan Payan.Simulations of the consequences of tongue surgery on tongue mo-bility: implications for speech production in postsurgery conditions.The International Journal of Medical Robotics and Computer AssistedSurgery, 3(3):252–261, 2007.[17] Stphanie Buchaillard, Pascal Perrier, and Yohan Payan. A biomechan-ical model of cardinal vowel production: Muscle activations and theimpact of gravity on tongue positioning. The Journal of the AcousticalSociety of America, 126(4):2033–2051, 2009.[18] Marek Bucki, Claudio Lobos, and Yohan Payan. A fast and robustpatient specific finite element mesh registration technique: Applicationto 60 clinical cases. Medical Image Analysis, 14(3):303–317, 2010.[19] Marek Bucki, Claudio Lobos, Yohan Payan, and Nancy Hitschfeld.Jacobian-based repair method for finite element meshes after registra-tion. Engineering with Computers, 27(3):285–297, 2011.119Bibliography[20] Cynthia Burke, Karen Patrias, National Library of Medicine (U.S.),Reference, and Web Services Section. Visible human project: January1987 through march 2007 : 912 citations. Technical Report 2007-1, U.S. Dept. of Health and Human Services, Public Health Service,National Library of Medicine, Reference and Web Services Section,2007.[21] TA Burkhart, DM Andrews, and CE Dunning. Finite element mod-eling mesh quality, energy balance and validation methods: A reviewwith recommendations associated with the modeling of bone tissue.JOURNAL OF BIOMECHANICS, 46(9):1477–1488, 2013.[22] JQ Campbell and AJ Petrella. Automated finite element modelingof the lumbar spine: Using a statistical shape model to generatea virtual population of models. JOURNAL OF BIOMECHANICS,49(13):2593–2599, 2016.[23] Matthieu Chabanas, Vincent Luboz, and Yohan Payan. Patient spe-cific finite element model of the face soft tissues for computer-assistedmaxillofacial surgery. Medical Image Analysis, 7(2):131–151, 2003.[24] Hui Chen, Sidney Fels, Tricia Pang, Ling Tsou, Fernanda R.de Almeida, and Alan A. Lowe. Three-dimensional reconstruction ofsoft palate modeling from subject-specific magnetic resonance imagingdata. Sleep and Breathing, 16(4):1113–1119, 2012.[25] Mingli Chen, Weiguo Lu, Quan Chen, Kenneth J. Ruchala, and Gus-tavo H. Olivera. A simple fixed-point approach to invert a deformationfielda. Medical Physics, 35(1):81–88, 2008.[26] S. Cheng, S.C. Gandevia, M. Green, R. Sinkus, and L.E. Bilston.Viscoelastic properties of the tongue and soft palate using {MR} elas-tography. Journal of Biomechanics, 44(3):450 – 454, 2011.[27] Yongchoel Choi and Seungyong Lee. Injectivity conditions of 2d and3d uniform cubic b-spline functions. Graphical Models, 62(6):411–427,2000.[28] Haili Chui and Anand Rangarajan. A new point matching algorithmfor non-rigid registration. Computer Vision and Image Understanding,89(2):114–141, 2003.120Bibliography[29] S. Corazza, E. Gambaretto, L. Mundermann, and T. P. Andriacchi.Automatic generation of a subject-specific model for accurate mark-erless motion capture and biomechanical applications. IEEE Transac-tions on Biomedical Engineering, 57(4):806–812, 2010.[30] Batrice Couteau, Yohan Payan, and Stphane Lavalle. The mesh-matching algorithm: an automatic 3d mesh generator for finite ele-ment structures. Journal of Biomechanics, 33(8):1005–1009, 2000.[31] Francis A. Duck. Physical Properties of Tissue: A ComprehensiveReference Book. Academic Press, 1990.[32] Philip J. Edwards, Derek L. G. 
Hill, John A. Little, and David J.Hawkes. A three-component deformation model for image-guidedsurgery. Medical Image Analysis, 2(4):355–367, 1998.[33] Cumhur Ertekin, Gaye Eryaar, Nevin Grgr, ehnaz Arc, Yaprak Secil,and Tlay Kurt. Orbicularis oculi muscle activation during swallowingin humans. Experimental Brain Research, 224(1):79–91, 2013.[34] Sidney Fels and Bryan Gick. Interdisciplinary approaches for advanc-ing articulatory speech theory and synthesis. Canadian Acoustics,44(3), 2016.[35] Cormac Flynn, Ian Stavness, John Lloyd, and Sidney Fels. A finiteelement model of the face including an orthotropic skin model underin vivo tension. Computer Methods in Biomechanics and BiomedicalEngineering, 18(6):571, 2015.[36] LA Freitag and P. Plassmann. Local optimization-based simplicialmesh untangling and improvement. INTERNATIONAL JOURNALFOR NUMERICAL METHODS IN ENGINEERING, 49(1-2):109–125, 2000.[37] Naoko Fujii, Yoko Inamoto, Eiichi Saitoh, Mikoto Baba, SumikoOkada, Satoshi Yoshioka, Toshiaki Nakai, Yoshihiro Ida, KazuhiroKatada, and Jeffrey B. Palmer. Evaluation of swallowing using 320-detector-row multislice ct. part i: Single- and multiphase volume scan-ning for three-dimensional morphological and kinematic analysis. Dys-phagia, 26(2):99–107, 2010.[38] Satoru Fujita, Jianwu Dang, Noriko Suzuki, and Kiyoshi Honda. Acomputational tongue model and its clinical application. Oral ScienceInternational, 4(2):97–109, 2007.121Bibliography[39] Terry A. Gaige, Thomas Benner, Ruopeng Wang, Van J. Wedeen,and Richard J. Gilbert. Three dimensional myoarchitecture of thehuman tongue determined in vivo by diffusion tensor imaging withtractography. Journal of Magnetic Resonance Imaging, 26(3):654–661,2007.[40] Amit Gefen. Patient-specific modeling in tomorrow’s medicine, vol-ume 9. Springer, New York;Heidelberg;, 2012 edition, 2012.[41] Michle Gentil and Thomas Gay. Neuromuscular specialization ofthe mandibular motor system: Speech versus non-speech movements.Speech Communication, 5(1):69 – 82, 1986.[42] Jean-Michel Gerard, Pascal Perrier, and Yohan Payan. 3D biomechan-ical tongue modeling to study speech production. Speech Production:Models, Phonetic Processes, and Techniques, Psychology Press, 2006.[43] Jean-Michel Gerard, Reiner Wilhelms-Tricarico, Pascal Perrier, andYohan Payan. A 3d dynamical biomechanical tongue model to studyspeech motor control. Research Developments in Biomechanics, 1:49–64, 2003.[44] J.M. Gerard, J. Ohayon, V. Luboz, P. Perrier, and Y. Payan. Non-linear elastic properties of the lingual and facial tissues assessed byindentation technique: Application to the biomechanics of speech pro-duction. Medical Engineering & Physics, 27(10):884 – 892, 2005. Ad-vances in the finite element modelling of soft tissue deformation.[45] Bryan Gick, Peter Anderson, Hui Chen, Chenhao Chiu, Ho B. Kwon,Ian Stavness, Ling Tsou, and Sidney Fels. Speech function of theoropharyngeal isthmus: a modelling study. Computer Methods inBiomechanics and Biomedical Engineering: Imaging & Visualization,2(4):217–222, 2014.[46] Bryan Gick, Ian Wilson, and Donald Derrick. Articulatory phonetics.Wiley-Blackwell, Malden, MA;Chichester, West Sussex;, 2013.[47] B. Gilles, L. Reveret, and DK Pai. Creating and animating subject-specific anatomical models. COMPUTER GRAPHICS FORUM,29(8):2340–2351, 2010.[48] Benjamin Gilles, Laurent Moccozet, and Nadia Magnenat-Thalmann.Anatomical Modelling of the Musculoskeletal System from MRI, pages289–296. 
Springer Berlin Heidelberg, Berlin, Heidelberg, 2006.122Bibliography[49] Evgeny Gladilin and Alexander Ivanov. Computational modellingand optimisation of soft tissue outcome in cranio-maxillofacial surgeryplanning. Computer Methods in Biomechanics and Biomedical Engi-neering, 12(3):305–318, 2009.[50] Leslie Glupker, Katherine Kula, Edwin Parks, William Babler, KeltonStewart, and Ahmed Ghoneima. Three-dimensional computed tomog-raphy analysis of airway volume changes between open and closed jawpositions. American journal of orthodontics and dentofacial orthope-dics : official publication of the American Association of Orthodon-tists, its constituent societies, and the American Board of Orthodon-tics, 147(4):426, 2015.[51] S. Gold and A. Rangarajan. A graduated assignment algorithm forgraph matching. IEEE Transactions on Pattern Analysis and MachineIntelligence, 18(4):377–388, 1996.[52] Manuel Gonzlez Hidalgo, Arnau M. Torres, Javier Varona Gmez, andSpringerLink ebooks Engineering. Deformation models: tracking, an-imation, and applications, volume 7;7.;. Springer, Dordrecht;NewYork;, 2013.[53] Dan Grauer, Lucia S. H. Cevidanes, Martin A. Styner, James L. Ack-erman, and William R. Proffit. Pharyngeal airway volume and shapefrom cone-beam computed tomography: Relationship to facial mor-phology. American Journal of Orthodontics & Dentofacial Orthope-dics, 136(6):805–814, 2009.[54] Nicole M. Grosland, Ritesh Bafna, and Vincent A. Magnotta. Auto-mated hexahedral meshing of anatomic structures using deformableregistration. Computer Methods in Biomechanics and Biomedical En-gineering, 12(1):35–43, 2009.[55] A. G. Hannam, I. Stavness, J. E. Lloyd, and S. Fels. A dynamicmodel of jaw and hyoid biomechanics during chewing. Journal ofBiomechanics, 41(5):1069–1076, 2008.[56] Alan G. Hannam, Ian K. Stavness, John E. Lloyd, S. S. Fels, Arthur J.Miller, and Donald A. Curtis. A comparison of simulated jaw dy-namics in models of segmental mandibular resection versus resectionwith alloplastic reconstruction. The Journal of Prosthetic Dentistry,104(3):191–198, 2010.123Bibliography[57] N. M. Harandi, J. Woo, M. R. Farazi, L. Stavness, M. Stone, S. Fels,and R. Abugharbieh. Subject-specific biomechanical modelling of theoropharynx with application to speech production. pages 1389–1392.IEEE, 2015.[58] Negar Harandi Mohaghegh. 3d subject-specific biomechanical model-ing and simulation of the oral region and airway with application tospeech production, 2016.[59] Christian Head, Daniel Alam, Joel A. Sercarz, Jivianne T. Lee, Jef-frey D. Rawnsley, Gerald S. Berke, and Keith E. Blackwell. Microvas-cular flap reconstruction of the mandible: a comparison of bone graftsand bridging plates for restoration of mandibular continuity. Otolaryn-gology - Head and Neck Surgery, 129(1):48–54, 2003.[60] AM A. Heemskerk, GJ G. Strijkers, A. A. Vilanova, MR M. st, andK. K. Nicolaij. Determination of mouse skeletal muscle architectureusing three dimensional diffusion tensor imaging. Magnetic Resonancein Medicine, 53(6):1333–1340, 2005.[61] A. V. Hill. The mechanics of active muscle. Proceedings of the RoyalSociety of London B: Biological Sciences, 141(902):104–117, 1953.[62] Andrew Ho, Mark Nicosia, Angela Dietsch, William Pearson, JanaRieger, Nancy Solomon, Maureen Stone, Yoko Inamoto, Eiichi Saitoh,Sheldon Green, and Sidney Fels. 3d dynamic visualization of swallow-ing from multi-slice computed tomography. pages 1–1. ACM, 2014.[63] Andrew K. Ho, Ling Tsou, Sheldon Green, and Sidney Fels. 
A 3dswallowing simulation using smoothed particle hydrodynamics. Com-puter Methods in Biomechanics and Biomedical Engineering: Imaging& Visualization, 2(4):237–244, 2014.[64] QX Huang, B. Adams, M. Wicke, and LJ Guibas. Non-rigid regis-tration under isometric deformations. COMPUTER GRAPHICS FO-RUM, 27(5):1449–1457, 2008.[65] Thomas J. R. Hughes. The finite element method: linear static anddynamic finite element analysis. Dover Publications, Mineola, NY,2000.[66] Y. Inamoto, E. Saitoh, S. Okada, H. Kagaya, S. Shibata, M. Baba,K. Onogi, S. Hashimoto, K. Katada, P. Wattanapan, and J. B. Palmer.124BibliographyAnatomy of the larynx and pharynx: effects of age, gender and heightrevealed by multidetector computed tomography. Journal of Oral Re-habilitation, 42(9):670–677, 2015.[67] Yoko Inamoto, Naoko Fujii, Eiichi Saitoh, Mikoto Baba, SumikoOkada, Kazuhiro Katada, Yasunori Ozeki, Daisuke Kanamori, andJeffrey B. Palmer. Evaluation of swallowing using 320-detector-rowmultislice ct. part ii: Kinematic analysis of laryngeal closure duringnormal swallowing. Dysphagia, 26(3):209–217, 2011.[68] Haruhi Inokuchi, Marls Gonzlez-Fernndez, Koichiro Matsuo, Mar-tin B. Brodsky, Mitsumasa Yoda, Hiroshige Taniguchi, HidetoOkazaki, Takashi Hiraoka, and Jeffrey B. Palmer. Electromyogra-phy of swallowing with fine wire intramuscular electrodes in healthyhuman: Amplitude difference of selected hyoid muscles. Dysphagia,31(1):33–40, 2015.[69] Bing Jian and B. C. Vemuri. Robust point set registration using gaus-sian mixture models. IEEE Transactions on Pattern Analysis andMachine Intelligence, 33(8):1633–1645, 2011.[70] Barry Joe. Shape measures for quadrilaterals, pyramids, wedges, andhexahedra. Technical report, Zhou Computing Services Inc., 2008.[71] Grand R. Joldes, Adam Wittek, and Karol Miller. Non-locking tetra-hedral finite element for surgical simulation. Communications in Nu-merical Methods in Engineering, 25(7):827–836, 2009.[72] Takahiro Kikuchi, Yukihiro Michiwaki, Tetsu Kamiya, Yoshio Toyama,Tasuku Tamai, and Seiichi Koshizuka. Human swallowing simulationbased on videofluorography images using hamiltonian mps method.Computational Particle Mechanics, 2(3):247–260, 2015.[73] Yoon-Chul Kim, R. M. Lebel, Ziyue Wu, Sally L. D. Ward, MichaelC. K. Khoo, and Krishna S. Nayak. Real-time 3d magnetic resonanceimaging of the pharyngeal airway in sleep apnea: Real-time 3d mri ofthe pharyngeal airway. Magnetic Resonance in Medicine, 71(4):1501–1510, 2014.[74] YoonChul Kim, Cecil E. Hayes, Shrikanth S. Narayanan, and Kr-ishna S. Nayak. Novel 16channel receive coil array for acceleratedupper airway mri at 3 tesla. Magnetic Resonance in Medicine,65(6):1711–1717, 2011.125Bibliography[75] Patrick M. Knupp. On the invertibility of the isoparametric map.Computer Methods in Applied Mechanics and Engineering, 78(3):313–329, 1990.[76] Patrick M Knupp. Hexahedral and tetrahedral mesh untangling. En-gineering with Computers, 17(3):261–268, 2001.[77] PM Knupp. Achieving finite element mesh quality via optimization ofthe jacobian matrix norm and associated quantities. part ii - a frame-work for volume mesh optimization and the condition number of thejacobian matrix. INTERNATIONAL JOURNAL FOR NUMERICALMETHODS IN ENGINEERING, 48(8):1165–1185, 2000.[78] Cornelius Lanczos. The variational principles of mechanics. DoverPublications, 1986.[79] David J. Larkman and Rita G. Nunes. Parallel magnetic resonanceimaging. Physics in Medicine and Biology, 52(7):R15–R55, 2007.[80] Sung-Hee Lee, Eftychios Sifakis, and Demetri Terzopoulos. 
Compre-hensive biomechanical modeling and simulation of the upper body.ACM Transactions on Graphics (TOG), 28(4):1–17, 2009.[81] Z. Levi and C. Gotsman. D-snake: Image registration by as-similar-as-possible template deformation. IEEE Transactions on Visualizationand Computer Graphics, 19(2):331–343, 2013.[82] J. P. Lewis, Matt Cordner, and Nickson Fong. Pose space deformation:A unified approach to shape interpolation and skeleton-driven defor-mation. In Proceedings of the 27th Annual Conference on ComputerGraphics and Interactive Techniques, SIGGRAPH ’00, pages 165–172,New York, NY, USA, 2000. ACM Press/Addison-Wesley PublishingCo.[83] Hao Li, Robert W. Sumner, and Mark Pauly. Global correspon-dence optimization for non-rigid registration of depth scans. ComputerGraphics Forum, 27(5):1421–1430, 2008.[84] Zhi Li, Jeremy P. M. Mogk, Dongwoon Lee, Jacobo Bibliowicz, andAnne M. Agur. Development of an architecturally comprehensivedatabase of forearm flexors and extensors from a single cadaveric spec-imen. Computer Methods in Biomechanics and Biomedical Engineer-ing: Imaging & Visualization, 3(1):3–12, 2015.126Bibliography[85] Miao Liao, Qing Zhang, Huamin Wang, Ruigang Yang, and MinglunGong. Modeling deformable objects from a single depth camera. pages167–174, 2009.[86] Sajan G. Lingala, Brad P. Sutton, Marc E. Miquel, and Krishna S.Nayak. Recommendations for realtime speech mri. Journal of Mag-netic Resonance Imaging, 43(1):28–44, 2016.[87] Y. Lipman and T. Funkhouser. Mobius voting for surface correspon-dence. ACM TRANSACTIONS ON GRAPHICS, 28(3):1, 2009.[88] A. Liu and B. Joe. Relationship between tetrahedron shape measures.BIT, 34(2):268–287, 1994.[89] Anwei Liu and Barry Joe. On the shape of tetrahedra from bisection.Mathematics of Computation, 63(207):141–154, 1994.[90] Yonghuai Liu. Automatic 3d free form shape matching using the grad-uated assignment algorithm. Pattern Recognition, 38(10):1615–1631,2005.[91] Yonghuai Liu. A mean field annealing approach to accurate free formshape matching. Pattern Recognition, 40(9):2418–2436, 2007.[92] Marco Livesu, Alla Sheffer, Nicholas Vining, and Marco Tarini. Prac-tical hex-mesh optimization via edge-cone rectification. ACM Trans-actions on Graphics (TOG), 34(4):1–11, 2015.[93] John E Lloyd, Ian Stavness, and Sidney Fels. Artisynth: A fast inter-active biomechanical modeling toolkit combining multibody and finiteelement simulation. In Soft Tissue Biomechanical Modeling for Com-puter Assisted Surgery, pages 355–394. Springer, 2012.[94] Jeri A. Logemann. Evaluation and treatment of swallowing disorders.College-Hill Press, San Diego, CA, 1983.[95] V. Luboz, Y. Payan, and B Couteau. Automatic 3d finite element meshgeneration: an atlas fitted onto patient data. In Proceedings of theFifth International Symposium on Computer Methods in Biomechanicsand Biomedical Engineering, 2001.[96] Jorge C. Lucero and Kevin G. Munhall. A model of facial biomechan-ics for speech production. The Journal of the Acoustical Society ofAmerica, 106(5):2834–2842, 1999.127Bibliography[97] DONNA S. LUNDY, CHRISTINE SMITH, LAURA COLANGELO,PAULA A. SULLIVAN, JERILYN A. LOGEMANN, CATHY L.LAZARUS, LISA A. NEWMAN, TOM MURRY, LORI LOMBARD,and JOY GAZIANO. Aspiration: Cause and implications. Otolaryn-gology - Head and Neck Surgery, 120(4):474–478, 1999.[98] Peter F. Mac Neilage. Electromyographic study of the tongue duringvowel production. The Journal of the Acoustical Society of America,35(5):783, 1963.[99] Massimo R. 
Mannarino, Francesco Di Filippo, and Matteo Pirro.Obstructive sleep apnea syndrome. European journal of internalmedicine, 23(7):586, 2012.[100] D. Mateus, R. Horaud, D. Knossow, F. Cuzzolin, and E. Boyer. Articu-lated shape matching using laplacian eigenfunctions and unsupervisedpoint registration. pages 1–8. IEEE, 2008.[101] Martin J. McKeown, Dana C. Torpey, and Wendy C. Gehm. Non-invasive monitoring of functionally distinct muscle activations duringswallowing. Clinical Neurophysiology, 113(3):354–366, 2002.[102] Biren Mehta and Stefan Schaal. Forward models in visuomotor control.Journal of Neurophysiology, 88(2):942–953, 2002.[103] M. Meyer, M. Desbrun, P. Schroder, and A.H. Barr. DiscreteDifferential-Geometry Operators for Triangulated 2-Manifolds. Visu-alization and Mathematics III, 2003.[104] Srboljub M. Mijailovich, Boban Stojanovic, Milos Kojic, Alvin Liang,Van J. Wedeen, and Richard J. Gilbert. Derivation of a finite-elementmodel of lingual deformation during swallowing from the mechanics ofmesoscale myofiber tracts obtained by mri. Journal of Applied Physi-ology, 109(5):1500–1514, 2010.[105] A. C. Miracle and S. K. Mukherji. Conebeam ct of the head andneck, part 1: Physical principles. American Journal of Neuroradiology,30(6):1088, 2009.[106] H. Mizunuma, M. Sonomura, K. Shimokasa, H. Ogoshi, S. Nakamura,and N. Tayama. Numerical modelling and simulation on the swallow-ing of jelly. Journal of Texture Studies, 40(4):406–426, 2009.128Bibliography[107] Scott Moisik and Bryan Gick. The quantal larynx revisited. TheJournal of the Acoustical Society of America, 133(5):3522, 2013.[108] W. Mollemans, F. Schutyser, N. Nadjmi, F. Maes, and P. Suetens.Predicting soft tissue deformations for a maxillofacial surgery plan-ning system: From computational strategies to a complete clinicalvalidation. Medical Image Analysis, 11(3):282–301, 2007.[109] Andriy Myronenko and Xubo Song. Point set registration: Coher-ent point drift. IEEE Transactions on Pattern Analysis and MachineIntelligence, 32(12):2262–2275, 2010.[110] Mohammad A. Nazari, Pascal Perrier, Matthieu Chabanas, and YohanPayan. Simulation of dynamic orofacial movements using a constitu-tive law varying with muscle activation. Computer Methods in Biome-chanics and Biomedical Engineering, 13(4):469–482, 2010.[111] N. F. Osman, E. R. McVeigh, and J. L. Prince. Imaging heart motionusing harmonic phase mri. IEEE Transactions on Medical Imaging,19(3):186–202, 2000.[112] C. Papazov and D. Burschka. Deformable 3d shape registration basedon local similarity transforms. Computer Graphics Forum, 30(5):1493–1502, 2011.[113] William G. Pearson Jr, Susan E. Langmore, Louis B. Yu, and Ann C.Zumwalt. Structural analysis of muscles elevating the hyolaryngealcomplex. Dysphagia, 27(4):445–451, 2012.[114] William G. Pearson Jr, Sonja M. Molfenter, Zachary M. Smith,and Catriona M. Steele. Image-based measurement of post-swallowresidue: The normalized residue ratio scale. Dysphagia, 28(2):167–177, 2013.[115] C. C. Peck, G. E. J. Langenbach, and A. G. Hannam. Dynamic simu-lation of muscle and articular properties during human wide jaw open-ing. Archives of Oral Biology, 45(11):963–982, 2000.[116] Joseph S. Perkell. A physiologically-oriented model of tongue activ-ity in speech production. Thesis (Ph. D.)Massachusetts Institute ofTechnology, Dept. of Electrical Engineering, 1974.129Bibliography[117] Pascal Perrier, Yohan Payan, Stphanie Buchaillard, Mohammad A.Nazari, and Matthieu Chabanas. Biomechanical models to studyspeech. 
[117] Pascal Perrier, Yohan Payan, Stéphanie Buchaillard, Mohammad A. Nazari, and Matthieu Chabanas. Biomechanical models to study speech. Faits de Langues, 37:155–171, 2011.

[118] Igor Sazonov, Perumal Nithiarasu, and Si Yong Yeo. Scan-Based Flow Modelling in Human Upper Airways, chapter 0, pages 241–280. Springer, 2012.

[119] Guy J. Petruzzelli, Kelly Cunningham, and Darl Vandevender. Impact of mandibular condyle preservation on patterns of failure in head and neck cancer. Otolaryngology - Head and Neck Surgery, 137(5):717–721, 2007.

[120] Les A. Piegl and Wayne Tiller. The NURBS book. Springer, New York; Berlin, 2nd edition, 1997.

[121] U. Pinkall and K. Polthier. Computing discrete minimal surfaces and their conjugates. Experimental Mathematics, 2(1):15–36, 1993.

[122] P. M. Knupp. Proceedings of the 45th AIAA Aerospace Sciences Meeting and Exhibit, 2007.

[123] Thibaut Raguin, Jean Carvalho, Sophie Riehm, Catherine Takeda, and Agnès Dupret-Bories. Method for dealing with severe aspiration using a new concept of intralaryngeal prosthesis: A case report: New concept of intralaryngeal prosthesis. Head & Neck, 38(10):E2504–E2507, 2016.

[124] Dylan F. Roden and Kenneth W. Altman. Causes of dysphagia among different age groups: A systematic review of the literature. Otolaryngologic Clinics of North America, 46(6):965, 2013.

[125] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach, and D. J. Hawkes. Nonrigid registration using free-form deformations: Application to breast MR images. IEEE Transactions on Medical Imaging, 18(8):712–721, 1999.

[126] Daniel Rueckert, Paul Aljabar, Rolf A. Heckemann, Joseph V. Hajnal, and Alexander Hammers. Diffeomorphic Registration Using B-Splines, volume 4191, pages 702–709. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006.

[127] H. Schliephake, R. Schmelzeisen, R. Schönweiler, T. Schneller, and C. Altenbernd. Speech, deglutition and life quality after intraoral tumour resection. A prospective study. International Journal of Oral and Maxillofacial Surgery, 27(2):99, 1998.

[128] Julia A. Schnabel, Daniel Rueckert, Marcel Quist, Jane M. Blackall, Andy D. Castellano-Smith, Thomas Hartkens, Graeme P. Penney, Walter A. Hall, Haiying Liu, Charles L. Truwit, Frans A. Gerritsen, Derek L. G. Hill, and David J. Hawkes. A Generic Framework for Non-rigid Registration Based on Non-uniform Multi-level Free-Form Deformations, pages 573–581. Springer Berlin Heidelberg, Berlin, Heidelberg, 2001.

[129] Thomas Sederberg and Scott Parry. Free-form deformation of solid geometric models. pages 151–160. ACM, 1986.

[130] Stephanie M. Shaw and Rosemary Martino. The normal swallow: Muscular and neurophysiological control. Otolaryngologic Clinics of North America, 46(6):937, 2013.

[131] Eftychios Sifakis, Andrew Selle, Avram Robinson-Mosher, and Ronald Fedkiw. Simulating speech with a physics-based facial muscle model. pages 261–270. Eurographics Association, 2006.

[132] Karan Singh and Evangelos Kokkevis. Skinning characters using surface oriented free-form deformations. pages 35–42. Proceedings of the Graphics Interface 2000 Conference, 2000.

[133] Y. Song, S. Wang, M. Chan, B. Chandra, A. Dhawan, and Y. Song. Femoral fracture risk assessment after intensity modulated radiation therapy (IMRT) for the treatment of soft tissue sarcoma using a novel mathematical model. pages 95–98. IEEE, 2006.

[134] M. Sonomura, H. Mizunuma, T. Numamor, H. Michiwaki, and K. Nishinari. Numerical simulation of the swallowing of liquid bolus. Journal of Texture Studies, 42(3):203–211, 2011.
[135] Olga Sorkine and Marc Alexa. As-rigid-as-possible surface modeling. In Proceedings of the Fifth Eurographics Symposium on Geometry Processing, SGP '07, pages 109–116, Aire-la-Ville, Switzerland, 2007. Eurographics Association.

[136] Ian Stavness, Sidney Fels, John E. Lloyd, and Alan G. Hannam. Predicting muscle patterns for hemimandibulectomy models. Computer Methods in Biomechanics and Biomedical Engineering, 13(4):483–491, 2010.

[137] Ian Stavness, John E. Lloyd, Yohan Payan, and Sidney Fels. Coupled hard-soft tissue simulation with contact and constraints applied to jaw-tongue-hyoid dynamics. International Journal for Numerical Methods in Biomedical Engineering, 27(3):367–390, 2011.

[138] Ian Stavness, Christy L. Ludlow, Bethany Chung, and Sidney Fels. Hyolaryngeal biomechanics modeling with intramuscular stimulation data. In 1st International Workshop on Dynamic Modeling of the Oral, Pharyngeal and Laryngeal Complex for Biomedical Applications (OPAL-2009), pages 73–78, Vancouver, BC, Canada, 2009.

[139] Ian Stavness, Mohammad A. Nazari, Pascal Perrier, Didier Demolin, and Yohan Payan. A biomechanical modeling study of the effects of the orbicularis oris muscle and jaw posture on lip shape. Journal of Speech, Language, and Hearing Research, 56(3):878–890, 2013.

[140] Ian Stavness, C. Sánchez, John Lloyd, Andrew Ho, Johnty Wang, Sidney Fels, and Danny Huang. Unified skinning of rigid and deformable models for anatomical simulations. pages 1–4. ACM, 2014.

[141] Maureen Stone, Jonghye Woo, Junghoon Lee, Tera Poole, Amy Seagraves, Michael Chung, Eric Kim, Emi Z. Murano, Jerry L. Prince, and Silvia S. Blemker. Structure and variability in human tongue muscle anatomy. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, pages 1–9, 2016.

[142] S. Sueda, A. Kaufman, and D. K. Pai. Musculotendon simulation for hand animation. ACM Transactions on Graphics, 27(3):1, 2008.

[143] Jochen Süßmuth, Marco Winter, and Günther Greiner. Reconstructing animated meshes from time-varying point clouds. Computer Graphics Forum, 27(5):1469–1476, 2008.

[144] Gary A. Thibodeau and Kevin T. Patton. Anatomy & physiology. Mosby, St. Louis, MO, 5th edition, 2003.

[145] Y. Tsin and T. Kanade. A correlation-based approach to robust point set registration, volume 3023, pages 558–569. Springer-Verlag, Berlin, 2004.

[146] Ling Tsou. The effects of muscle aging on hyoid motion during swallowing: A study using a 3D biomechanical model, 2012.

[147] Betty Tuller, K. S. Harris, and Bob Gross. Electromyographic study of the jaw muscles during speech. Journal of Phonetics, 9:175–188, 1981.

[148] Martin Uecker, Shuo Zhang, Dirk Voit, Alexander Karaus, Klaus-Dietmar Merboldt, and Jens Frahm. Real-time MRI at a resolution of 20 ms. NMR in Biomedicine, 23(8):986–994, 2010.

[149] Michael Vaiman, Ephraim Eviatar, and Samuel Segal. Surface electromyographic studies of swallowing in normal subjects: A review of 440 adults. Report 1. Quantitative data: Timing measures, 2004.

[150] K. van den Doel and U. M. Ascher. Real-time numerical solution of Webster's equation on a nonuniform grid. IEEE Transactions on Audio, Speech, and Language Processing, 16(6):1163–1172, 2008.

[151] Vasileios Vavourakis, John H. Hipwell, and David J. Hawkes. An inverse finite element u/p-formulation to predict the unloaded state of in vivo biological soft tissues. Annals of Biomedical Engineering, 44(1):187–201, 2016.
[152] X. L. Wang and X. P. Qian. A statistical atlas based approach to automated subject-specific FE modeling. Computer-Aided Design, 70:67–77, 2016.

[153] Anders Westermark, Stefan Zachow, and Barry L. Eppley. Three-dimensional osteotomy planning in maxillofacial surgery including soft tissue prediction. The Journal of Craniofacial Surgery, 16(1):100–104, 2005.

[154] Reiner Wilhelms-Tricarico. Physiological modeling of speech production: Methods for modeling soft-tissue articulators. The Journal of the Acoustical Society of America, 97(5 Pt 1):3085–3098, 1995.

[155] Adam Wittek, Nicole M. Grosland, Grand R. Joldes, Vincent Magnotta, and Karol Miller. From finite element meshes to clouds of points: A review of methods for generation of computational biomechanics models for patient-specific applications. Annals of Biomedical Engineering, 44(1):3–15, 2016.

[156] Wojciech Wolański, Bożena Gzik-Zroska, Edyta Kawlewska, Marek Gzik, Dawid Larysz, Józef Dzielicki, and Adam Rudnik. Preoperative Planning of Surgical Treatment with the Use of 3D Visualization and Finite Element Method, pages 139–163. Springer International Publishing, Cham, 2015.

[157] Jonghye Woo, Junghoon Lee, Emi Z. Murano, Fangxu Xing, Meena Al-Talib, Maureen Stone, and Jerry L. Prince. A high-resolution atlas and statistical model of the vocal tract from structural MRI. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 3(1):47–60, 2015.

[158] Jonghye Woo, E. Z. Murano, M. Stone, and J. L. Prince. Reconstruction of high-resolution tongue volumes from MRI. IEEE Transactions on Biomedical Engineering, 59(12):3511–3524, 2012.

[159] Jonghye Woo, Maureen Stone, Yuanming Suo, Emi Z. Murano, and Jerry L. Prince. Tissue-point motion tracking in the tongue from cine MRI and tagged MRI. Journal of Speech, Language, and Hearing Research, 57(2):S626, 2014.

[160] Fangxu Xing, Jonghye Woo, Emi Z. Murano, Junghoon Lee, Maureen Stone, and Jerry L. Prince. 3D tongue motion from tagged and cine MR images. Medical Image Computing and Computer-Assisted Intervention (MICCAI), 16(Pt 3):41, 2013.

[161] Shuntaro Yamazaki, Satoshi Kagami, and Masaaki Mochimaru. Non-rigid shape registration using similarity-invariant differential coordinates. pages 191–198. IEEE, 2013.

[162] Yusuke Yoshiyasu, Wan-Chun Ma, Eiichi Yoshida, and Fumio Kanehiro. As-conformal-as-possible surface registration. Computer Graphics Forum, 33(5):257–267, 2014.

[163] Weimin Yu, Moritz Tannast, and Guoyan Zheng. Non-rigid free-form 2D-3D registration using a B-spline-based statistical deformation model. Pattern Recognition, 2015.

[164] Paul A. Yushkevich, Joseph Piven, Heather Cody Hazlett, Rachel Gimpel Smith, Sean Ho, James C. Gee, and Guido Gerig. User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. Neuroimage, 31(3):1116–1128, 2006.

Appendix A

Muscles and Ligaments in FRANK

This appendix summarizes all of the muscles and ligaments included in the FRANK model.

A.1 Muscles

This section summarizes all of the muscles included in the FRANK model. Most muscles are associated with a soft tissue component, but these associations should be considered model organization choices rather than anatomically-based groupings. The muscles are presented in the following tables according to their associations, which include: the jaw and hyoid region (Table A.1), the face (Table A.2), the tongue (Table A.3), the soft palate (Table A.4), the pharynx (Table A.5), and the larynx (Table A.6).
Name  Abb.  Fmax  ℓ0  Lopt  Lmax  T.R.
Anterior Temporal  AT  158.00  9.16  0.97  1.24  0.50
Middle Temporal  MT  95.60  8.47  0.95  1.34  0.48
Posterior Temporal  PT  75.60  6.72  0.93  1.22  0.51
Superficial Masseter  SM  190.40  5.70  0.96  1.25  0.46
Deep Masseter  DM  81.60  3.43  0.97  1.50  0.29
Medial Pterygoid  MP  174.80  5.38  0.98  1.23  0.64
Superior Lateral Pterygoid  SP  28.67  2.20  1.11  1.51  0.00
Inferior Lateral Pterygoid  IP  66.90  2.96  1.05  1.38  0.00
Anterior Digastric  AD  40.00  4.16  1.18  1.51  0.00
Posterior Digastric  PD  40.00  8.72  1.00  1.28  0.00
Posterior Mylohyoid  PM  35.40  3.39  1.10  1.41  0.00
Stylohyoid  SH  15.60  10.32  1.00  1.28  0.00

Table A.1: Jaw and hyoid muscles in FRANK. The table includes the muscle name, the abbreviated name (Abb.), maximum force (Fmax) in Newtons, muscle rest length ℓ0 in cm (for muscles symmetrical about the midsagittal plane, ℓ0 is the length on the right half of the model), optimal length as a ratio of the rest length (Lopt), maximum length as a ratio of the rest length (Lmax), and tendon ratio (T.R.). All muscles in this table use a Peck muscle model [115], with a passive fraction of 0.015 and damping of 0.001.

Name  Abb.  Fmax  ℓ0  Lopt  Lmax  T.R.
Depressor Anguli Oris  DAO  1.00  2.69  1.00  2.00  0.00
Buccinator  BUC  1.00  3.74  1.00  2.00  0.00
Depressor Labii Inferioris  DLI  1.00  1.85  1.00  2.00  0.00
Mentalis  MENT  1.00  1.62  1.00  2.00  0.00
Orbicularis Oris Middle  OOM  1.00  7.44  1.00  2.00  0.00
Orbicularis Oris Peripheral  OOP  1.00  8.51  1.00  2.00  0.00
Levator Labii Superioris Alaeque Nasi  LLSAN  1.00  3.25  1.00  2.00  0.00
Levator Anguli Oris  LAO  1.00  3.49  1.00  2.00  0.00
Risorius  RIS  1.00  5.30  1.00  2.00  0.00
Zygomaticus  ZYG  1.00  5.57  1.00  2.00  0.00
Levator Labii Superioris  LLS  1.00  3.51  1.00  2.00  0.00

Table A.2: Face muscles in FRANK. Table headings are the same as in Table A.1. All muscles in this table use a Constant muscle model (muscle force is the product of Fmax and the activation level), with a passive fraction of 0 and damping of 0.

Name  Abb.  Fmax  ℓ0  Lopt  Lmax  T.R.
Genioglossus Anterior  GGA  32.8  3.55  1.00  2.00  0.00
Genioglossus Middle  GGM  22.0  4.59  1.00  2.00  0.00
Genioglossus Posterior  GGP  67.2  4.80  1.00  2.00  0.00
Geniohyoid  GH  32.0  3.08  1.00  2.00  0.00
Hyoglossus  HG  118.0  3.85  1.00  2.00  0.00
Anterior Mylohyoid  AM  46.8  2.41  1.00  2.00  0.00
Styloglossus  STY  43.6  9.42  1.00  2.00  0.00
Transversus Anterior  TRANSA  90.8  1.37  1.00  2.00  0.00
Transversus Middle  TRANSM  90.8  1.37  1.00  2.00  0.00
Transversus Posterior  TRANSP  90.8  1.37  1.00  2.00  0.00
Verticalis Anterior  VERTA  36.4  1.64  1.00  2.00  0.00
Verticalis Middle  VERTM  36.4  1.64  1.00  2.00  0.00
Verticalis Posterior  VERTP  36.4  1.64  1.00  2.00  0.00
Inferior Longitudinal  ILA  16.4  8.61  1.00  2.00  0.00
Superior Longitudinal  SLA  34.4  9.49  1.00  2.00  0.00

Table A.3: Tongue muscles in FRANK. Table headings are the same as in Table A.1. All muscles in this table use a Peck muscle model [115], with a passive fraction of 0 and damping of 0.

Name  Abb.  Fmax  ℓ0  Lopt  Lmax  T.R.
Levator Veli Palatini  LVP  10.8  4.20  1.00  2.00  0.00
Musculus Uvulae  MU  5.56  3.31  1.00  2.00  0.00
Palatoglossus Anterior  PGA  3.22  5.60  1.00  2.00  0.00
Palatoglossus Posterior  PGP  3.22  5.24  1.00  2.00  0.00
Palatopharyngeus  PP  12.8  10.05  1.00  2.00  0.00
Tensor Veli Palatini  TVP  4.71  3.66  1.00  2.00  0.00

Table A.4: Soft palate muscles in FRANK. Table headings are the same as in Table A.1. All muscles in this table use a Peck muscle model [115], with a passive fraction of 0.5 and damping of 0.

Name  Abb.  Fmax  ℓ0  Lopt  Lmax  T.R.
Inferior Constrictor 1  IC1  3.0  5.09  1.00  2.00  0.00
Inferior Constrictor 2  IC2  3.0  4.18  1.00  2.00  0.00
Inferior Constrictor 3  IC3  3.0  3.65  1.00  2.00  0.00
Middle Constrictor 1  MC1  3.0  6.36  1.00  2.00  0.00
Middle Constrictor 2  MC2  3.0  3.36  1.00  2.00  0.00
Middle Constrictor 3  MC3  3.0  3.21  1.00  2.00  0.00
Superior Constrictor 1  SC1  3.0  4.01  1.00  2.00  0.00
Superior Constrictor 2  SC2  3.0  4.18  1.00  2.00  0.00
Superior Constrictor 3  SC3  3.0  4.55  1.00  2.00  0.00
Crico Pharyngeal  CP  3.0  2.61  1.00  2.00  0.00
Salpingo Pharyngeus  SalP  3.0  9.31  1.00  2.00  0.00
Stylo Pharyngeus  StyP  3.0  8.19  1.00  2.00  0.00

Table A.5: Pharynx muscles in FRANK. Table headings are the same as in Table A.1. All muscles in this table use a Peck muscle model [115], with a passive fraction of 0 and damping of 0.

Name  Abb.  Fmax  ℓ0  Lopt  Lmax  T.R.
Cricothyroid Pars Recta  CPR  1.00  1.64  0.95  1.15  0.10
Cricothyroid Pars Oblique  CPO  1.00  2.14  0.95  1.15  0.10
Interarytenoid Transverse  IAT  1.00  1.78  0.95  1.15  0.10
Interarytenoid Oblique  IAO  1.00  1.84  0.95  1.15  0.10
Thyrohyoid Superior  TS  1.00  1.96  0.95  1.15  0.10
Thyrohyoid Inferior  TI  1.00  2.66  0.95  1.15  0.10
Thyroarytenoid External  TE  0.60  1.92  0.95  1.15  0.10
Thyroarytenoid Vocalis  TV  1.00  1.76  0.95  1.15  0.10
Lateral Cricoarytenoid  LC  1.50  0.91  0.95  1.15  0.10
Posterior Cricoarytenoid Oblique  PCO  1.00  1.92  0.95  1.15  0.10
Sternothyroid  ST  0.75  8.81  0.95  1.15  0.10
Sternohyoid  SteH  0.50  10.32  0.95  1.15  0.10

Table A.6: Larynx muscles in FRANK. Table headings are the same as in Table A.1. All muscles in this table use a Peck muscle model [115], with a passive fraction of 0.2 and damping of 0.
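Each entry above describes a point-to-point (axial) force generator that is fully characterized by the tabulated parameters together with the per-table passive fraction and damping. The following minimal sketch, written in plain Java rather than the ArtiSynth API, illustrates one way these parameters might be stored and turned into an axial force. The class name, the quadratic active force-length shape used for the Peck-style branch, and the way the passive fraction and damping enter the force are illustrative assumptions, not the exact formulation of the Peck model [115]; only the Constant branch is fully specified by the caption of Table A.2.

// A minimal, self-contained sketch (plain Java, not the ArtiSynth API) of how
// the tabulated muscle parameters could be stored and evaluated.  The
// quadratic active force-length curve in the Peck-style branch is an
// illustrative stand-in for the actual formulation of [115].
public class AxialMuscleSketch {

    enum Model { PECK, CONSTANT }

    final String name;            // e.g. "Anterior Temporal"
    final Model  model;           // Peck-style or constant force generator
    final double fmax;            // maximum active force (N)
    final double l0;              // rest length (cm), from the tables
    final double lopt;            // optimal length as a ratio of l0
    final double lmax;            // maximum length as a ratio of l0
    final double tendonRatio;     // T.R.; stored but unused in this sketch
    final double passiveFraction; // from the table captions
    final double damping;         // from the table captions

    AxialMuscleSketch(String name, Model model, double fmax, double l0,
                      double lopt, double lmax, double tendonRatio,
                      double passiveFraction, double damping) {
        this.name = name; this.model = model; this.fmax = fmax; this.l0 = l0;
        this.lopt = lopt; this.lmax = lmax; this.tendonRatio = tendonRatio;
        this.passiveFraction = passiveFraction; this.damping = damping;
    }

    /** Axial force (N) for length l (cm), lengthening speed ldot (cm/s),
     *  and activation a in [0, 1]. */
    double force(double l, double ldot, double a) {
        if (model == Model.CONSTANT) {
            // Table A.2 caption: force is simply Fmax times the activation.
            return fmax * a;
        }
        // Peck-style branch (illustrative): active force peaks at lopt*l0 and
        // vanishes at lmax*l0, plus a passive term scaled by the passive
        // fraction and a linear damping term.
        double lo = lopt * l0;
        double w  = lmax * l0 - lo;
        double s  = (l - lo) / w;
        double active  = a * fmax * Math.max(0.0, 1.0 - s * s);
        double passive = passiveFraction * fmax * Math.max(0.0, s);
        return active + passive + damping * ldot;
    }

    public static void main(String[] args) {
        // Anterior temporal (AT) entry from Table A.1.
        AxialMuscleSketch at = new AxialMuscleSketch("Anterior Temporal",
            Model.PECK, 158.00, 9.16, 0.97, 1.24, 0.50, 0.015, 0.001);
        System.out.printf("AT force at rest length, a = 0.5: %.1f N%n",
            at.force(9.16, 0.0, 0.5));
    }
}

Storing the rest length together with the two ratios keeps the tables compact: the optimal and maximum lengths of each muscle are recovered as Lopt times ℓ0 and Lmax times ℓ0, respectively.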
A.2 Ligaments

The ligaments included in the FRANK model are given in Table A.7.

Name  Abb.  Stiffness  Damping
Cricotracheal  CTL  100.00  0.001
Cricotracheal Posterior  CTPL  100.00  0.001
Cricotracheal Anterior  CTAL  100.00  0.001
Cricoarytenoid Inferior  CAIL  500.00  0.001
Cricoarytenoid Medial  CAML  100.00  0.001
Cricoarytenoid Lateral  CALL  200.00  0.001
Cricoarytenoid Anterior  CAAL  100.00  0.001
Cricoarytenoid Posterior  CAPL  100.00  0.001
Thyroepiglottic  TEL  50.00  0.001
Thyrohyoid Lateral  TLL  5.00  0.001
Thyrohyoid Median  TML  5.00  0.001
Cricothyroid Lateral  CLL  5000.00  0.001
Cricothyroid Median  CML  50.00  0.001
Vocal Ligament  VL  100.00  0.001
Hyoepiglottic Median  HML  100.00  0.001
Hyoepiglottic Lateral  HLL  100.00  0.001
Thyrohyoid Anterosuperior Membrane 1  TAM1  5.00  0.001
Thyrohyoid Anterosuperior Membrane 2  TAM2  5.00  0.001
Thyrohyoid Posterosuperior Membrane 1  TPM1  5.00  0.001
Thyrohyoid Posterosuperior Membrane 2  TPM2  5.00  0.001

Table A.7: List of all ligaments in FRANK, including full and abbreviated (Abb.) names. Ligaments are modeled as springs with a stiffness (N/m) and damping. Some membranes, included in this table, are also modeled as having a spring-like contribution.
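As noted in the caption of Table A.7, each ligament is reduced to a spring characterized only by a stiffness (N/m) and a damping value. The short sketch below (again plain Java, not the ArtiSynth API) shows the corresponding spring-damper force; the tension-only behaviour and the 1 cm rest length used in the example are assumptions introduced for illustration, since the table itself specifies neither.

// A minimal sketch (plain Java, not the ArtiSynth API) of the spring-damper
// abstraction described in the caption of Table A.7.  Tension-only behaviour
// and the example rest length are assumptions; the table only gives a
// stiffness (N/m) and a damping coefficient.
public class LigamentSpringSketch {

    final String name;
    final double stiffness;  // N/m, from Table A.7
    final double damping;    // damping coefficient, from Table A.7
    final double restLength; // m, taken from the model geometry (assumed here)

    LigamentSpringSketch(String name, double stiffness, double damping,
                         double restLength) {
        this.name = name;
        this.stiffness = stiffness;
        this.damping = damping;
        this.restLength = restLength;
    }

    /** Tensile force (N) for current length l (m) and lengthening speed
     *  ldot (m/s); a slack ligament produces no force in this sketch. */
    double force(double l, double ldot) {
        double stretch = l - restLength;
        if (stretch <= 0.0) {
            return 0.0; // slack: no compressive resistance
        }
        return stiffness * stretch + damping * ldot;
    }

    public static void main(String[] args) {
        // Cricotracheal ligament (CTL) entry from Table A.7, with an
        // illustrative 1 cm rest length.
        LigamentSpringSketch ctl =
            new LigamentSpringSketch("Cricotracheal", 100.00, 0.001, 0.01);
        System.out.printf("CTL force at 2 mm of stretch: %.2f N%n",
            ctl.force(0.012, 0.0));
    }
}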
