"Science, Faculty of"@en . "Zoology, Department of"@en . "DSpace"@en . "UBCV"@en . "Senthivasan, Shreeram"@en . "2020-07-13T13:43:09Z"@en . "2020"@en . "Master of Science - MSc"@en . "University of British Columbia"@en . "Birds accomplish an impressive diversity of flight maneuvers primarily through variation in the motion of their wings. It is for this reason that an understanding of wing kinematics is of broad interest to the study of the physics and control of bird flight. However, current optical approaches to animal motion capture struggle to automate the tracking of fixed points on the wing due to the periodic folding and occlusion of flight surfaces during flapping flight. This greatly increases the time and effort needed to record wing kinematics, as the raw data must be digitized by hand. In\r\nthis thesis, I made progress towards a new high-throughput approach to recording wing kinematics using body- and wing-mounted inertial sensors. The data loggers designed for this purpose have a mass of 4g including a battery, and are capable of collecting inertial data from up to four sensors at 450Hz for 20 minutes. An accompanying set of R functions were developed to estimate the orientations of the body and wing segments from the raw inertial data, which in turn were used to estimate joint angles over time. These tools were validated against an optical motion capture\r\nsystem and were able estimate the orientation of individual bodies within 3\u00B0 and angles between bodies within 6\u00B0 in controlled tests. However, the tools could not be used to record the wing kinematics of pigeons because the angular velocity of the wings exceeded the sensing range of the gyroscopes used in the current design. Specialized high-range gyroscopes are commercially available and could be incorporated into future designs to overcome this limitation. Inertial motion capture has the potential to be a high-throughput, cost-effective, and portable alternative to high-speed video for recording wing kinematics in freely flying birds. This approach could also be used to record detailed kinematics in other animal systems where optical motion capture is infeasible, such as in situations with poor visibility or to study behaviours that occur over large spatial scales."@en . "https://circle.library.ubc.ca/rest/handle/2429/75127?expand=metadata"@en . "High-throughput kinematic tracking of bird wingsusing inertial sensor arraysbyShreeram SenthivasanB.Sc. (Hons.), University of Toronto, 2016A THESIS SUBMITTED IN PARTIAL FULFILLMENT OFTHE REQUIREMENTS FOR THE DEGREE OFMASTER OF SCIENCEinTHE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES(Zoology)THE UNIVERSITY OF BRITISH COLUMBIA(Vancouver)June 2020\u00C2\u00A9 Shreeram Senthivasan, 2020, Professor, Department of Zoology, UBCAbstractBirds accomplish an impressive diversity of flight maneuvers primarily through vari-ation in the motion of their wings. It is for this reason that an understanding of wingkinematics is of broad interest to the study of the physics and control of bird flight.However, current optical approaches to animal motion capture struggle to automatethe tracking of fixed points on the wing due to the periodic folding and occlusionof flight surfaces during flapping flight. This greatly increases the time and effortneeded to record wing kinematics, as the raw data must be digitized by hand. Inthis thesis, I made progress towards a new high-throughput approach to recordingwing kinematics using body- and wing-mounted inertial sensors. 
The data loggers designed for this purpose have a mass of 4 g including a battery, and are capable of collecting inertial data from up to four sensors at 450 Hz for 20 minutes. An accompanying set of R functions were developed to estimate the orientations of the body and wing segments from the raw inertial data, which in turn were used to estimate joint angles over time. These tools were validated against an optical motion capture system and were able to estimate the orientation of individual bodies within 3° and angles between bodies within 6° in controlled tests. However, the tools could not be used to record the wing kinematics of pigeons because the angular velocity of the wings exceeded the sensing range of the gyroscopes used in the current design. Specialized high-range gyroscopes are commercially available and could be incorporated into future designs to overcome this limitation. Inertial motion capture has the potential to be a high-throughput, cost-effective, and portable alternative to high-speed video for recording wing kinematics in freely flying birds. This approach could also be used to record detailed kinematics in other animal systems where optical motion capture is infeasible, such as in situations with poor visibility or to study behaviours that occur over large spatial scales.

Lay Summary

Understanding the subtle changes birds make to their wing strokes during flight is important to furthering our understanding of how birds move and maneuver in the air. However, current methods for recording three-dimensional animal motion using high-speed camera arrays are not well suited for tracking the wings of flying birds. Bird wings fold, flex, and can be obscured during flapping flight, which makes it difficult for algorithms to recognize fixed points on the wing without direct human supervision. This adds considerable time and effort to recording wing motion. In this thesis, I made progress towards developing a new high-throughput way of recording animal motion using purpose-built body-mounted data loggers and data processing software. Although further changes to the hardware are needed to record the motion of bird wings in flight, I demonstrate that this approach could provide a cost-effective, portable, and automated alternative to current methods of animal motion capture.

Preface

This dissertation is the original, unpublished work of the author, S. Senthivasan, with supervision and feedback from D.L. Altshuler. B. Yechuri contributed the final circuit board layouts presented in Chapter 2. All of the research was conducted in the Altshuler Laboratory at the University of British Columbia, Vancouver campus. The work with live birds described in Chapter 2 was approved under the UBC Animal Care Certificate number A19-0113.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Recursive Bayesian estimation
    1.1.1 Modelling systems with recursive Bayesian estimation
    1.1.2 A visual example with Kalman filters
    1.1.3 Extension beyond Kalman filters
    1.1.4 Summary
  1.2 Kinematics with quaternions
    1.2.1 Representing orientations
    1.2.2 Unit quaternions
    1.2.3 Axis-angle conversion
    1.2.4 Applying quaternion rotations
    1.2.5 Calculating angles across a hinge with quaternions
    1.2.6 Summary
2 Tool development
  2.1 Introduction
  2.2 Hardware
    2.2.1 Inertial sensor array
    2.2.2 Flash memory unit
    2.2.3 Microcontroller
    2.2.4 Power system
    2.2.5 Mounting
  2.3 Body axes alignment
  2.4 Orientation estimation
    2.4.1 Algorithm choice
    2.4.2 Model definition
  2.5 Validation
    2.5.1 Orientation estimation of a single sensor
    2.5.2 Body axes alignment
    2.5.3 Joint angle measurement
    2.5.4 Estimating wing stroke parameters in a deceased bird
    2.5.5 Estimating wing stroke parameters in vivo
  2.6 Discussion
3 Conclusion
  3.1 Significance of high-throughput methods
    3.1.1 Formalizing flight style
    3.1.2 Deconstructing flight control
  3.2 Summary
References

List of Tables

2.1 Wing stroke parameters estimated using inertial sensors and OptiTrack for three simulated wing strokes
2.2 Flight speed and flapping frequency of sixteen species collected in the field using an anemometer and video data. Adapted from Pennycuick [75]

List of Figures

1.1 State transition diagram for a partially observed Markov chain
1.2 A visual example of a Kalman filter applied to a simple estimation problem
2.1 Schematic of the main inertial data logger board
2.2 Two-sided board layout of the main inertial data logger board
2.3 Schematic of the external sensor board for the inertial data logger
2.4 Two-sided board layout of the external sensor board for the inertial data logger
2.5 Validation of single body orientation estimation using inertial data
2.6 Validation of body axes alignment using misaligned sensors
2.7 Demonstration of the Euler angle singularity point
2.8 Validation of joint angle estimates using a simple hinge
2.9 Demonstration of motion rejection using non-inertial frames of reference
2.10 Orientation of the wing relative to the body estimated over three simulated wing strokes of a deceased pigeon
2.11 Data collected from the inertial tracking system mounted on a live pigeon during a short free flight

Acknowledgements

I would like to thank Doug, the members of the Altshuler Lab, and my friends throughout the department and beyond. You were all friendly, supportive, and an absolute pleasure to spend time with both in and out of the lab. I would also like to thank my friends and family back home who always supported me even when neither of us really knew what exactly it was that I was doing here.

Chapter 1

Introduction

Three independent evolutions of powered flight are represented in extant animals [1] and all three of these taxa achieve this unique form of locomotion through the flapping of analogous, specialized appendages we call wings [2–4]. Unlike in human-engineered systems that typically disentangle the structures responsible for thrust generation (e.g., propellers, jet engines), lift generation (e.g., wing surfaces, spoilers, flaps), and attitude control (e.g., ailerons, elevators, rudders), flighted animals accomplish all of these tasks primarily by variation in the motion of the wings [1]. It is for this reason that wing kinematics are of broad interest in the study of the physics and control of animal flight [5–7].

However, current methods for recording animal motion using arrays of high-speed cameras [8, 9] struggle to automate the collection of bird wing kinematics, in part due to the considerable morphing of wings during flapping flight [10]. This problem is generally overcome by manually assessing every frame from every camera in the array and correcting the digitization as necessary. This manual digitization requires considerable time and effort, which is reflected in the limited sample sizes of most studies that make use of detailed bird wing kinematics [11–14].

In contrast, high-throughput methods exist for collecting wing kinematic data from tethered flies, which allows in part for much larger sample sizes and more detailed analyses [15, 16]. The development of automated methods for recording body position [17] and aerodynamic forces [18] in bird flight similarly enabled researchers to address large comparative [19] and more detailed mechanistic [20] questions.

In this thesis, I made progress towards the development of a new set of tools for the high-throughput collection of wing kinematic data from freely flying birds. I use body-mounted inertial sensors to track the orientations of the body and wing segments during flight, which in turn could be used to reconstruct the overall motion of the wings.
As part of this approach, I rely heavily upon previous work in two branches of mathematics that may be unfamiliar to many readers, namely hidden state estimation using recursive Bayesian inference and calculations on orientations with unit quaternions. I briefly introduce these two topics as they relate to the current problem in the following two sections.

1.1 Recursive Bayesian estimation

Recursive Bayesian estimation is a general framework for estimating a time-varying process, or state, that cannot be measured directly [21]. The approach here is to use information about how the system changes over time as well as direct observations of related variables to recursively refine our estimate of the hidden state.

This is useful to us as we cannot use inertial sensors to directly measure how birds position their wings at each point in a wing beat. Instead, we directly measure the angular velocity, linear acceleration, and magnetic heading of individual body segments, which provide us with indirect information about the motion of the wings as a whole. Recursive Bayesian estimation provides this link between what we can measure and what we want to know.

1.1.1 Modelling systems with recursive Bayesian estimation

To apply the recursive Bayesian framework, we need to be able to make two assumptions about the structure of the causal relationships in the system. First, we require that the probability of the state at any given time depends only on the probability of the state in the previous time step. This can be written as

$$p(x_t \mid x_{1:t-1}) = p(x_t \mid x_{t-1})$$

where $p(v \mid w)$ is the probability of $v$ given $w$, $x_t$ is the state at time $t$, and $x_{1:t}$ is the collection of states from time 1 to time $t$. This is also known as the Markov property and greatly simplifies the process of modelling how the state evolves over time [22]. Substituting $(t+1)$ for $t$ above gives us the expression for the transformation that maps the probability density of the current state to the same for the next time step.

$$p(x_{t+1} \mid x_t) \tag{1.1}$$

We call this transformation the transition function and it must be constructed to model the system we would like to explore. This in turn requires that we have some prior knowledge about how the system behaves.

The other key assumption is that the direct measurements, or the observation of the system, made at any time step depend only on the current state. This can be written as

$$p(y_t \mid x_{1:t}, y_{1:t-1}) = p(y_t \mid x_t)$$

where $p(v \mid w, x)$ is the probability of $v$ given the joint probability of $w$ and $x$, and $y_t$ is the observation at time $t$. This is important as it allows us to use Bayes' theorem to express the probability of the current state in terms of the current observation [23].

$$p(x_t \mid y_t) = \frac{p(y_t \mid x_t)\, p(x_t)}{p(y_t)} \tag{1.2}$$

Note that $p(y_t \mid x_t)$, or the probability of the current observation given the current state, is a relationship in our system and must be modelled just like the transition function.

Substituting the transition function (1.1) at time $(t-1)$ for the probability of the state prior to observation incorporation in (1.2), we get an expression for the posterior probability of the state given both lines of evidence.

$$p(x_t \mid y_t, x_{t-1}) = \frac{p(y_t \mid x_t)\, p(x_t \mid x_{t-1})}{p(y_t)} \tag{1.3}$$

This is the expression that we wish to solve for at every time step using recursive Bayesian estimation.
However, $p(x_{t-1})$, or the probability of the state at the previous time step in (1.3), could also be rewritten using (1.3) itself.

$$p(x_t \mid y_t, p(x_{t-1} \mid y_{t-1}, x_{t-2})) = \frac{p(y_t \mid x_t)\, p(x_t \mid p(x_{t-1} \mid y_{t-1}, x_{t-2}))}{p(y_t)}$$

$$p(x_t \mid y_t, y_{t-1}, x_{t-2}) = \frac{p(y_t \mid x_t)\, p(x_t \mid y_{t-1}, x_{t-2})}{p(y_t)}$$

Repeating this substitution recursively, we get

$$p(x_t \mid y_{1:t}, x_0) = \frac{p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1}, x_0)}{p(y_t)}$$

where $x_0$ is the initial state. Note that the probability distribution of $x_0$ can be taken to be the uniform distribution if we do not have any prior knowledge about the initial state. This, admittedly abstractly, demonstrates how recursive Bayesian estimation can be used to estimate a hidden state using only sequential indirect observations. A visual demonstration of this idea using a toy example is provided below in Section 1.1.2.

Finally, if we choose our state $x_t$ such that it includes both the hidden variables of interest and the observable variables $y_t$, we can see that our two key assumptions are equivalent to assuming that the system can be modelled as a partially observed Markov chain [22]. Such a Markov chain is shown visually in Figure 1.1 using a state transition diagram.

Figure 1.1: State transition diagram for a partially observed Markov chain. The true state $x_t$ evolves over time following some transition function $f$ that depends only on the previous state. The observation $y_t$ is the subset of the true state that can be measured directly and is related to the current state by the function $g$. These two functions are sufficient to characterize the Markov chain [22].

1.1.2 A visual example with Kalman filters

The Kalman filter is one of the simplest and most widely used implementations of recursive Bayesian estimation [24–26]. By requiring the models to be linear and the probabilities to be normally distributed, Kalman filters achieve impressive computational efficiency, with computation time increasing linearly with the complexity of the model [27]. It is likely for this reason that the navigational computer on board the Apollo 11 spacecraft used a Kalman filter to estimate its trajectory [28], despite having less computing power than a TI-84 Plus graphing calculator [29].

To help develop an intuition for how recursive Bayesian inference can be used to refine our estimates of hidden variables, I provide a visual example of applying a Kalman filter to a simple estimation problem. In this example we are interested in estimating the velocity of an animal using a video recording of it running along a straight track. Notice that while the video clearly captures information about the velocity (hidden state) of the animal, no one frame (observation) in isolation provides any reliable measure of the same. Each frame does however provide us with the position of the animal along the track, which can be related to the hidden state.

We start by considering everything we know given only the first frame of the recording. We have a pretty clear idea of the position of the animal, but it is important to recognize that the measurement will not be perfect. The measurement error in this example could be due to motion blur or lens distortion along the margins of the frame.
In contrast, we have no information about the velocity of the animal or any covariance between the animal's velocity and position. Figure 1.2A shows a two-dimensional normal distribution that visualizes what we know about these two variables.

Next, we apply the transition function to generate a prior estimate for the state in the second frame. As we have no reason to suspect that the velocity of the animal will change one way or the other over a time step, our transition function should reflect this by leaving the expected velocity unchanged over the transition. Our expectation for the position however will depend linearly on the velocity of the animal as well as the previous position. To visualize this transition, we can pick points along the current probability distribution and track how they are transformed by the transition function. We can then use the transformed points to visually reconstruct the prior probability of the state in the second frame. Figure 1.2B shows this procedure as well as the resulting transitioned probability density distribution.

We can now overlay the actual measurement information from the second frame over our prior estimate from the previous transition. As before, this observation provides reliable information about the position of the animal but no information about the velocity whatsoever. Combining these two probability distributions however provides us with a reasonably precise estimate of the velocity of the animal in the second frame. This measurement incorporation is illustrated in Figure 1.2C.

Figure 1.2: A visual example of a Kalman filter applied to a simple estimation problem. Bivariate normal distributions describing estimates of the state are shown using contour plots. Points are discrete samples from the normal distributions and are used to visualize transformations on the probabilities, which are represented by the corresponding arrows. Colour is used to indicate the information used to construct a given plot element. Red indicates an observation, blue indicates a state transition, and green indicates that both lines of evidence were used. The sequence shows (A) an initial observation, (B) the modelled transition between observations, and (C) the incorporation of a new observation. The filter is continued by repeating the transition and observation incorporation steps for all remaining observations.

This two-step procedure of transition and measurement incorporation can then be repeated for each subsequent frame to estimate velocity over the whole recording. Note that the hidden state is never directly measured, but information is extracted from the observations as well as the modelled relationship between the observation and the hidden state.
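To make this two-step cycle concrete, below is a minimal constant-velocity Kalman filter for the running-animal example, written in R (the language used for the analysis tools in this thesis). The function name, noise values, and simulated track are illustrative assumptions, not code or numbers from this project.

```r
# Minimal constant-velocity Kalman filter for the running-animal example.
# State: x = (position, velocity); observation: noisy position only.
kalman_track <- function(z, dt = 1, q = 0.01, r = 0.25) {
  A <- matrix(c(1, 0, dt, 1), 2, 2)   # transition: pos <- pos + vel * dt
  H <- matrix(c(1, 0), 1, 2)          # we only observe position
  Q <- q * diag(2)                    # process noise covariance
  R <- matrix(r)                      # measurement noise covariance
  x <- c(z[1], 0)                     # start at first position, unknown velocity
  P <- diag(c(r, 10))                 # large initial velocity uncertainty
  out <- matrix(NA, length(z), 2, dimnames = list(NULL, c("pos", "vel")))
  for (t in seq_along(z)) {
    if (t > 1) {                      # transition step (Figure 1.2B)
      x <- A %*% x
      P <- A %*% P %*% t(A) + Q
    }
    S <- H %*% P %*% t(H) + R         # observation incorporation (Figure 1.2C)
    K <- P %*% t(H) %*% solve(S)      # Kalman gain
    x <- x + K %*% (z[t] - H %*% x)
    P <- (diag(2) - K %*% H) %*% P
    out[t, ] <- x
  }
  out
}

# Example: an animal moving at 2 units/frame, observed with position noise
z <- cumsum(rep(2, 50)) + rnorm(50, sd = 0.5)
est <- kalman_track(z)
tail(est)  # velocity estimates settle near 2
```

Running the example shows the velocity estimate converging towards the true value within a handful of frames, even though velocity is never observed directly.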
1.1.3 Extension beyond Kalman filters

Despite the computational benefits of using Kalman filters, the two additional assumptions made regarding model linearity and normally distributed errors are not always appropriate for the system being modelled [30]. Other recursive Bayesian estimation algorithms may better approximate the system in these situations, though this flexibility generally comes with some cost to computational efficiency. The extended and unscented Kalman filters for example relax the need for model linearity in different ways [30, 31], while others like the Monte Carlo filter and dynamic generalized linear models use different probability distributions to model systems where variables are not normally distributed [32, 33].

We have also only considered recursive Bayesian filters up to this point. Filters in this context are algorithms that solve the filtering problem [34]

$$p(x_t \mid y_{1:t}) \qquad (t = 1, \ldots, i)$$

where $i$ is the final time step in a given sequence of observations. In words, filters estimate each state using all of the observations up to and including the current time step. Predictors are another set of algorithms within the recursive Bayesian framework that solve the prediction problem [34]

$$p(x_t \mid y_{1:t-1}) \qquad (t = 1, \ldots, i)$$

This is similar to the filtering problem, except that the observation from the current time step is not available. In contrast, smoothers use observations from the entire sequence to estimate each state to solve the smoothing problem [34]

$$p(x_t \mid y_{1:i}) \qquad (t = 1, \ldots, i)$$

Filters will generally outperform predictors and smoothers will generally outperform filters, but the additional observations needed to use these algorithms are not always available when the estimate is needed.

The algorithms discussed thus far also implicitly assume that the model parameters are known more or less exactly. In linear systems with unknown parameters it is generally tractable to calculate a maximum likelihood estimate of the parameters given the observations [35]. In more complex systems however, it may be necessary to simultaneously estimate both the parameters and the state using a recursive Bayesian framework [36, 37]. This is not a perfect solution as it can result in poor performance in early time steps where neither the state nor the parameters are well known. One solution to this problem is to first use all of the available data to recursively generate a probability distribution of the parameters before using this posterior parameter distribution to recursively estimate the state as usual [38]. Although this solution could be used to filter or predict the state, as the posterior parameter distribution is built using the entire sequence of observations, it suffers from the same dependence on the availability of future observations as smoothing algorithms.

1.1.4 Summary

Recursive Bayesian estimation provides a probabilistic framework for estimating variables of interest using indirect measurements taken over time. However, the actual calculations used in a given implementation will depend on the system and the way we choose to model it. To reconstruct bird wing kinematics from inertial data using this framework, we need a mathematical representation for kinematics that can be related back to our direct observations. In the following section, I introduce a number system that can be used for such a representation.

1.2 Kinematics with quaternions

One approach to quantifying kinematics is to start by modelling the body as a series of rigid body segments connected by simple joints [39]. Although body segments are never truly rigid [40] and biological joints do allow some lateral motion [41], this model is a useful approximation for most vertebrate systems.

Under this model, all we need to reconstruct the kinematics of an animal are the angles of all the relevant joints measured along each axis and recorded over time. A useful observation is that we can easily calculate joint angles if we know the orientations of the surrounding two body segments. For example, if you know that your forearm and upper arm are aligned, you know that your elbow is straight regardless of what you are doing with your shoulder. Accordingly, the problem of quantifying kinematics quickly becomes a problem of tracking the orientation of individual body segments over time.
This is especially convenient as recursive Bayesian estimation can be used to estimate orientation from inertial data [42, 43]. What we need now is a set of tools for working with orientation.

1.2.1 Representing orientations

Representing and manipulating orientations in three dimensions can often feel unintuitive, in part because it is easy to confuse the mathematical idea of an orientation with orientations in common parlance. A useful analogy for thinking about the former is to view orientation as the angular counterpart to vector position. Just as position in three dimensions can be represented with a series of translations along three perpendicular axes, orientation can be thought of as a comparable series of rotations. This is made explicit in the Euler angle system [44]. Although this is a valid and complete system for working with orientations, the analogy is not perfect. The order in which the three rotations are applied cannot be changed without changing the resulting orientation [44]; it is ambiguous whether the three axes should rotate with the body or remain fixed as you apply the three rotations [44]; and the space is discontinuous when there is a rotation of ±90° in pitch [45], but not along the other two axes.

An alternative approach is to view an orientation as a transformation between two frames [46]. Although a frame is properly any set of basis vectors for a given space [47], for the purposes of this project I will restrict this definition to mean three mutually perpendicular unit vectors in three-dimensional space. We can represent physical objects with these frames by choosing the three vectors to align with some natural axes of the object, such as along the walls and floor of a square room, or the anteroposterior, mediolateral, and dorsoventral axes of an animal. I will also restrict our discussion to frames that satisfy the right-hand rule to avoid issues with mirrored frames [48].

This analogy makes it clear that an orientation requires at least two frames to make sense. A single object floating in a vacuum can be assigned a corresponding object frame, but without some frame of reference it is not clear what its orientation should be. This reference frame can be external, such as the frame of another object floating by, or it can be the frame of the first object at another point in time. Both of these choices are valid, but the resulting orientations represent completely different transformations. This is why it is helpful to be explicit about which frames are being discussed, especially as we start manipulating orientations.

The other necessary piece of information needed to represent an orientation is the transformation between the two frames [46]. The ambiguity we saw in the Euler angle system stems from the fact that there are many ways to decompose this transformation into three sequential rotations around perpendicular axes. This highlights the need for being clear about which conventions are being used when working with Euler angles [44].

Another option is to represent the transformation as a single smooth rotation between the two frames characterized by an axis and magnitude of rotation [49]. This is useful as every rotation will have a unique axis-angle representation that minimizes the magnitude of the rotation. The only exception is when the magnitude of the rotation is 180°, in which case rotating either way around the axis of rotation is equivalent.
In contrast, the Euler angle system has infinitely many equivalent representations when the pitch is ±90° due to gimbal lock [45]. See Figure 2.7 for a visual demonstration of this problem.

Although the axis-angle representation is an intuitive and complete description of the transformation between two frames, it does not provide an algorithm for actually applying this rotation to vectors in three dimensions or other orientations represented in this way [49]. Rotation matrices provide such an algorithm by way of matrix multiplication [47]. This system also represents the rotation between frames as a single transformation, which overcomes the issue of gimbal lock. However, converting between axis-angle representations and rotation matrices is relatively slow [50], as is matrix orthogonalization without bias [51]. As both these calculations will be used very often in the following methods, I chose to use a different system for representing orientation.

1.2.2 Unit quaternions

Quaternions are a four-dimensional extension of complex numbers, written as

$$\vec{q} = w + x\hat{i} + y\hat{j} + z\hat{k} = \begin{bmatrix} q_w \\ q_x \\ q_y \\ q_z \end{bmatrix} \tag{1.4}$$

where $\hat{i}$, $\hat{j}$, and $\hat{k}$ are unit numbers perpendicular to the real numbers, much like $i$ in complex numbers [52].

As with rotation matrices, a unit quaternion encodes all the information needed to rotate between two frames [46]. One way to intuit the approximate transformation encoded by a given unit quaternion is to consider what each component represents in isolation. The unit quaternion $[1\ 0\ 0\ 0]^T$ represents no rotation whatsoever, while $[0\ 1\ 0\ 0]^T$, $[0\ 0\ 1\ 0]^T$, and $[0\ 0\ 0\ 1]^T$ represent 180° rotations around $x$, $y$, and $z$ respectively. Note that the transposes ($q^T$) of these quaternions are used here for compactness. Other unit quaternions can then be thought of as mixes of these four orientations weighted by their corresponding components.

Throughout this paper, I use the notation $^B_A\hat{q}$ to express the unit quaternion representation of the transformation from frame A to frame B. This is equivalent to the orientation of frame B in the reference frame A [46].

1.2.3 Axis-angle conversion

We can construct a quaternion from an axis-angle pair using the following definition:

$$^B_A\hat{q} = \begin{bmatrix} \cos(\theta/2) \\ \hat{n}\sin(\theta/2) \end{bmatrix} \tag{1.5}$$

where $\theta$ is the magnitude of the rotation between frame A and frame B and $\hat{n}$ is the three-dimensional unit vector parallel to the axis of rotation [46].
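Written in R, the conversion in (1.5) is a one-line function. The helper name and the (w, x, y, z) component order are my own conventions for illustration, not part of the toolset described later.

```r
# Unit quaternion (w, x, y, z) from an axis-angle pair, per (1.5).
# The axis is normalized so the result is a unit quaternion even if
# the input axis is not exactly unit length.
quat_from_axis_angle <- function(axis, angle) {
  n <- axis / sqrt(sum(axis^2))
  c(cos(angle / 2), n * sin(angle / 2))
}

quat_from_axis_angle(c(0, 0, 1), pi / 2)  # 90° rotation about the z axis
```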
This definition can also be used to construct the inverse of a quaternion, or in other words the transformation from frame B back to frame A [49]. We do this by negating the magnitude of the rotation in the axis-angle representation.

$$^B_A\hat{q}^{-1} = {}^A_B\hat{q} = \begin{bmatrix} \cos(-\theta/2) \\ \hat{n}\sin(-\theta/2) \end{bmatrix} = \begin{bmatrix} \cos(\theta/2) \\ -\hat{n}\sin(\theta/2) \end{bmatrix} \tag{1.6}$$

This allows us to efficiently compute the inverse of a unit quaternion simply by negating the final three components. Calculating the inverses of rotation matrices, for comparison, is considerably slower [47].

1.2.4 Applying quaternion rotations

Using our analogy of orientations as transformations, we can also use unit quaternions to rotate other objects. With rotation matrices, this rotation was accomplished using matrix multiplication [47]. With unit quaternions, we use quaternion multiplication, also known as the Hamilton product [49].

Quaternion multiplication is similar to matrix multiplication in that it is not commutative, i.e.,

$$q_a \times q_b \neq q_b \times q_a$$

where $\times$ is the Hamilton product [47, 49]. Also as with matrices, we apply quaternion transformations by multiplying specifically from the left [46, 47]. For example, if we start with a unit quaternion representing the transformation from frame A to frame B (i.e., $^B_A\hat{q}$), we can apply another transformation on the left to move from frame B to a new frame C (i.e., $^C_B\hat{q}$).

$$^C_B\hat{q} \times {}^B_A\hat{q} = {}^C_A\hat{q} \tag{1.7}$$

Reusing the analogy of orientations as transformations, this composition can be viewed either as a series of rotations from frame A to B to C, or as a change of reference frame for the orientation of C from frame B to frame A.

We can also apply quaternion rotations to vectors in three dimensions, but the process is a little more complicated as the Hamilton product is only defined for quaternions. To get around this, we first cast the three-dimensional vector as a quaternion with the real component set to zero [46]. For an arbitrary vector in three dimensions $\vec{v}$, this looks like so:

$$\begin{bmatrix} 0 \\ \vec{v} \end{bmatrix}$$

The procedure for rotating this vector is also different from the procedure for rotating a unit quaternion and is known as conjugation [46].

$$\begin{bmatrix} 0 \\ \vec{w} \end{bmatrix} = {}^B_A\hat{q} \times \begin{bmatrix} 0 \\ \vec{v} \end{bmatrix} \times {}^B_A\hat{q}^{-1} \tag{1.8}$$

where $\vec{v}$ is an arbitrary three-dimensional vector and $\vec{w}$ is the new rotated vector [46]. As before, there are a couple different ways of interpreting this transformation. We could choose to view $\vec{w}$ as $\vec{v}$ rotated by the transformation from frame A to frame B, or we could view $\vec{v}$ as a vector expressed in frame A and $\vec{w}$ as that same vector now expressed in a new coordinate system, B. Both of these interpretations will be useful to us later.
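Continuing the illustrative R sketch from Section 1.2.3, the inverse (1.6), the Hamilton product behind (1.7), and the conjugation in (1.8) can be written directly from their definitions. These helpers assume the same (w, x, y, z) component order as before.

```r
# Inverse of a unit quaternion, per (1.6): negate the vector part.
quat_inverse <- function(q) c(q[1], -q[2:4])

# Hamilton product of two quaternions in (w, x, y, z) order.
quat_multiply <- function(a, b) {
  c(a[1]*b[1] - a[2]*b[2] - a[3]*b[3] - a[4]*b[4],
    a[1]*b[2] + a[2]*b[1] + a[3]*b[4] - a[4]*b[3],
    a[1]*b[3] - a[2]*b[4] + a[3]*b[1] + a[4]*b[2],
    a[1]*b[4] + a[2]*b[3] - a[3]*b[2] + a[4]*b[1])
}

# Rotate a 3-vector by conjugation, per (1.8): cast v to a pure
# quaternion, multiply by q on the left and by q^-1 on the right.
quat_rotate <- function(q, v) {
  quat_multiply(quat_multiply(q, c(0, v)), quat_inverse(q))[2:4]
}

q <- quat_from_axis_angle(c(0, 0, 1), pi / 2)
quat_rotate(q, c(1, 0, 0))  # ~ (0, 1, 0): x axis rotated 90° about z
```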
1.2.5 Calculating angles across a hinge with quaternions

To demonstrate how unit quaternions can be used to solve problems, I show the procedure for calculating the angle across a simple hinge that can only rotate around a single axis.

We begin by naming three frames in our system: the left side of the hinge (L), the right side of the hinge (R), and some external frame of reference (E). Suppose we have been tracking the two sides of the hinge independently and know the orientation of each side of the hinge relative to the external frame. In other words, we know $^L_E\hat{q}$ and $^R_E\hat{q}$.

Our first step is to get rid of the external frame of reference E, because it has no bearing on the angle of the hinge. Notice that the two orientations can be interpreted as the transformations from frame E to L and from E to R. As they both have frame E in common, we can manipulate the two unit quaternions to find the transformation from L to E to R. To do this we need to invert the first quaternion using (1.6) to find the transformation from L to E and then apply the second quaternion using (1.7) to go from E to R.

$$^R_L\hat{q} = {}^R_E\hat{q} \times {}^L_E\hat{q}^{-1}$$

We now have a unit quaternion that represents the transformation from the left side of the hinge directly to the right side. As our simple hinge can only rotate around one axis, the angle of the hinge is simply the magnitude of the rotation described by the aforementioned transformation. Using the axis-angle representation of a unit quaternion (1.5), we can solve for this angle $\theta$ using just the first component of the unit quaternion.

$$^R_Lq_w = \cos(\theta/2)$$

$$\theta = 2\cos^{-1}\left({}^R_Lq_w\right)$$

where $^R_Lq_w$ is the first component of the unit quaternion $^R_L\hat{q}$, as introduced in (1.4).
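Using the helper functions sketched in the previous sections, this whole procedure is only a few lines of R; `q_left` and `q_right` stand in for estimates of $^L_E\hat{q}$ and $^R_E\hat{q}$.

```r
# Hinge angle from the orientations of the two sides: compose the
# R-from-E transformation with the inverse of the L-from-E one, then
# recover the rotation magnitude from the scalar component.
hinge_angle <- function(q_left, q_right) {
  q_rel <- quat_multiply(q_right, quat_inverse(q_left))
  # abs() picks the minimal-angle representative of the quaternion
  # double cover; pmin() guards against rounding slightly above 1.
  2 * acos(pmin(1, abs(q_rel[1])))  # angle in radians
}
```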
1.2.6 Summary

Unit quaternions are a robust and computationally efficient way of representing and manipulating orientation. By modelling an animal as a series of rigid bodies connected by joints, quaternions can also be used to compactly describe kinematics. This, along with recursive Bayesian estimation, provides us with all the mathematical tools we need to implement inertial motion capture using body-mounted data loggers.

Chapter 2

Tool development

2.1 Introduction

Current studies of animal kinematics commonly rely on data from arrays of high-speed cameras to reconstruct the position of marked points on the body using Direct Linear Transformation (DLT) [53–55]. Although these methods have been shown to be quite reliable [9], they require direct line of sight of markers by multiple cameras simultaneously to reliably reconstruct position. This is problematic for markers placed on the wings of birds as they are prone to occlusion by the wings, body, and feathers over the course of a wing stroke. Although this problem can be mitigated to some degree by using greater numbers of cameras and perspectives, markers can still be completely hidden from view during wing flexion or if the wings come together at the top or bottom of a wing stroke. Using many inward-facing cameras to improve marker tracking also requires considerable space and equipment to record wing kinematics in a relatively small flight volume, which can limit the diversity of species and behaviours that can reliably be recorded.

The other key constraint that makes DLT inviable for high-throughput recording of bird wing kinematics is the considerable time required to digitize the resulting footage to ensure that all markers are appropriately labelled in every frame of every perspective [5, 11, 13, 14]. Newer implementations such as DeepLabCut [8] and the latest iteration of DLTdv [9], as well as proprietary systems such as OptiTrack, make significant strides in automating this process for most use cases. However, due to the inconsistency of marker capture across frames as a result of occlusion, considerable effort is still required to check and clean the results by hand. Methods that rely on shape-based feature recognition, such as those used in some studies of insect flight [56], improve upon these shortcomings but cannot reliably deal with the degree of wing morphing seen in bird flight [10] or even some insect taxa [57]. Similar photogrammetric [58] and structured light [59] approaches that do not attempt feature recognition can be used to efficiently record the body profile of flying birds, but isolating the wings from the body and extracting kinematic data again requires manual effort.

An alternative to these optical approaches would be to use body-mounted inertial sensors to track the orientations of the major skeletal components of the wing over time. Although inertial motion capture has recently gained considerable popularity in human exercise science [60–62], there has been considerably less interest in other fields. Further, the few studies that apply these methods to other systems have been restricted to large mammals [63, 64], perhaps due to size limitations. However, the growing popularity of microelectronics and small-batch circuit assembly services makes it both feasible and cost-effective to design and manufacture complete inertial data logging systems small enough to be mounted on birds.

Developing customized hardware for inertial motion capture comes with its own challenges when it comes to estimating orientation from the raw sensor data. Most human applications of inertial motion capture depend on proprietary sensor fusion algorithms for orientation estimation that come as part of commercial inertial motion capture systems [60, 61]. In contrast, most open-source code for estimating orientation using inertial data optimizes for computational speed over accuracy [42, 43]. This is likely driven by use cases where orientation must be estimated in real time but where precision is less critical, such as in drone stabilization and for user interaction with mobile applications. Instead, other recursive Bayesian approaches exist that better optimize for accuracy and account for uncertainty in model parameters [38, 65].
However, this comes at the cost of being far slower to compute, which is likely why there is currently no openly available code to implement orientation estimation using these methods.

To develop a high-throughput, portable, and minimally invasive means of recording wing kinematics, I designed and programmed body-mounted sensors to collect inertial data from the body and wings of birds in flight and implemented a Monte Carlo recursive Bayesian smoother with parameter learning in R to estimate kinematics from the raw inertial data.

2.2 Hardware

Inertial motion capture models the subject as a series of rigid bodies connected by joints that can freely flex and extend over time [39]. Inertial data collected from each of the rigid bodies of interest are used to estimate the orientation of each body segment in space at every time step. Joint dynamics can then be inferred from the relative orientation of adjacent body segments and are sufficient to describe body kinematics in this model framework.

To collect the data needed for this approach, the inertial data loggers must be capable of taking inertial measurements at each rigid body; storing the collected data for post hoc orientation estimation; timing and coordinating sampling across components; and providing and regulating power for untethered operation. The inertial data logger design is conceptually split into four distinct subsystems that accomplish each of these tasks: the inertial sensor array, flash memory unit, microcontroller, and power system.

Physically, the logger is split into a main board and multiple external sensor boards that can be wired to the main board as needed. The main board provides all the components to log data from a single rigid body, provide basic I/O, and interface with a computer. As a result, the main board is relatively large, with a footprint of 21 × 23 mm and a mass of 1.3 g without a battery. It is intended to be mounted on and take measurements from the torso of the bird. The schematic for the main board is shown in Figure 2.1 and the two-sided board layout is shown in Figure 2.2. Schematics and board layouts were made in Eagle (Autodesk Inc).

Figure 2.1: Schematic of the main board of the inertial data logger.

The external sensor boards consist only of a sensor and two capacitors for power filtering (see Section 2.2.4 for details). The main circuit supplies power and control signals to the external sensor boards, which are used to take inertial measurements from the relevant segments of the wing. In addition to providing flexibility in the number of sensors to log from and the wire lengths between sensors, this modular design also allows for minimizing the mass that needs to be mounted on the wing. Each external sensor board is 0.1 g on an 8 × 9 mm footprint. The schematic for the external sensor board is shown in Figure 2.3 and the two-sided board layout is shown in Figure 2.4. The finalized designs of both boards were printed and assembled in a small-batch production run by Seeed Fusion. Including parts, assembly, and shipping, the main boards work out to $60 CAD per unit for 5 boards, and the external sensor boards work out to just under $13 CAD per unit for 15 boards.

The data loggers are capable of reliably logging data at 1.8 kHz, but this data rate must be split across the number of sensors that are collecting data. This works out to a maximum sampling rate of 900 Hz, 600 Hz, or 450 Hz when logging from 2, 3, or 4 sensors, respectively, which is comparable to many high-speed video systems.
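These per-sensor rates follow directly from splitting the aggregate rate evenly across the array:

```r
# Aggregate logging rate divided across active sensors
# (values quoted in the text above):
1800 / (2:4)  # 900, 600, 450 Hz per sensor
```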
Figure 2.2: Two-sided board layout of the main board of the inertial data logger. The top layer is shown on the left and the bottom layer is shown on the right. Copper traces on the top surface are shown in red, those on the bottom are shown in blue, and copper that extends through the middle layer is shown in teal. Yellow lines show board boundaries. A 1 mm scale is shown in black.

Figure 2.3: Schematic of the external sensor board of the inertial data logger.

Figure 2.4: Two-sided board layout of the external sensor board of the inertial data logger. The top layer is shown on the left and the bottom layer is shown on the right. Copper traces on the top surface are shown in red, those on the bottom are shown in blue, and copper that extends through the middle layer is shown in teal. Yellow lines show board boundaries. A 1 mm scale is shown in black.

2.2.1 Inertial sensor array

The inertial sensor array is a collection of magnetic, angular rate, and gravity (MARG) sensors that are all connected to a common set of communication lines. While local magnetism is not an inertial measurement, MARG sensors are used over sensors that only measure angular velocity and acceleration as acceleration alone is not enough to correct for drift in gyroscope measurements along all axes. See Section 2.4 for more details.

I chose to use the MPU-9250 MARG sensors from Invensense in the sensor array as the other prominent MARG sensors on the market did not have a physical gyroscope (mCube MC6470), had lower maximum communication speeds (STMicroelectronics LSM9DS1TR), had irregular packages that made it difficult to test during the design phase (Bosch BMX055), or were simply too large for this application (Xsens MTi-1). Another option would be to spread the gyroscope, accelerometer, and magnetometer functionality of each sensor across two or more chips. Although this greatly increases the options available to include specialized sensors with significantly better performance, I chose to use an all-in-one chip to minimize the hardware size and limit the number of communication transactions needed per sample.

It should be noted that the MPU-9250 chips are reaching the end of their production cycle and Invensense has released the ICM-20948 chip as its replacement. However, as these sensors are relatively new and not widely adopted, they were not stocked by the circuit assembly service I used (Seeed Fusion). Modifying the design to use the newer sensors would accordingly have added a considerable lead time during production. Future developments should consider switching to the newer chips, especially for their tolerance of lower running voltages, which would improve power efficiency.

I used the Serial Peripheral Interface (SPI) protocol to communicate with the sensors in the array. Although this does require an extra wire for every additional sensor in the array, unlike the Inter-Integrated Circuit (I2C) protocol, it is significantly faster than I2C and does not have issues addressing multiple identical chips on the same set of communication lines.

2.2.2 Flash memory unit

As power loss is not always avoidable for systems running on battery power, a non-volatile memory system was needed to store the data collected from the sensors even in the event of power loss. The system also needs to be electronically addressed, as physically addressed systems are too large for this application.
Of the systems meeting these requirements, Erasable Programmable Read-Only Memory (EPROM) and Ferroelectric Random Access Memory (F-RAM) are not available with sufficient capacity, which only leaves flash memory.

I chose to use the W25Q256FV, a 256 Mb NOR flash memory chip from Winbond, for this purpose. These chips are significantly smaller than a microSD card and slot, and data can be written directly to memory without a file system layer to improve write speeds and space efficiency. However, as the memory system is not removable, the data logger must be unmounted from the bird and connected to a computer to offload data between trials.

The 256 Mb chips can store 1.8 million data samples from the MPU-9250 sensors after accounting for the space lost to block segmentation. For an array of 4 sensors sampling at 400 Hz, this corresponds to just under 20 minutes of data collection. Recently released NAND flash memory chips from Winbond provide a drop-in replacement that would increase the storage capacity of the current data loggers fourfold.

2.2.3 Microcontroller

I chose to use the ATtiny816 AVR microcontroller from Microchip Technology to control the data logger. While there are many viable options for small, low-power microcontrollers, the AVR architecture has the most community support and tools for prototyping. The ATtiny 1-series are the AVR chips available in the smallest footprints and with the lowest power consumption. Despite this, they can run at reasonably high clock speeds (20 MHz) and have sufficient I/O for this design.

I developed the firmware for the data loggers in C, which splits operation into three modes: a stand-by mode that is entered when the loggers are connected to power; a data collection mode that polls the sensor array for data at regular intervals and writes the data to memory; and a data output mode that reads the data from memory and outputs it over a Transistor-Transistor Logic (TTL) interface. A single latching hall effect switch is used to choose between data collection and data output modes, and disconnecting power is used to end data collection or output. A single LED is similarly used to indicate the current status and prompt actions during body axes alignment as described in Section 2.3.

Interfacing with computers is provided by a two-pin JST SH port, which was chosen for its small size. One pin connects to the reset pin of the microcontroller and allows for modification of the microcontroller firmware and fuses using the Unified Program and Debug Interface (UPDI). The other pin is the transmission line of a TTL interface, which is used to output data. This pin can be directly tied to the receiving pin of a USB interface and interpreted by a computer, given a common ground. During data output, the microcontroller sends the Unicode representation of each byte in hexadecimal, separated by the Unicode character for a new line at the end of every data sample, or 18 bytes. The computer then just needs to read in the stream and write it directly to a file to save the data from a given trial. An R script is then used to convert the raw data into natural units and format the data into an appropriately labelled CSV file. The script currently requires the user to manually indicate the number of sensors in the array and the sampling frequency.
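That conversion script is not reproduced here, but a minimal sketch of the parsing step might look like the following. The 18-byte sample layout (nine big-endian 16-bit signed values), the column names, and the assumption that samples simply rotate through the array in a fixed order are all illustrative guesses, not details taken from the actual script.

```r
# Hypothetical reader for the hex log described above: one line per
# 18-byte sample, written as 36 hexadecimal characters.
read_imu_log <- function(path, n_sensors, rate_hz) {
  lines <- readLines(path)
  vals <- t(vapply(lines, function(l) {
    bytes <- strtoi(substring(l, seq(1, 35, 2), seq(2, 36, 2)), base = 16L)
    ints  <- bytes[seq(1, 17, 2)] * 256L + bytes[seq(2, 18, 2)]
    ifelse(ints > 32767L, ints - 65536L, ints)  # two's-complement int16
  }, integer(9)))
  df <- as.data.frame(vals)
  names(df) <- c("gyro_x", "gyro_y", "gyro_z",
                 "accel_x", "accel_y", "accel_z",
                 "mag_x", "mag_y", "mag_z")
  # Sensor id and timestamp follow from the sample index if samples
  # rotate through the array in a fixed order:
  n <- nrow(df)
  df$sensor <- rep_len(seq_len(n_sensors), n)
  df$time_s <- (ceiling(seq_len(n) / n_sensors) - 1) / rate_hz
  df
}
```

Scaling the integer counts into natural units would then only require multiplying each column by the full-scale range configured on the sensors.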
2.2.4 Power system

The components used in the rest of the design can all run at 2.5 to 3.3 V. However, as small batteries tend to have considerable fall-off in output voltage over a discharge cycle, a more reliable approach is to use a battery with a slightly higher nominal voltage together with a voltage regulator. Although linear regulators require fewer accessory components, I chose to use a switching regulator for its considerable increase in efficiency. Buck-boost switching regulators also have the added benefit of being able to boost voltage back up if the battery dips below the target voltage. I chose to use the TPS63030 from Texas Instruments primarily because it was stocked on-site by Seeed Fusion, but any low-power switching regulator should produce comparable results. Future designs that opt to use the newer sensors from Invensense would also have the option of running the entire system at 1.8 V, which should further improve power efficiency.

For the power source, I chose to use the 70 mAh, 3.7 V lithium polymer batteries sold by TinyCircuits for testing. Although these batteries are quite large at 1.9 g, they are cheap, easy to recharge, and can power the device for 3.5 hours between charges, which is useful for testing. For actual use, smaller batteries are available that could greatly decrease the size of the complete data logger. The 10 mAh, 3.1 V MS920SE lithium batteries from Seiko Instruments, for example, have a mass of only 0.5 g, but require additional paperwork to be imported into Canada due to fire hazards during transport.

Finally, a series of passive components are placed around the circuit for power conditioning. Two 10 µF capacitors are placed in parallel across the output and ground of the switching regulator as recommended by Texas Instruments and are used to filter the ripple currents produced by switching and system-wide low-frequency noise. Each active component also has a 100 nF bypass capacitor placed across its power and ground pins to filter out high-frequency noise. Somewhat surprisingly, reading and writing to flash memory accounts for roughly 80% of the expected peak current draw during operation. I chose to place a large 470 µF capacitor near the flash memory to unambiguously isolate these loads from the rest of the power supply, as is commonly done with motors and other components with large current draws. However, future designs could likely scale this back safely as these current draw peaks are on the order of 20 mA, which in perspective are not all that large.
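As a rough cross-check of these figures, the quoted test-battery life implies an average draw of about 20 mA, assuming the full rated capacity is delivered at a roughly steady rate:

```r
# Average current implied by the test battery figures quoted above:
70 / 3.5  # mAh / h = ~20 mA average draw
```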
2.2.5 Mounting

The inertial sensors must be rigidly mounted to the body sections they are tracking to accurately estimate orientation and infer kinematics further down the line. To mount the main data logger board to the torso of the birds, I used a custom-designed elastic harness. I first tied the main board down to a piece of card stock using the mounting holes drilled during manufacturing (see Figure 2.2). The card stock is meant to sit against the back of the bird, with relief cuts along the sides to avoid affecting the range of motion of the wings. I wove two 1/4" flat elastic loops through holes on the fore and aft ends of the card stock to act as head and body loops for the harness. The loops are opened prior to mounting and are closed and fitted to the birds using small double D-ring fasteners. Finally, another elastic is used to connect the two harness loops along the ventral surface of the bird, with another double D-ring fastener used for fitting. To remove the harness, the three fasteners are opened in reverse order and the harness can then be lifted off the back of the bird.

I mounted the external sensor boards to the ventral surface of the wings, where the feathers could be moved out of the way to expose the skin covering the manus and long bones of the wing. A single drop of cyanoacrylate was used to mount the boards, and they were removed with gentle scraping after each trial. In the few cases where the cyanoacrylate made contact with nearby down feathers, the feathers were either plucked or trimmed to minimize stress for the birds. The wires connecting the main board to the external sensor boards were braided together and sized to allow the complete natural range of motion of the wings without risking entanglement.

2.3 Body axes alignment

The raw data from the inertial data loggers represent the motion of the sensor relative to an inertial frame of reference (often called the world frame), measured along the internal sensing axes of the chip, or the sensor frame. This is problematic, as even slight variation in the way the sensor array is mounted will change the relationship between the individual sensor frames and the rigid bodies we are trying to track, or the body frame. Further, there is no way of relating the orientations estimated from one sensor to another without information about the initial arrangement of the sensors and rigid bodies.

To overcome these two issues while minimizing time spent handling the birds, I chose to adapt the two-pose body axes alignment used in some human inertial motion capture studies [60, 66]. After mounting the sensors, I held the wings closed against the body and collected 100 samples from the sensor array as the bird was held in each of two poses. I held the back of the bird upright in the first pose to find the anteroposterior axis in each of the individual sensor frames using the direction of gravity, as measured by the accelerometers. I then held the bird level to find the dorsoventral axis in a similar manner, again for each sensor. The mediolateral axis expressed in the sensor frame was then found by taking the cross product of these two vectors. I then redefined the dorsoventral axis using the cross product of the mediolateral and anteroposterior axes to ensure that all three are mutually perpendicular. After normalization, these three vectors form a basis for the body frame in the sensor frame for each sensor in the array. I used these bases to rotate all of the raw data from the individual sensor frames to their corresponding body frames prior to estimating orientation. This should correct for any variation across trials in how the sensors are mounted. By holding the wing in a known posture during the alignment, we also know the initial arrangement of the rigid bodies, which is important for measuring joint angles after the orientations of individual sensors have been estimated.
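A minimal sketch of this basis construction in R, assuming `pose1` and `pose2` are matrices of accelerometer samples (one row per sample) taken while the bird was held upright and level, respectively; the names are illustrative only.

    body_basis <- function(pose1, pose2) {
      normalize <- function(v) v / sqrt(sum(v^2))
      cross <- function(a, b) c(a[2]*b[3] - a[3]*b[2],
                                a[3]*b[1] - a[1]*b[3],
                                a[1]*b[2] - a[2]*b[1])
      ap <- normalize(colMeans(pose1))   # anteroposterior axis from gravity
      dv <- normalize(colMeans(pose2))   # provisional dorsoventral axis
      ml <- normalize(cross(ap, dv))     # mediolateral axis
      dv <- cross(ml, ap)                # redefined to ensure orthogonality
      rbind(ap, ml, dv)                  # change-of-basis matrix, sensor to body
    }

    # Raw samples (rows) can then be rotated into the body frame:
    #   body_data <- sensor_data %*% t(body_basis(pose1, pose2))

The actual sign conventions and ordering of the basis vectors would depend on how the poses are defined; this sketch only illustrates the cross-product construction.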
2.4 Orientation estimation

Orientation estimation using inertial measurements primarily relies on angular rate data from gyroscopes. As orientation is analogous to angular position, a first-order approximation is simply to integrate the angular velocity of the rigid body over time. Given the initial orientation, perfect gyroscope measurements, and an infinite sampling rate, the orientation of a rigid body can be calculated exactly by this method. In practice, measurement error and information lost to discretization in time lead to the accumulation of error in orientation estimates over time.

To correct for this accumulating drift, we need a way of assessing our orientation estimate at a given time without reference to previous estimates. An intuitive option is to use the relative orientation of locally-fixed reference vectors, like gravity and the Earth's magnetic field, to assess a given estimate of the orientation of the rigid body. Put another way, if gravity is pulling you up, there is a good chance you are upside down.

The challenge then is to make the most efficient use of these two distinct methods of inferring orientation to produce a single best estimate at every time point. A useful observation is that this system is well approximated by a Markov chain with continuous states. The state (i.e., orientation) at any point in time depends only on the previous state and the angular velocity during the transition between states. Further, we can make observations (i.e., orientation of reference vectors) that depend only on the current state. This congruence allows us to apply a recursive Bayesian approach to estimating the orientation using the inertial sensor data. See Section 1.1 for an introduction to Bayesian inference with Markov models.

To facilitate calculations on orientation, I use the unit quaternion representation of orientations throughout this project. This representation does not suffer from gimbal lock, unlike Euler angles [45], and is slightly more computationally efficient and numerically stable in recursion than rotation matrices [67]. Euler angles are, however, used in visualizations as they are more intuitive to most readers. Please see Section 1.2 for a brief introduction to working with unit quaternions as orientations.

2.4.1 Algorithm choice

I chose to use a Monte Carlo smoother with parameter estimation [37, 38, 65] to estimate orientation from the inertial data. A Monte Carlo approach is used over more compact state descriptions that depend on model linearity (e.g., Kalman filters [27]), near-linearity (e.g., extended Kalman filters [30]), or Gaussian-distributed probabilities (e.g., unscented Kalman filters [68]), as the observation incorporation is strongly non-linear. It should be noted, however, that the Bingham and matrix von Mises-Fisher distributions provide analogs to multivariate normal distributions on n-dimensional spheres. These can be used to adapt non-Monte Carlo recursive Bayesian algorithms to linear models of orientation using unit quaternions, as this space is a 4-dimensional unit sphere. The rationale for using a non-linear model here is explored in more detail below, in Section 2.4.2.

I chose to use a smoother in favour of a filter, as estimation accuracy is more important than computational efficiency for this application. Please note that in the context of recursive Bayesian inference, a filter is an algorithm that estimates the probability density distribution of a time-varying state, x_t, given all observations up to that time point, y_{1:t}. In contrast, a smoother is an algorithm that does the same using all available observations, including those taken after time t. See Section 1.1.3 for more information. This is especially important here as the measurement uncertainty of the observations will not be constant over time.
During periods with relatively unreliable observations, filtering algorithms can only make use of reliable observations from the past to minimize error in the state estimate. Smoothers, on the other hand, can make use of information from both the past and the future to inform estimates of the state during these periods. This of course requires the estimation to be done after the fact, which limits the utility of smoothers in many other applications. Recursively incorporating both past and future information into the state estimate at every time point also adds considerable computational complexity. In the case of Monte Carlo smoothers and filters, Yang et al. [38] found that the computation time grew quadratically with the length of the input for smoothing, as opposed to linearly for filtering.

I also chose to use an algorithm that incorporates parameter estimation [37], as a number of the model parameters could not feasibly be characterized for every sensor in every inertial logger.

2.4.2 Model definition

As described in Section 1.1, recursive Bayesian inference requires modelling two processes: state transitions and the relationship between observations and the state. Smoothing also requires an inverse state transition function, but this is generally straightforward to construct from the forward transition function. These processes are necessarily stochastic and are characterized by measurement and transition errors that in turn must be modelled.

Modelling error

I chose to model the measurement error for all three types of sensors as a linear transformation plus Gaussian noise. Using the accelerometer data as an example, we get:

\[ v = (\tilde{v} + \tau_a)\, s_a + \epsilon_a, \qquad \epsilon_a \sim \mathcal{N}_3(0, \sigma_a I_3) \tag{2.1} \]

where v is the true acceleration experienced by the rigid body, \tilde{v} is the raw measurement, \tau_a is the systemic bias or trim along each axis, s_a is the sensitivity of the sensor along each axis, \mathcal{N}_3(\alpha, \beta) is the normal distribution in R^3 with mean \alpha and variance-covariance \beta, \sigma_a is the variance of the measurement noise, and I_3 is the identity matrix in R^3. I chose to model the Gaussian noise as independent and with equal variation along each axis, primarily to limit the complexity of the parameters that would be estimated by the algorithm.

The trim vector for the gyroscope can easily be estimated and corrected for by averaging many samples taken when the sensor is stationary. This allows us to simplify this model as

\[ g = \tilde{g}\, s_g + \epsilon_g, \qquad \epsilon_g \sim \mathcal{N}_3(0, \sigma_g I_3) \]

The transition error, on the other hand, depends only on the random variation in the angular rate measurements as well as error due to discretization in time. Because I am modelling both of these processes as normally distributed with a mean of zero, and because the gyroscope noise will only ever be used when transitioning states, I chose to consolidate their respective parameters into a single parameter \sigma_T that characterizes the overall variation of the transition error, modelled as a multivariate normal distribution.

This gives us eight model parameters with a total dimensionality of 18:

\[ \phi = \begin{bmatrix} \tau_a & \tau_m & s_a & s_m & s_g & \sigma_a & \sigma_m & \sigma_T \end{bmatrix}^{T} \]
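As a small illustration of this error model, the sketch below generates and corrects a raw triaxial reading in R under (2.1); the per-axis trim `tau`, sensitivity `s`, and noise level `sigma` are illustrative stand-ins for the estimated parameters.

    # Simulate a raw reading from a known true value by inverting (2.1)
    simulate_raw <- function(v_true, tau, s, sigma) {
      (v_true - rnorm(3, 0, sigma)) / s - tau
    }

    # Apply the deterministic part of the correction to a raw reading
    correct_raw <- function(v_raw, tau, s) {
      (v_raw + tau) * s  # the Gaussian noise itself cannot be inverted
    }

Only the linear portion of the model can be inverted when correcting data; the noise term is instead propagated through the particle weighting described below.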
State transition

The state transition function in a Monte Carlo Markov chain should map each particle in the state distribution at a given time step to a new particle at the next time step. As a particle in this model is a point estimate of orientation, any transition between particles can be expressed as a single rotation. Using the unit quaternion representation of orientations and (1.7), we can write this transition as

\[ {}^{B}_{W}\hat{q}^{(i)}_{t+1} = \Delta{}^{B}_{W}\hat{q}^{(i)}_{t} \times {}^{B}_{W}\hat{q}^{(i)}_{t} \tag{2.2} \]

where {}^{B}_{W}\hat{q} is the orientation of the body (B) in the world frame (W) represented as a unit quaternion, \Delta{}^{B}_{W}\hat{q} is the change in this orientation over a time step, and i is the particle index. See Section 1.2.4 for a primer on rotating orientations.

We can then use the axis-angle representation (1.5) to decompose the change in orientation over the time step:

\[ {}^{B}_{W}\hat{q}^{(i)}_{t+1} = \begin{bmatrix} \cos(\theta_t/2) \\ {}^{W}\hat{n}_t \sin(\theta_t/2) \end{bmatrix} \times {}^{B}_{W}\hat{q}^{(i)}_{t} = \begin{bmatrix} \cos(\dot{\theta}_t \Delta t/2) \\ {}^{W}\hat{n}_t \sin(\dot{\theta}_t \Delta t/2) \end{bmatrix} \times {}^{B}_{W}\hat{q}^{(i)}_{t} \]

where \theta is the magnitude of the rotation, \dot{\theta} is the scalar angular rate, and {}^{W}\hat{n} is the axis of the rotation in the world frame.

The scalar angular rate is the magnitude of the corrected gyroscope reading:

\[ \dot{\theta}_t = \lVert g_t \rVert = \lVert \tilde{g}_t s_g \rVert \]

The axis of the rotation can similarly be calculated by normalization:

\[ {}^{B}\hat{n} = \hat{g}_t = \frac{g_t}{\lVert g_t \rVert} = \frac{\tilde{g}_t s_g}{\lVert \tilde{g}_t s_g \rVert} \]

In practice, this also requires a check to ensure that the magnitude of the corrected gyroscope reading is not zero. As the gyroscope data are collected in the body frame, the axis of rotation calculated from them must be rotated into the world frame using the current orientation of the body and (1.8):

\[ \begin{bmatrix} 0 \\ {}^{W}\hat{n} \end{bmatrix} = {}^{B}_{W}\hat{q}^{(i)}_{t} \times \begin{bmatrix} 0 \\ {}^{B}\hat{n} \end{bmatrix} \times ({}^{B}_{W}\hat{q}^{(i)}_{t})^{-1} \]

where \hat{q}^{-1} is the inverse of the unit quaternion \hat{q}. Note that the procedure for applying quaternion rotations to vectors in R^3 differs from the procedure for applying quaternion rotations to other quaternions; see Section 1.2.4 for more details.

Substituting these back into (2.2) gives us the deterministic transition function:

\[ {}^{B}_{W}\hat{q}^{(i)}_{t+1} = \left( \begin{bmatrix} \cos\!\left(\frac{\lVert \tilde{g}_t s_g \rVert}{2}\Delta t\right) \\ 0_3 \end{bmatrix} + \sin\!\left(\frac{\lVert \tilde{g}_t s_g \rVert}{2}\Delta t\right) \, {}^{B}_{W}\hat{q}^{(i)}_{t} \times \begin{bmatrix} 0 \\ \frac{\tilde{g}_t s_g}{\lVert \tilde{g}_t s_g \rVert} \end{bmatrix} \times ({}^{B}_{W}\hat{q}^{(i)}_{t})^{-1} \right) \times {}^{B}_{W}\hat{q}^{(i)}_{t} \]

where 0_3 is the zero vector in R^3. I then sample from a normal distribution centered at the deterministic solution to incorporate transition error:

\[ {}^{B}_{W}q^{(i)}_{t+1} \sim \mathcal{N}_4({}^{B}_{W}\hat{q}^{(i)}_{t+1}, \sigma_T I_4) \]

The resulting quaternion must finally be normalized to ensure that it is a unit quaternion.
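The sketch below implements one particle transition step in R, assuming `q` is the current particle as a unit quaternion c(w, x, y, z), `g` the corrected gyroscope reading in rad/s, `dt` the sampling interval, and `sigma_T` the transition noise parameter; the names are illustrative.

    qmult <- function(a, b) {  # Hamilton product of quaternions c(w, x, y, z)
      c(a[1]*b[1] - sum(a[2:4] * b[2:4]),
        a[1]*b[2:4] + b[1]*a[2:4] +
          c(a[3]*b[4] - a[4]*b[3], a[4]*b[2] - a[2]*b[4], a[2]*b[3] - a[3]*b[2]))
    }
    qinv <- function(q) c(q[1], -q[2:4])  # inverse of a unit quaternion

    transition <- function(q, g, dt, sigma_T) {
      rate <- sqrt(sum(g^2))
      if (rate > 0) {
        axis_b <- c(0, g / rate)                    # axis in the body frame
        axis_w <- qmult(qmult(q, axis_b), qinv(q))  # rotated to the world frame
        dq <- c(cos(rate * dt / 2), 0, 0, 0) +
              sin(rate * dt / 2) * axis_w           # change over the time step
        q <- qmult(dq, q)
      }
      q <- q + rnorm(4, 0, sigma_T)                 # transition noise
      q / sqrt(sum(q^2))                            # renormalize
    }

Note that axis_w is a pure quaternion (zero scalar part), so adding the cosine term to its scaled form assembles the axis-angle quaternion directly.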
Observation incorporation

Observation incorporation is accomplished by finding the joint probability of the state given all past information (i.e., the results of the previous state transition) and the state given the current observation:

\[ p(x_t \mid y_{1:t}) = p(x_t \mid y_{1:t-1}, y_t) = p(x_t \mid x_{t-1}, y_t) \qquad (t = 1, \ldots, T) \tag{2.3} \]

using the assumptions of a Markov model applied recursively.

Joint probabilities of distributions approximated by sets of particles are calculated using weighted bootstrapping. The challenge here lies in appropriately assigning weights to the particles given the observations from the corresponding time point. The general procedure for this weighting assignment using the measurement of reference vectors is given below, followed by considerations specific to the physical constraints of accelerometers and magnetometers individually.

Orienting with reference vectors — The accelerometer and magnetometer data both provide indirect measurements of orientation through the direct measurement of vectors that are locally uniform and static in the world frame, i.e., gravity and the Earth's magnetic field, respectively. The key idea is that a rotation that describes the transformation of a sensor from one orientation to another in the world frame also describes the inverse of this transformation for the reference vector as measured by the sensor. Put another way, turning yourself (the sensor) 90° clockwise is equivalent to turning the room (the reference vector) 90° counterclockwise, from your (the sensor's) perspective.

Orientation with reference vectors in this way requires a measurement of the reference in both the current and the initial orientation. Data samples collected in the second posture of body axes alignment are averaged and normalized after correction for scale and trim to estimate {}^{B}\hat{v}_0 and {}^{B}\hat{m}_0, the gravity and magnetic field vectors measured in the initial orientation. See Section 2.3 for details on the body axes alignment procedure.

Observations of the reference vectors are not used to directly estimate orientation, as any valid observation could be produced by an infinite set of orientations. This set is spanned by rotating any one valid orientation around the reference vector. Equivalently, you can tell that you are upright by the direction of gravity, but not which cardinal direction you are facing. The inverse of this transformation, however, is useful, as the expected measurement of the reference vector is unique for a given orientation and initial measurement. If you believe that you are upright and facing west, you expect gravity to pull you down.
The generalized particle weight for a sensor modality y, with measurement error modelled as in (2.1), can then be written as

\[ w^{(i)}_{t,y} = \mathcal{N}_3\!\left( {}^{B}\bar{y}^{(i)}_t \,\middle|\, {}^{B}\hat{y}_t,\; \sigma_y \kappa_{y,t} I_3 \right) = \mathcal{N}_3\!\left( {}^{B}\bar{y}^{(i)}_t \,\middle|\, \frac{({}^{B}\tilde{y}_t - \tau_y)\, s_y}{\lVert ({}^{B}\tilde{y}_t - \tau_y)\, s_y \rVert},\; \sigma_y \kappa_{y,t} I_3 \right) \]

where w^{(i)} is the weight of particle i, \mathcal{N}(\gamma \mid \alpha, \beta) is the density of the multivariate normal distribution \mathcal{N}(\alpha, \beta) evaluated at \gamma, \bar{y}^{(i)} is the expected normalized sensor reading given particle i, \hat{y} is the actual corrected and normalized sensor reading, \tilde{y} is the sensor reading prior to correction, and \kappa_{y,t} is a time-dependent correction to the measurement noise, \sigma_y, that depends on the physical properties of the sensor. The expected sensor reading here is calculated by rotation of the initial reading, as described above:

\[ \begin{bmatrix} 0 \\ {}^{B}\bar{y}^{(i)}_t \end{bmatrix} = ({}^{B}_{W}\hat{q}^{(i)}_{t})^{-1} \times \begin{bmatrix} 0 \\ {}^{B}\hat{y}_0 \end{bmatrix} \times {}^{B}_{W}\hat{q}^{(i)}_{t} \]

Accelerometer correction — Accelerometer data are challenging to use in isolation, as gravity and the motion of the body in the world frame cannot reliably be distinguished. I approximate this motion-dependent increase in measurement uncertainty as being proportional to the relative change in the magnitude of the accelerometer reading from 1G. That is, for every n-fold increase or decrease in the magnitude of the measured acceleration from the stationary case, I scale the accelerometer measurement error by n:

\[ \kappa_{a,t} = \left| \ln \lVert {}^{B}v_t \rVert \right| + 1 \]

This correction evaluates to 1 when the magnitude of the accelerometer reading is 1G and grows without bound as the magnitude of the accelerometer reading approaches 0 or ∞.

The correction notably does not account for non-zero linear accelerations that sum with gravity to produce readings with magnitude 1G, such as an acceleration of 2G downwards. As these situations are expected to be transient and observation incorporation works by recursion over time, the effect of these cases is expected to be minimal.

Magnetometer correction — The primary concern with magnetometer readings is that they update slowly relative to gyroscopes and accelerometers. While this is likely not a concern at angular rates on the scale of human limb kinematics, bird wing kinematics are considerably faster. I chose to approximate the rate-dependent increase in measurement error as

\[ \kappa_{m,t} = e^{\lVert {}^{B}g_t \rVert} \]

This correction evaluates to 1 when there is no angular motion and increases without bound as the magnitude of the angular velocity tends to ∞.

The weights calculated for each particle given each sensor modality are then combined by summation to produce the final weight for each particle. These weights are then normalized and used to resample the prior distribution of the state produced by the previous state transition, yielding the posterior filtering distribution for the state.
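The sketch below illustrates the accelerometer branch of this weighting in R, reusing the qmult() and qinv() helpers from the transition sketch above; it assumes `particles` is an N x 4 matrix of unit quaternions c(w, x, y, z), `a_raw` the corrected accelerometer reading in G, `v0` the normalized gravity reference from the alignment poses, and `sigma_a` the accelerometer noise parameter. The names are illustrative.

    accel_weights <- function(particles, a_raw, v0, sigma_a) {
      kappa <- abs(log(sqrt(sum(a_raw^2)))) + 1   # motion-dependent correction
      a_hat <- a_raw / sqrt(sum(a_raw^2))         # normalized observation
      w <- apply(particles, 1, function(q) {
        # Expected gravity reading under this particle: rotate the initial reference
        expected <- qmult(qmult(qinv(q), c(0, v0)), q)[2:4]
        prod(dnorm(a_hat, mean = expected, sd = sqrt(sigma_a * kappa)))
      })
      w / sum(w)                                  # normalized bootstrap weights
    }

A magnetometer branch would have the same shape, with its own reference vector and rate-dependent correction, and the two sets of weights combined by summation before resampling.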
Inverse transition function

Many smoothing algorithms, including the one used here [65], use a forwards filtering, backwards resampling approach. In these algorithms, the filtered distribution is first calculated recursively forwards in time before iterating backwards over time to calculate the joint probability of the state given all observations. Using the equation for the filtered distribution (2.3), this can be expressed as

\[ p(x_t \mid y_{1:T}) = p(x_t \mid y_{1:t}, y_{t+1:T}) = p(x_t \mid \tilde{x}_t, x_{t+1}) \qquad (t = 1, \ldots, T-1) \]

where \tilde{x}_t is the filtered distribution p(x_t \mid y_{1:t}). Note that this backwards recursion is possible because the filtered and the smoothed distributions are the same when t = T.

To evaluate this joint probability with particle distributions, we need to assign bootstrapping weights to the filtered particles given the smoothed state in the following time step. Using (2.2), I chose to model the smoothing weights for this inverse transition as

\[ \omega^{(i)}_t = \mathcal{N}_4\!\left( \Delta{}^{B}_{W}\hat{q}_t \times {}^{B}_{W}\tilde{q}^{(i)}_t \,\middle|\, {}^{B}_{W}\hat{q}_{t+1},\; \sigma_T I_4 \right) \]

where \tilde{q}^{(i)}_t is a particle (unit quaternion) in the filtered distribution.

The final smoothed distribution at each time would normally be found by bootstrap resampling the filtered particles using these weights, following Godsill et al. [65]. However, this procedure is modified as described in the refiltering smoother algorithm [38] to marginalize the smoothed distribution over the distribution of the model parameters estimated following Liu and West [37]. For individual draws from the posterior distribution of the parameter set, a full set of particles is filtered forwards in time and a single smoothed trajectory is sampled backwards during the refiltering step. This set of trajectories is the final smoothed orientation estimate.
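A minimal sketch of these smoothing weights for one backwards step in R, assuming `pred` is an N x 4 matrix of filtered particles at time t already advanced one step by the deterministic transition (i.e., \Delta\hat{q}_t \times \tilde{q}^{(i)}_t), `q_next` the smoothed quaternion drawn at time t+1, and `filt` the original filtered particles:

    smooth_weights <- function(pred, q_next, sigma_T) {
      w <- apply(pred, 1, function(p) prod(dnorm(q_next, mean = p, sd = sigma_T)))
      w / sum(w)  # normalized weights for backwards resampling
    }

    # A smoothed trajectory is then extended backwards by one draw:
    #   idx <- sample(nrow(filt), size = 1,
    #                 prob = smooth_weights(pred, q_next, sigma_T))
    #   q_smoothed_t <- filt[idx, ]

Here sigma_T is treated as a standard deviation for simplicity; in the actual model it parameterizes the transition noise covariance \sigma_T I_4.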
2.5 Validation

I ran a series of tests to validate the tools and procedures developed in this project. The tests were designed to incrementally assess the assumptions that must be made to infer bird wing kinematics from inertial measurements collected by body-mounted inertial data loggers.

2.5.1 Orientation estimation of a single sensor

Motion capture using inertial sensors depends on being able to reliably estimate the orientation of sensors in the world frame. To assess the accuracy with which we can estimate orientation using these sensors, I recorded the orientation of a single sensor over time as I rotated the board by hand. I then compared the resulting orientation estimates to those from an optical motion capture system, OptiTrack (NaturalPoint Inc).

I placed retro-reflective markers onto the board of the inertial data logger to facilitate tracking by both motion capture systems and used an external trigger device (eSync2, NaturalPoint Inc) to synchronize the two systems at a 200Hz sampling rate. I had to modify the data logger code to use the external trigger rather than the internal clock for timing samples; no other changes were made to the sensors.

To distinguish estimation error from error due to imprecise body axes alignment, the body frame for the inertial motion capture was taken to be coincident with the sensor frame. The body frame axes in OptiTrack were similarly aligned to the edges of the circuit board. Although this does not account for slight errors in how the sensor was mounted to the board or how the board was cut, it was far more reproducible than attempting to align to the sensor itself due to its small size. Three cameras were used to record the markers in the optical system. 10,000 particles were simulated for the Monte Carlo smoother and 16 trajectories were sampled in the refiltering step.

During the recording, the sensor was manually rotated so as to maximize excursion without losing sight of the markers in the optical tracking. The accuracy of the inertial motion capture was assessed using the angular distance between the two orientation estimates at each time point.
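For unit quaternion estimates q1 and q2, this angular distance can be computed with the standard identity below; this is a generic formula, not code from the thesis.

    angular_distance <- function(q1, q2) {
      # Smallest rotation angle taking one orientation to the other, in degrees;
      # abs() accounts for the double cover (q and -q encode the same rotation)
      2 * acos(min(1, abs(sum(q1 * q2)))) * 180 / pi
    }

The min() guards against inner products slightly above 1 due to floating-point error.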
Euler angle traces of the orientation estimated by the two systems are shown in Figure 2.5A, and the absolute angular distance between the estimates is shown in Figure 2.5B. The Euler angles shown here and elsewhere in this thesis are Tait-Bryan angles, applied in the order yaw, pitch, roll, and are active, intrinsic rotations. The absolute error of the orientation estimation remains within 3° for the entire trace, with error increasing as the sensor departed from the initial orientation but decreasing as the sensor returned. Slight misalignment between the two motion capture systems' body axes and gyroscope scaling errors could both account in part for this pattern.

Figure 2.5: Comparison of orientation estimates of a single rigid body made using inertial sensors and an optical motion tracking system, OptiTrack. (A) Traces of Euler angles estimated using the two systems. (B) Absolute angular distance between the two orientation estimates.

2.5.2 Body axes alignment

It is unreasonable to assume that the inertial sensors can be mounted onto the birds in exactly the same orientation every time. Accordingly, we use body axes alignment as described in Section 2.3 to estimate orientation in biologically relevant axes irrespective of the orientation of the sensor relative to the body.

To validate this procedure, I compared the orientation estimates from two sensors mounted on a single rigid body with their sensing axes misaligned both from the body frame and from one another. If orientation is estimated directly, without rotating the raw data to align to the body frame, the two estimates should diverge as the rigid body moves away from its initial orientation, because the sensors will view each incremental rotation as being made around different axes. However, if the body axes alignment procedure is successful, both sensors should be able to track the orientation of the rigid body as a whole, which should not depend on the mounting orientation of each individual sensor. We can assess the repeatability of the body axes alignment using the angular distance between these two orientation estimates at each time point. Further, to qualitatively ensure that the body axes inferred by the alignment procedure coincide with those of the actual rigid body, I rotated the rigid body 90° around each of the true body axes in turn during the recording. Specifically, I rotated the rigid body around the z, y, and x axes, in that order, which correspond to rotations in yaw, pitch, and roll, respectively. Data were collected at 800Hz but were downsampled to 400Hz prior to estimation to save processing time. As before, 10,000 particles were used in the Monte Carlo smoother and 16 trajectories were sampled in the refiltering.

Euler angle traces of the orientations estimated by the two sensors without body axes alignment are shown in Figure 2.6A. As expected, the two estimates diverge except when in the initial orientation. Further, the three rotations did not align with the body frame as expected given the recorded motion, for either sensor. Coincidentally, the first sensor did read the second rotation as a pure rotation around z, but this rotation should be a 90° rotation around y in the body frame.

In contrast, the Euler angle traces estimated by the two sensors with body axes alignment (Figure 2.6B) show considerably more agreement. The key exception is during the second rotation, where the two sensors appear to disagree by over 90° along two axes. However, this rotation should be a 90° rotation in pitch, which results in gimbal lock in the Euler angle system. Accordingly, slight changes in orientation around this point can result in very large and even discontinuous changes in Euler angles. Here, both sensors estimate a pitch of approximately -90°, but the first sensor also estimates a roll and yaw of approximately ±180° while the second sensor estimates a roll and yaw of approximately -90° and 90°, respectively. Figure 2.7 demonstrates how these two Euler angle triplets are equivalent to a pure pitch of -90°. Accordingly, the two sensors show strong agreement after alignment, and both show rotations around the expected body axes in the expected order.

Finally, the absolute angular distance between the two estimates using body axes alignment is shown in Figure 2.6C. Error between the estimates remains within 6° for the entire trace, with error increasing as the body departs from its initial orientation. This is consistent with the previous results with a single sensor, as error is compounded when comparing two estimated values. In practice, this means that the orientation at which a sensor is mounted onto a given body segment can be corrected for within 6° with the body axes alignment.

2.5.3 Joint angle measurement

The tests up to this point establish whether or not we can reliably track the orientation of individual rigid sections of the body over time. However, to extract useful kinematic data, we also need to be able to reliably measure relative orientation between body sections. This should allow us to extract joint angles along biologically relevant axes.

To assess the accuracy with which we can measure joint angles, I built a simple hinge and compared joint angle estimates made over time by the inertial sensors and OptiTrack. The hinge consisted of a piece of foam core board which was scored to bend along a single axis. I placed an inertial sensor on either side of the hinge and aligned the axis of rotation to the x axis of the body frame using body axes alignment. Retro-reflective markers were also placed on either side of the hinge, and an eSync2 external trigger was again used to synchronize samples in both motion capture systems at 200Hz. As before, the sensors were modified to use the external trigger. However, the data samples used by the inertial system to define body axes were driven by the internal clock and were not recorded in the optical tracking system, as the markers were not visible in the calibration postures. As before, 10,000 particles were simulated in the Monte Carlo smoother and 16 trajectories were sampled during refiltering.
In addition to manually flexing the hinge, I also rotated the entire assembly continuously throughout the recording to test whether or not the joint angle estimates could reliably reject the motion of the body.

Figure 2.6: Comparison of orientation estimates of a single rigid body made by two misaligned sensors with and without body axes alignment. (A) Euler angles estimated from the two inertial sensors without axis alignment. (B) Euler angles estimated from the two inertial sensors with axis alignment as described in Section 2.3. (C) Absolute angular distance between the two orientation estimates with axis alignment.

Figure 2.7: Demonstration of the Euler angle singularity point with three different angle triplets that encode the same orientation. Rows correspond to the three different angle triplets. The first column of each row shows a paper airplane in an unrotated orientation, and each subsequent column applies another component of the Euler angle triplet corresponding to its row. The three Euler angle triplets are: (A) Yaw 0°, Pitch -90°, Roll 0°; (B) Yaw 180°, Pitch -90°, Roll -180°; (C) Yaw 90°, Pitch -90°, Roll -90°. The Euler angles here and elsewhere in this thesis are applied in the order yaw, pitch, roll, and are intrinsic, active rotations. The images in this figure were generated using the Euler angle visualization tool found at http://danceswithcode.net/engineeringnotes/rotations_in_3d/demo3D/rotations_in_3d_tool.html [69].

To measure joint angles, I start by finding a quaternion that represents the rotation from the first body segment to the second body segment for each sample in time. This rotation is equivalent to the orientation of the second body segment in the first body segment's frame. As described in Section 1.2.4, this change of frame can be achieved by conjugation:

\[ ({}^{B_n}_{W}\hat{q})^{-1} \times {}^{B_{n+1}}_{W}\hat{q} = {}^{B_{n+1}}_{B_n}\hat{q} \]

where {}^{B_n}_{W}\hat{q} is the unit quaternion representing the orientation of the nth rigid body (B_n) in the world frame (W).

From this relative orientation, we can use the axis-angle representation of a quaternion (1.5) to solve for the magnitude of the rotation for every sample in time:

\[ {}^{B_{n+1}}_{B_n}\theta = 2\cos^{-1}\!\left( {}^{B_{n+1}}_{B_n}q_w \right) \]

Note that q_w here is the first, or real, component of a unit quaternion \hat{q}.

I then use the difference between the joint angles estimated by the two systems to assess the accuracy of the joint angle measures. Further, as the hinge was aligned to the x axis of the body frame, we can assess our ability to measure along biologically relevant axes using the angles measured across the hinge along the other two axes.
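A minimal sketch of this joint angle calculation in R, reusing the qmult() and qinv() helpers from the transition sketch above and assuming `q1` and `q2` are the world-frame orientations of the two segments as unit quaternions c(w, x, y, z):

    joint_angle <- function(q1, q2) {
      q_rel <- qmult(qinv(q1), q2)                # segment 2 in segment 1's frame
      2 * acos(min(1, abs(q_rel[1]))) * 180 / pi  # rotation magnitude in degrees
    }

The absolute value maps the angle onto [0°, 180°], reflecting that q and -q encode the same rotation.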
The joint angles measured over time by the two systems are shown in Figure 2.8A. Both systems show strong agreement, although the data collected by OptiTrack show more jitter as well as a few frames with missing data. This is likely due to accidental partial occlusion of markers by my hand as I rotated and flexed the hinge within the recording volume. Error between the two estimates is shown in Figure 2.8B and remains within 5° over the recording. The joint angles estimated by the inertial sensors also are not consistently over- or underestimated relative to the reference.

Figure 2.9A demonstrates how the motion of the body is rejected from the orientation estimates by using another body segment as the frame of reference. As expected, the estimated orientations show no clear patterns when plotted relative to the world frame, as the flexion of the hinge cannot be distinguished from the motion of the entire assembly. However, by using the relative orientation of the two segments, it is clear that the rotation seen across the hinge is primarily around the x axis of the body, or in other words, roll. Figure 2.9B shows the angles measured across the hinge along the other two body axes, or pitch and yaw. As before, the error remains within 5° over the recording, which is consistent with the previous results.

Figure 2.8: Comparison of joint angle estimates made from a simple hinge using inertial sensors and OptiTrack. (A) Angles estimated across the hinge by the two systems over time. (B) Difference between the two estimates over time.

Figure 2.9: Demonstration of how the motion of the body can be rejected to record joint dynamics by changing frames of reference. (A) Orientations of two sensors mounted across a simple hinge plotted over time using different frames of reference. (B) Angles measured across the hinge along body axes perpendicular to the expected axis of rotation.

2.5.4 Estimating wing stroke parameters in a deceased bird

One common goal of tracking wing kinematics is estimating wing stroke parameters that might change over some experimental manipulation [11, 13]. In particular, stroke amplitude and the inclination of the stroke plane are of common interest [12, 70, 71]. To assess the accuracy with which these parameters can be estimated, I compared the estimates made over a range of simulated wing strokes using both the inertial tracking system and OptiTrack. The primary benefit of validating these measurements initially with simulated wing strokes in deceased birds is the relative ease with which the two motion capture systems can be aligned and synchronized.

I mounted sensors to the body and forearm of a deceased pigeon (Columba livia) provided by the Beaty Biodiversity Museum, as described in Section 2.2.5. As before, I used retro-reflective markers to track the body and wings of the pigeon in OptiTrack and an external trigger to synchronize the two motion capture systems at 200Hz. The sensors were similarly modified to use the external trigger, except during body axes alignment. 10,000 particles were simulated in the Monte Carlo smoother and 16 trajectories were sampled during refiltering.

To calculate stroke amplitude, I first calculate the orientation of the forearm relative to the body using the data from the two sensors, as in (2.5.3). I then separate the trace into individual strokes using the wing elevation, with stroke end points corresponding to the local extrema in rotation around the longitudinal axis of the bird, or roll in the body frame. I then find the quaternion representing the shortest rotation between the two stroke end points for each stroke, again by conjugation:

\[ {}^{wing}_{body}\hat{q}_{\mathrm{stroke\ end}} \times ({}^{wing}_{body}\hat{q}_{\mathrm{stroke\ start}})^{-1} = {}^{\mathrm{stroke\ end}}_{\mathrm{stroke\ start}}\hat{q} \]

Using the axis-angle representation (1.5), we can solve for the magnitude of this rotation as in (2.5.3). However, this magnitude includes the average wing twist over the stroke, which is generally not the goal of estimates of wing stroke amplitude.
To account for this, I use (1.5) and (2.5.3) to solve for the axis of the rotation that describes the wing stroke:

\[ {}^{\mathrm{stroke\ start}}\hat{n} = \begin{bmatrix} q_x \\ q_y \\ q_z \end{bmatrix} \bigg/ \sin\!\left(\frac{\theta}{2}\right) \]

However, this axis is still expressed in the frame of the wing at the start of the stroke. To find the axis of the wing stroke rotation in the body frame of the bird, I rotate the vector using the orientation of the wing relative to the body at the start of the stroke and (1.8):

\[ \begin{bmatrix} 0 \\ {}^{body}\hat{n} \end{bmatrix} = ({}^{wing}_{body}\hat{q}_{\mathrm{stroke\ start}})^{-1} \times \begin{bmatrix} 0 \\ {}^{\mathrm{stroke\ start}}\hat{n} \end{bmatrix} \times {}^{wing}_{body}\hat{q}_{\mathrm{stroke\ start}} \]

The projection of this axis of rotation onto the sagittal plane of the bird then represents the component of the wing stroke rotation that does not include twist. Therefore, stroke amplitude can be calculated as the magnitude of the total rotation multiplied by the proportion of the rotation that does not include twist:

\[ \mathrm{stroke\ amplitude} = \frac{\lVert {}^{body}\hat{n} \rVert_{\mathrm{sagittal}}}{\lVert {}^{body}\hat{n} \rVert}\, \theta_{\mathrm{stroke}} \]

where \lVert \hat{\alpha} \rVert_{\beta} is the magnitude of the projection of the unit vector \hat{\alpha} onto the plane \beta.

I can then calculate the inclination of the wing stroke plane relative to the body axes using the inverse tangent of the axis of the stroke rotation. Once again, this vector must be expressed in the body frame and projected onto the sagittal plane to reject twist:

\[ \mathrm{stroke\ inclination} = \tan^{-1}\!\left( \frac{{}^{body}n_z}{{}^{body}n_x} \right) \]

As the pitch of the body relative to the world frame could change over the course of the stroke, I average the body pitch over the two stroke end points to approximate a body pitch for the whole stroke. I then calculate the inclination of the stroke plane in the world frame by subtracting the averaged body pitch from the inclination of the stroke plane relative to the body.

Finally, I compare these estimated wing stroke parameters to those calculated from the optical motion capture data for corresponding strokes to assess the accuracy of this approach.
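A sketch of these stroke parameter calculations in R, reusing the qmult() and qinv() helpers from the transition sketch above; it assumes `q_start` and `q_end` are the wing-in-body orientations at the stroke end points as unit quaternions c(w, x, y, z), with the x axis pointing anteriorly and the z axis dorsally, so the sagittal (x-z) plane is obtained by dropping the y component. The names are illustrative only.

    stroke_parameters <- function(q_start, q_end) {
      q_stroke <- qmult(q_end, qinv(q_start))        # rotation across the stroke
      theta <- 2 * acos(min(1, abs(q_stroke[1])))    # total rotation magnitude
      n <- q_stroke[2:4] / sin(theta / 2)            # axis in the stroke-start frame
      n_body <- qmult(qmult(qinv(q_start), c(0, n)), q_start)[2:4]  # body frame
      prop_sagittal <- sqrt(n_body[1]^2 + n_body[3]^2) / sqrt(sum(n_body^2))
      c(amplitude   = theta * prop_sagittal * 180 / pi,         # twist removed
        inclination = atan2(n_body[3], n_body[1]) * 180 / pi)   # relative to body
    }

The thesis gives the inclination as an inverse tangent of n_z/n_x; atan2 is used here to keep the correct quadrant. A zero-rotation stroke (theta = 0) would need to be handled separately before dividing by sin(theta/2).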
The two estimates of the orientation of the wing relative to the body are shown in Figure 2.10, with dashed lines indicating the ranges over which individual strokes were identified. Given the considerable noise in the OptiTrack traces near the wing stroke end points, I chose to filter the OptiTrack data by the first derivative of the x component of the relative orientation of the wing. The filtered OptiTrack data are also shown in Figure 2.10.

The zero points for the orientations estimated by the two systems differ, as the markers on the wing are not visible during the closed-wing posture used to define body axes in the inertial system. This results in the vertical offset seen between the Euler angle traces estimated by the two systems in Figure 2.10. Despite this, the wing stroke signal is clear in both systems. Further, as the wing stroke parameters are all calculated using the difference between the wing stroke end point orientations, this vertical offset should not affect our estimates. The wing stroke amplitude, inclination relative to the body, and inclination relative to the world frame for the three complete simulated wing strokes are shown in Table 2.1, along with the residual error. All stroke parameters were estimated within 5° for the three simulated wing strokes, which is consistent with previous results. Wing stroke amplitude was consistently overestimated by about 3°. However, due to the low sample size, it is difficult to tell if this was a coincidence or indicative of a systemic error.

Stroke amplitude (°)
Stroke index   Inertial sensor   OptiTrack   Difference
1              106.80            104.16      2.65
2              90.30             87.48       2.83
3              90.77             87.96       2.81

Stroke plane inclination, relative to body (°)
Stroke index   Inertial sensor   OptiTrack   Difference
1              -17.02            -12.44      -4.58
2              -16.10            -13.17      -2.93
3              -15.53            -15.63      0.10

Stroke plane inclination, relative to world (°)
Stroke index   Inertial sensor   OptiTrack   Difference
1              -5.67             -3.21       -2.47
2              -3.74             -3.08       -0.66
3              -3.26             -4.93       1.67

Table 2.1: Wing stroke parameters estimated using inertial sensors and OptiTrack for three simulated wing strokes.

Figure 2.10: Orientation of the wing relative to the body estimated over three simulated wing strokes of a deceased pigeon using both inertial sensors and OptiTrack. Both the raw orientations estimated by OptiTrack and the filtered data used for estimating stroke parameters are shown. Vertical dashed lines indicate the ranges over which individual strokes were defined.

2.5.5 Estimating wing stroke parameters in vivo

To assess the accuracy of these methods in vivo, I repeated the previous test using live pigeons. I once again mounted the sensors on the body and forearm of the pigeon, along with retro-reflective markers for optical tracking. After defining the body axes for the inertial motion capture system, I released the pigeons from chest height and recorded the wing kinematics as they flew to the ground. I caught the pigeons after each flight and removed the sensors and markers before returning the pigeons to their housing cage.

Unlike in previous comparisons between the two systems, an external trigger could not be used, as it would require the birds to be tethered. Instead, I had planned to identify wing strokes independently in both systems, match them sequentially, and compare the wing stroke parameters estimated for each pair of wing strokes in the two systems.

I set both systems to maximize sampling rate, with the inertial data loggers sampling at 800Hz and OptiTrack recording at 240Hz with four cameras. As before, the body axes alignment postures were not captured in the OptiTrack recording, as the markers were not visible with the wings held closed. 10,000 particles were simulated in the Monte Carlo smoother and 16 trajectories were sampled during refiltering.

Euler angle traces of the orientation of the wing relative to the body, as estimated by the inertial sensors, are shown in Figure 2.11A. Over successive strokes, the wing appears to drift in roll, ultimately leading to wing motions that are completely unreasonable. The problem is most clearly seen in the raw gyroscope data from the sensor mounted on the forearm, as shown in Figure 2.11B.
The wing strokes consistently exceed the gyroscope measurement range of ±2000°/s, which after integration leads to consistent underestimation of motion. Accordingly, the drift seen in the estimated orientation of the wing is likely due to unequal portions of the upstroke and downstroke exceeding the sensing threshold.

Figure 2.11: Data collected from the inertial tracking system mounted on a live pigeon during a short free flight. (A) Orientation of the wing relative to the body shows unreasonable drift in estimated roll. (B) Raw gyroscope data from the inertial sensor mounted on the wing shows motions that exceed the hardware sensing limits of ±2000°/s.

2.6 Discussion

The inertial data loggers and orientation estimation algorithm developed in this project were able to estimate the orientation of a single rigid body within 3° in controlled, simulated situations. This translated to an error of 6° when assessing body axes alignment and estimating joint dynamics, likely due to the accumulation of error from using two estimated orientations [72]. However, the data loggers could not be used for tracking wing kinematics in vivo, as the sensors used in the current design did not have the necessary gyroscope sensing range.

The most direct solution to this problem would be to modify the hardware to use different inertial sensors. Assuming a flapping frequency of around 10Hz, which is typical for pigeons [73], a wing stroke amplitude of 180°, and a sinusoidal model of wing motion, we would expect a maximum angular velocity of approximately ±5600°/s around the stroke axis.
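This figure follows from differentiating the assumed sinusoidal stroke; with amplitude A = 180° and frequency f = 10Hz,

\[ \theta(t) = \frac{A}{2}\sin(2\pi f t) \quad\Rightarrow\quad \dot{\theta}_{\max} = A\,\pi f = 180° \times \pi \times 10\,\mathrm{Hz} \approx 5655°/\mathrm{s}. \]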
Most gyroscopes built into MARG sensor chips have a maximum sensing range of ±2000°/s (e.g., LSM9DS1TR, BMX055, MX6470). This range likely reflects the typical angular rates experienced by sensors in mobile and automotive applications. These markets were valued at $1.2 and $2.7 billion respectively in 2016 [74] and are likely responsible for the diversity and competitive pricing of inertial sensors available today. As a result, it is unlikely that newer iterations of these general-purpose inertial sensors will greatly increase sensing range unless the market changes to include popular applications that require it. Such a change has happened in part with some newer sensors designed explicitly for smart wearable athletic devices, such as the ICM-20601 from Invensense, which has a gyroscope sensing range of ±4000°/s.

To achieve the sensing ranges needed for recording bird wing kinematics using currently available off-the-shelf components, we must make use of specialized sensors that separate MARG sensor functionality over multiple chips. The ADXRS649 by Analog Devices, for example, provides a gyroscope with a sensing range of ±20,000°/s in a 6 × 7mm footprint. Note that a specialized magnetometer is not needed, as the magnitude of the local magnetic field does not vary with angular rate. While the wing-mounted sensors will experience linear accelerations beyond the sensing range of the current data loggers, a specialized accelerometer is not necessary either, as these clipped observations are already disregarded by the orientation estimation algorithm because they do not provide information about the direction of gravity (see Section 2.4.2 for details). As a result, the sensing range issue can be overcome by pairing a specialized gyroscope with every sensor currently in the design. The MPU-9250 sensors in the current design could also be swapped out for the smaller MC6470 sensors, as integrated gyroscope function would no longer be important.

Another option would be to limit the application of these sensors to bird species that have flapping frequencies within the sensing range of the current sensors, or of near drop-in replacements like the aforementioned ICM-20601. Using the previous assumptions, the current data loggers should be capable of tracking wing kinematics at flapping frequencies up to about 3.5Hz, or 7Hz with the sensors designed for athletic wearables. Table 2.2, adapted from Pennycuick [75], lists flapping frequencies of various species measured in the field. With a safety factor of 50%, large corvids (Corvus corone), raptors (Buteo buteo, Milvus milvus), herons (Ardea cinerea), waterfowl (Cygnus olor), and gulls (Larus canus, Larus argentatus, Larus marinus) could all feasibly be studied with minimal modification to the current hardware. The large raptors are of particular interest, as many of these birds are trained for falconry, which could allow the current data loggers to be assessed in a fairly controlled setting. For comparison, modifying the data loggers to use the ADXRS649 gyroscope would allow for tracking wing kinematics at flapping frequencies up to 35Hz, or a little over 25Hz with a 50% safety factor. This would encompass the flapping frequencies seen in pigeons as well as the sixteen other species listed in Table 2.2.

Using the data loggers as-is with a restricted set of species would severely limit their utility for comparative studies, as it would introduce a strong size bias. However, some bias is already unavoidable, as a number of species, such as the 5g black-tailed gnatcatcher (Polioptila melanura), are too small to feasibly carry untethered inertial data loggers, at least given currently available electronic components. Further, these tools would still provide a novel high-throughput means of collecting wing kinematic data for addressing questions within species.

Beyond studies of bird flight, the tools developed in this project could also be used with minimal modification to implement inertial motion capture in many other vertebrate systems. Cheetahs and greyhounds, for example, only achieve a maximum stride frequency of about 4Hz during sprints [76]. Similarly, while some thunniform swimmers can achieve tail beat frequencies of around 10-12Hz during bursts [77], many fish employ much slower forms of body-caudal propulsion [78].

The primary benefits of these tools over the current commercially available inertial motion capture systems are their small size, high sample rates, and untethered operation [60–62].
Outside of human kinesiology, inertial motion capture has previously only been applied to studies of gait in horses and dogs [63, 64], both of which are large and easily trained. These traits were likely necessary to mount large, tethered sensors to different parts of the body while trying to elicit natural behaviours.

Table 2.2: Flight speed and flapping frequency of sixteen species collected in the field using an anemometer and video data. Adapted from Pennycuick [75].

                           Equivalent air speed (m s^-1)   Wingbeat frequency (Hz)
Species                    Observed      Vmp               Observed      fref
Corvus corone              10.5±2.03     12.1±0.976        3.84±0.0522   4.74±0.181
Sturnus vulgaris           17.3±1.75     10.2±0.864        10.6±1.25     9.97±0.385
Fringilla coelebs          15.3±3.46     7.82±0.715        18.2±1.74     10.8±0.691
Buteo buteo                9.31±1.92     12.3±1.21         3.63±0.168    3.54±0.218
Accipiter nisus            8.72±2.40     10.5±1.03         5.10±0.321    6.30±0.388
Milvus milvus              7.90±1.81     10.9±1.08         2.88±0.0926   2.75±0.170
Ardea cinerea              11.0±1.66     11.9±1.17         2.90±0.0354   2.79±0.172
Cygnus olor                16.0±0.700    19.4±2.08         3.38±0.0863   3.37±0.264
Anas penelope              17.1±1.99     14.3±1.41         6.83±0.330    7.27±0.500
Somateria mollissima       20.2±3.51     15.9±2.19         6.47±0.294    6.60±0.586
Phalacrocorax carbo        15.0±1.80     16.6±1.39         4.83±0.156    5.09±0.265
Columba palumbus           15.4±2.06     12.9±1.27         5.61±0.256    6.81±0.420
Larus ridibundus           10.1±1.89     9.42±0.964        3.27±0.0110   4.04±0.274
Larus canus                11.6±1.74     9.62±0.946        2.98±0.151    3.50±0.216
Larus argentatus           11.8±2.07     11.9±1.06         3.13±0.0761   3.61±0.218
Larus marinus              12.8±1.31     12.6±1.35         2.91±0.223    3.14±0.215

Numbers after the ± symbols are standard deviations of observed quantities and uncertainties of calculated quantities (N=6). Vmp, minimum power speed; fref, calculated equivalent wingbeat frequency.

The benefits of these tools over optical motion capture for systems other than bird flight are not so immediately clear. The primary motivation for using inertial motion capture here was to overcome the challenges with optical motion capture that stem from the occlusion of marked points on the wing during flapping flight. In situations where markers can reliably be kept in view, optical motion capture will likely be preferred, if only for the availability of community support for the more established tools, like DLTdv [9]. Newer iterations of these tools, like OptiTrack, can also do much of the work of digitizing the raw data, so long as markers remain sufficiently separated from one another and stay in view.

Put another way, the value of inertial motion capture for recording animal kinematics lies primarily in recording behaviours that cannot easily be recorded using cameras. This includes behaviours that occur in situations with limited visibility, such as in burrows, murky water, or at night.
Behaviours that occur over spatial scales that cannot feasibly be covered by camera arrays are also of interest, such as migration, movement through forest canopy, and even prey capture in some species. Given the small size of the inertial data loggers, they could also be worked into existing long-term tracking devices. Although the loggers are not designed for long-term tracking, recent advances in automated behaviour classification using body-mounted accelerometer data [79–81] could be leveraged to trigger detailed kinematic data collection during behaviours of interest in a natural context. This targeted approach to kinematic data collection in freely roaming animals would not be possible with optical motion capture.

Chapter 3

Conclusion

Detailed wing kinematics are of broad interest to the study of bird flight [82–84], but current optical approaches to animal motion capture [8, 9] are ill-suited to collecting these data. The considerable morphing of the wings [10] and the periodic occlusion of flight surfaces during flapping flight make it difficult to automate the tracking of fixed points on the wing using video data [85]. This necessitates manual digitization of the raw videos, which can be a major bottleneck to data collection.

In this thesis, I developed a series of tools with the goal of creating a high-throughput approach to the collection of wing kinematic data using body-mounted inertial data loggers. The physical hardware consists of a modular design that can collect data from up to four sensors at 450Hz for 20 minutes in a single session, all at around 4 grams including a battery. The accompanying orientation estimation algorithm uses a recursive Bayesian framework and could estimate angles between sensors and simulated wing stroke parameters within 6° in situations where the angular velocities remained within the sensing range of the data loggers. An R package compiling these functions for general use is currently in preparation for submission to the Comprehensive R Archive Network (CRAN). The inertial data logger designs and firmware have similarly been made available online [86].

The most significant shortcoming of the current design is the gyroscope, which does not have the sensing range needed to collect data from most bird species. However, as discussed in Section 2.6, the components needed to overcome this limitation are commercially available today. The non-removable memory system posed another limitation to the utility of the current design, as the entire data logger needed to be unmounted from the subject between trials to transfer the data to a computer. For this reason, removable flash memory, like microSD cards, may be preferable in future designs despite its relatively slower speed and larger size. See Section 2.2.2 for the details of this choice in the current design.

Although I did not achieve my initial goal of producing a fully realized high-throughput method of recording wing kinematics in birds, I believe that I have demonstrated the potential of inertial motion capture to this end. To my knowledge, this is the first application of biological inertial motion capture outside of humans, horses, and dogs [60, 63, 64], enabled in part by the small size of the new data loggers. I am also unaware of any other publicly available software for orientation estimation that uses the entire set of inertial measurements from a recording to estimate orientation at every time point post hoc [42, 43].
3.1 Significance of high-throughput methods

The value of high-throughput methods in other applications is most clearly seen in the new sorts of studies these methods enable and the new questions those studies allow us to address. High-throughput methods for genome sequencing, for example, have allowed for studies of population-level genomic variation [87] and the development of statistical approaches to identifying functional genetic polymorphisms [88]. Similar advances in drug screening approaches for pharmacological studies have allowed researchers to validate newly proposed theoretical models for predicting drug solubility and permeability [89].

Although I was not able to apply the tools I developed in this thesis to address a novel question due to the limitations of the current hardware, we can still consider the value of high-throughput methods for wing kinematic data collection using this framework. In the following sections, I outline two studies that would be infeasible with current optical methods of recording wing kinematics in birds and the value these studies would provide to the field.

3.1.1 Formalizing flight style

Flight style, an analogue to gait for bird flight, is a concept that has been approached at various times throughout the literature [90–93]. Categorizing flight kinematics in this way is useful because it summarizes a complex set of traits into a much smaller number of ecologically or physiologically relevant groups, much like swimming styles for fish locomotion [78] and pollination syndromes for plant reproductive morphology [94]. Concise and informative flight style schemes are particularly useful for comparative work because they make flight kinematics more tractable for inclusion as an explanatory variable or covariate [90, 95, 96]. However, an uninformative categorization scheme could be at best confusing and at worst misleading.

The various takes on flight style generally split the variation in flight kinematics into discrete categories based on one or a few criteria, such as the frequency and length of glides between bouts of flapping [91] or flapping frequency and expected metabolic costs [96]. However, it is not always clear what the appropriate number of discrete groups should be, as different schemes have as few as three [97] or as many as eight [91] groups. The variation in flight kinematics across and within some species also might fill the space between flight styles that are commonly thought of as distinct [91, 98]. Accordingly, it might be more appropriate to think of flight style as a continuous space varying along one or more principal axes, or as a few distinct styles with considerable variation within each group. One approach to addressing this ambiguity would be to use a k-means clustering algorithm with model selection to find the most informative number of groups within the total space of wing kinematic variation across species [99]. To quantify wing kinematics for this approach, we could take the discrete Fourier transform of the shoulder and elbow angles along each axis over time for a given flight [100]. In addition to capturing common parameters of interest like stroke amplitude and stroke plane inclination, this approach should also capture information about the length and frequency of intermittent glides or bounding. This is because flight styles with intermittent behaviours can be thought of as piecewise periodic functions, which Fourier transforms can be used to reconstruct [101].
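The following R sketch shows one way this analysis could be set up. It is only a sketch under assumed data structures: `flights` is a hypothetical list of equal-length shoulder-angle traces, the number of Fourier coefficients retained is arbitrary, and the model selection uses a rough BIC-style penalty rather than the formal criteria reviewed in [99].

    # Sketch: Fourier-based flight style clustering (assumed data layout).
    # `flights` is a list of equal-length numeric vectors, each a shoulder
    # angle trace over time for one flight.
    fourier_features <- function(angle, n_coef = 20) {
      Mod(fft(angle))[seq_len(n_coef)] / length(angle)  # amplitude spectrum
    }
    features <- t(vapply(flights, fourier_features, numeric(20)))

    # Choose the number of clusters with a rough BIC-style penalty on the
    # k-means within-cluster sum of squares.
    choose_k <- function(x, k_max = 8) {
      n <- nrow(x); p <- ncol(x)
      bic <- vapply(seq_len(k_max), function(k) {
        fit <- kmeans(x, centers = k, nstart = 25)
        n * log(fit$tot.withinss / n) + k * p * log(n)
      }, numeric(1))
      which.min(bic)
    }

    best_k <- choose_k(features)
    styles <- kmeans(features, centers = best_k, nstart = 25)$cluster

In practice, the feature set would include all measured joint angles and both the amplitude and phase of each coefficient; the sketch keeps only the amplitudes of one angle for brevity.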
Using this framework, we could also assess how informative the groupings proposed by existing flight style schemes are using an information criterion [102]. This could then be used to compare existing flight style schemes to one another as well as to the most informative model from the clustering algorithm. The ratio of flight kinematic variation within groups to the variation across groups would also be interesting, as it would indicate whether flight styles are very distinct and internally uniform (much lower variation within groups than across) or only loosely separated with meaningful internal variation (comparable variation within and across groups).
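Continuing the sketch above, this ratio could be computed for any proposed grouping as follows; the `features` matrix and the label vectors are the assumed inputs from the previous example, not outputs of the thesis pipeline.

    # Sketch: ratio of within-group to total kinematic variation for a
    # proposed flight style scheme. `labels` assigns one style per flight.
    # Values near 0 suggest distinct, internally uniform styles; values
    # near 1 suggest loosely separated groups.
    within_total_ratio <- function(x, labels) {
      total_ss <- sum(sweep(x, 2, colMeans(x))^2)
      within_ss <- sum(vapply(split(as.data.frame(x), labels), function(g) {
        g <- as.matrix(g)
        sum(sweep(g, 2, colMeans(g))^2)
      }, numeric(1)))
      within_ss / total_ss
    }

    within_total_ratio(features, styles)  # or labels from an existing scheme

Evaluating this ratio for each published scheme, and for the k-means assignment from the previous sketch, would put all of the candidate groupings on a common scale.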
While there is nothing particularly novel about this statistical approach, it is currently infeasible to meaningfully sample the total variation in wing kinematics across species, even within a single behaviour like level flight. Previous studies have made use of summarized flight parameters collected from a diversity of species, such as flight speed [75, 103], flapping frequency [75, 92], and the presence or absence of specific behaviours [90]. However, a high-throughput means of collecting detailed wing kinematics would be needed to gather data at the scale required for a complete reassessment of flight style schemes.

3.1.2 Deconstructing flight control

Although it is well established that animals that use flapping flight maneuver in the air primarily through variation in their wing kinematics [1], it is less clear which aspects of the wing stroke birds actually modify to elicit specific changes to their flight path.

Part of the difficulty with addressing this problem is that there are often many different ways a bird could theoretically accomplish a given change to its flight path. Dakin et al. [19], for example, found that hummingbirds could turn either by rotating in place while hovering, by banking during forward flight to perform a smooth arc turn, or by pitching up to perform a tight pitch-roll turn that quickly changes direction. Lower-level maneuvers like changing pitch or yaw might similarly be achieved in different ways, or could even depend on context, such as the flight speed of the bird. Accordingly, an analysis of the kinematics used to effect one of these behaviours (e.g., changes in elevation or speed) would require sufficient power to distinguish between distinct kinematic modes and to catch potentially subtle differences in the wing stroke. Characterizing the kinematic changes that underlie multiple types of maneuvers only compounds this need for large sample sizes, and quickly becomes infeasible using current methods.

The other key challenge for this sort of study would be motivating the birds to produce the behaviours of interest. One option would be to allow the birds to fly freely in a space and use a correlational approach to explore the connection between the wing kinematics and flight parameters of interest [19, 56]. A more direct approach, however, could use artificial visual flow stimuli to motivate specific changes to level flight. Visual flow is the global motion seen by an animal as it moves through its environment [104], and these cues have been found to be important for course correction in a diversity of species [105–107]. Various studies have also used these cues to experimentally manipulate behaviour by presenting animals with artificial visual stimuli corresponding to different visual flow fields [108–110].

Putting these ideas together, pigeons could be flown in flight tunnels with visual stimuli projected onto the walls, as in Dakin et al. [108]. A stationary, uniform stimulus would serve as a control treatment, while moving horizontal and vertical gratings could be used to induce vertical and horizontal visual flow, respectively. We expect from past work with other birds that pigeons will correct their altitude to track the vertical visual flow in the horizontal grating treatments [108], and may similarly slow down or speed up to track the horizontal visual flow in the vertical grating treatments [111]. OptiTrack or similar optical tracking methods could be used to track the position of the birds in the tunnel to test this expectation [19]. Wing kinematic data collected by body-mounted inertial sensors could then be used to test specific hypotheses about the wing kinematics that underlie the changes in trajectory. For example, we might hypothesize that changes in elevation are achieved by modifying the stroke amplitude to control lift production [112], while flight speed is controlled by inclining the stroke plane to redirect aerodynamic forces [12].
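If such an experiment were run, a mixed-effects model would be a natural way to link per-wingbeat kinematics to the flight path. The sketch below assumes a hypothetical per-wingbeat data frame; none of the column names come from the tools described in this thesis.

    # Sketch: does stroke amplitude predict climb rate across treatments?
    # Assumes a hypothetical data frame `wingbeats` with one row per
    # wingbeat and columns bird, treatment, stroke_amplitude, climb_rate.
    library(lme4)

    model <- lmer(climb_rate ~ stroke_amplitude * treatment + (1 | bird),
                  data = wingbeats)
    summary(model)

The random intercept for bird accounts for repeated measures on the same individuals, and the interaction term asks whether the kinematic change depends on the visual flow treatment.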
3.2 Summary

Understanding the variation in bird wing kinematics across different contexts and levels of organization is an important step towards furthering our understanding of the physics and control of bird flight. However, current methods for animal motion capture are ill-suited for this problem and require considerable time and effort to employ. A high-throughput alternative would allow for both broader comparative and more detailed mechanistic studies of bird flight. To this end, I began development of a high-throughput, body-mounted inertial motion capture system for recording wing kinematics in freely flying birds. Although the data loggers require further modification to allow them to sense the range of angular rates experienced during flapping flight, I believe I have demonstrated the potential of this approach for implementing a cheap, portable, and automated means of animal motion capture.

References

1. Chin, D. D. & Lentink, D. Flapping wing aerodynamics: from insects to vertebrates. The Journal of Experimental Biology 219, 920–932 (2016).
2. Hedenstrom, A. & Johansson, L. C. Bat flight: aerodynamics, kinematics and flight morphology. Journal of Experimental Biology 218, 653–663 (2015).
3. Tobalske, B. W. Biomechanics of bird flight. Journal of Experimental Biology 210, 3135–3146 (2007).
4. Dickinson, M. H., Lehmann, F. & Sane, S. P. Wing rotation and the aerodynamic basis of insect flight. Science 284, 1954–1960 (1999).
5. Ravi, S. et al. Modulation of flight muscle recruitment and wing rotation enables hummingbirds to mitigate aerial roll perturbations. Current Biology 30, 187–195.e4 (2020).
6. Iriarte-Diaz, J. & Swartz, S. M. Kinematics of slow turn maneuvering in the fruit bat Cynopterus brachyotis. Journal of Experimental Biology 211, 3478–3489 (2008).
7. Lehmann, F. The aerodynamic effects of wing-wing interaction in flapping insect wings. Journal of Experimental Biology 208, 3075–3092 (2005).
8. Mathis, A. et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience 21, 1281–1289 (2018).
9. Hedrick, T. L. Software techniques for two- and three-dimensional kinematic measurements of biological and biomimetic systems. Bioinspiration & Biomimetics 3, 034001 (2008).
10. Tobalske, B. & Dial, K. Flight kinematics of black-billed magpies and pigeons over a wide range of speeds. Journal of Experimental Biology 199, 263–280 (1996).
11. Lapsansky, A. B., Igo, J. A. & Tobalske, B. W. Zebra finch (Taeniopygia guttata) shift toward aerodynamically efficient flight kinematics in response to an artificial load. Biology Open 8, bio042572 (2019).
12. Chin, D. D. & Lentink, D. Birds repurpose the role of drag and lift to take off and land. Nature Communications 10 (2019).
13. Tomotani, B. M. & Muijres, F. T. A songbird compensates for wing molt during escape flights by reducing the molt gap and increasing angle of attack. The Journal of Experimental Biology 222, jeb195396 (2019).
14. McFarlane, L., Altringham, J. D. & Askew, G. N. Intra-specific variation in wing morphology and its impact on take-off performance in blue tits (Cyanistes caeruleus) during escape flights. The Journal of Experimental Biology 219, 1369–1377 (2016).
15. Dickerson, B. H., de Souza, A. M., Huda, A. & Dickinson, M. H. Flies regulate wing motion via active control of a dual-function gyroscope. Current Biology 29, 3517–3524.e3 (2019).
16. Chakraborty, S., Bartussek, J., Fry, S. N. & Zapotocky, M. Independently controlled wing stroke patterns in the fruit fly Drosophila melanogaster. PLoS ONE 10 (ed Gronenberg, W.) e0116813 (2015).
17. Straw, A. D., Branson, K., Neumann, T. R. & Dickinson, M. H. Multi-camera real-time three-dimensional tracking of multiple flying animals. Journal of The Royal Society Interface 8, 395–409 (2010).
18. Lentink, D. Accurate fluid force measurement based on control surface integration. Experiments in Fluids 59 (2017).
19. Dakin, R., Segre, P. S., Straw, A. D. & Altshuler, D. L. Morphology, muscle capacity, skill, and maneuvering ability in hummingbirds. Science 359, 653–657 (2018).
20. Ingersoll, R. & Lentink, D. How the hummingbird wingbeat is tuned for efficient hovering. The Journal of Experimental Biology 221, jeb178228 (2018).
21. Sarkka, S. Bayesian filtering and smoothing (Cambridge University Press, 2009).
22. Gagniuc, P. A. Markov chains (John Wiley & Sons, Inc., 2017).
23. Broemeling, L. D. Bayesian analysis of linear models (Routledge, 2017).
24. Bajo, M. Improving storm surge forecast in Venice with a unidimensional Kalman filter. Estuarine, Coastal and Shelf Science 239, 106773 (2020).
25. Durantin, G., Scannella, S., Gateau, T., Delorme, A. & Dehais, F. Processing functional near infrared spectroscopy signal with a Kalman filter to assess working memory during simulated flight. Frontiers in Human Neuroscience 9 (2016).
26. Dorner, B., Peterman, R. M. & Haeseker, S. L. Historical trends in productivity of 120 Pacific pink, chum, and sockeye salmon stocks reconstructed by using a Kalman filter. Canadian Journal of Fisheries and Aquatic Sciences 65, 1842–1866 (2008).
27. Kalman, R. E. A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82, 35–45 (1960).
28. McGee, L. A. & Schmidt, S. F. Discovery of the Kalman filter as a practical tool for aerospace and industry tech. rep. NASA-TM-86847, N86-13311 (National Aeronautics and Space Administration, 1985).
29. Kendall, G. Would your mobile phone be powerful enough to get you to the moon? https://theconversation.com/would-your-mobile-phone-be-powerful-enough-to-get-you-to-the-moon-115933 (2020).
30. Kalman, R. E. & Bucy, R. S. New results in linear filtering and prediction theory. Journal of Basic Engineering 83, 95–108 (1961).
31. Wan, E. A. & van der Merwe, R. in Kalman filtering and neural networks 221–280 (John Wiley & Sons, Inc., 2001).
32. Kitagawa, G. Non-Gaussian state-space modeling of nonstationary time series. Journal of the American Statistical Association 82, 1032 (1987).
33. Smith, R. & Miller, J. A non-Gaussian state-space model and application to prediction of records. Journal of the Royal Statistical Society Series B-Methodological 48, 79–88 (1986).
34. Aravkin, A., Burke, J. V., Ljung, L., Lozano, A. & Pillonetto, G. Generalized Kalman smoothing: modeling and algorithms. Automatica 86, 63–86 (2017).
35. Sharia, T. Recursive parameter estimation: convergence. Statistical Inference for Stochastic Processes 11, 157–175 (2007).
36. Kitagawa, G. A self-organizing state-space model. Journal of the American Statistical Association 93, 1203 (1998).
37. Liu, J. & West, M. in Sequential Monte Carlo methods in practice 197–223 (Springer New York, 2001).
38. Yang, B., Stroud, J. R. & Huerta, G. Sequential Monte Carlo smoothing with parameter estimation. Bayesian Analysis 13, 1137–1161 (2018).
39. Seel, T., Raisch, J. & Schauer, T. IMU-based joint angle measurement for gait analysis. Sensors 14, 6891–6909 (2014).
40. Burr, D. B. Why bones bend but don't break. Journal of Musculoskeletal & Neuronal Interactions 11, 270–285 (2011).
41. Paletta, G. A., Warner, J. J., Warren, R. F., Deutsch, A. & Altchek, D. W. Shoulder kinematics with two-plane x-ray evaluation in patients with anterior instability or rotator cuff tearing. Journal of Shoulder and Elbow Surgery 6, 516–527 (1997).
42. Madgwick, S. O. H., Harrison, A. J. L. & Vaidyanathan, R. Estimation of IMU and MARG orientation using a gradient descent algorithm in 2011 IEEE International Conference on Rehabilitation Robotics (IEEE, 2011).
43. Mahony, R., Hamel, T. & Pflimlin, J.-M. Nonlinear complementary filters on the special orthogonal group. IEEE Transactions on Automatic Control 53, 1203–1218 (2008).
44. Weisstein, E. W. Euler Angles. From MathWorld–A Wolfram Web Resource https://mathworld.wolfram.com/EulerAngles.html (2020).
45. Hemingway, E. G. & O'Reilly, O. M. Perspectives on Euler angle singularities, gimbal lock, and the orthogonality of applied forces and applied moments. Multibody System Dynamics 44, 31–56 (2018).
46. Duncan, M. Applied geometry for computer graphics and CAD (Springer London, 2005).
47. Strang, G. Linear algebra and its applications (Thomson, Brooks/Cole, Belmont, CA, 2006).
48. Weisstein, E. W. Vector space orientation. From MathWorld–A Wolfram Web Resource https://mathworld.wolfram.com/VectorSpaceOrientation.html (2020).
49. Lengyel, E. Mathematics for 3D game programming and computer graphics (Charles River Media, Inc., USA, 2001).
50. Ma, K. F., Cui, X. M., Huang, G. P., Yuan, D. B. & Cai, Q. K. A solution for space resection based on angle-axis representation. Applied Mechanics and Materials 638–640, 2123–2128 (2014).
51. Leon, S. J., Björck, Å. & Gander, W. Gram-Schmidt orthogonalization: 100 years and more. Numerical Linear Algebra with Applications 20, 492–532 (2012).
52. Hamilton, W. R. II. On quaternions; or on a new system of imaginaries in algebra. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 25, 10–13 (1844).
53. Clemente, C. J. et al. Moving in complex environments: a biomechanical analysis of locomotion on inclined and narrow substrates. The Journal of Experimental Biology 222, jeb189654 (2019).
54. Gellman, E. D., Tandler, T. R. & Ellerby, D. J. Swimming from coast to coast: a novel fixed-gear swimming gait in fish. Biology Letters 15, 20190270 (2019).
55. Nasir, N., Lehmann, F. O., Schützner, P., Mat, S. & Mohd, N. N. Dynamics of body kinematics of freely flying houseflies responding to air turbulence. Journal of Asia-Pacific Entomology 22, 1082–1089 (2019).
56. Walker, S. M., Thomas, A. L. & Taylor, G. K. Photogrammetric reconstruction of high-resolution surface topographies and deformable wing kinematics of tethered locusts and free-flying hoverflies. Journal of The Royal Society Interface 6, 351–366 (2008).
57. Brackenbury, J. H. Kinematics of take-off and climbing flight in butterflies. Journal of Zoology 224, 251–270 (1991).
58. Durston, N. E., Wan, X., Liu, J. G. & Windsor, S. P. Avian surface reconstruction in free flight with application to flight stability analysis of a barn owl and peregrine falcon. The Journal of Experimental Biology 222, jeb185488 (2019).
59. Deetjen, M. E., Biewener, A. A. & Lentink, D. High-speed surface reconstruction of a flying bird using structured light. The Journal of Experimental Biology 220, 1956–1961 (2017).
60. Robert-Lachaine, X., Parent, G., Fuentes, A., Hagemeister, N. & Aissaoui, R. Inertial motion capture validation of 3D knee kinematics at various gait speed on the treadmill with a double-pose calibration. Gait & Posture 77, 132–137 (2020).
61. Robert-Lachaine, X., Mecheri, H., Muller, A., Larue, C. & Plamondon, A. Validation of a low-cost inertial motion capture system for whole-body motion analysis. Journal of Biomechanics 99, 109520 (2020).
62. Abdelhady, M., van den Bogert, A. J. & Simon, D. A high-fidelity wearable system for measuring lower-limb kinetics and kinematics. IEEE Sensors Journal 19, 12482–12493 (2019).
63. Egan, S., Brama, P. & McGrath, D. Research trends in equine movement analysis, future opportunities and potential barriers in the digital age: A scoping review from 1978 to 2018. Equine Veterinary Journal 51, 813–824 (2019).
64. Duerr, F. et al. Evaluation of inertial measurement units as a novel method for kinematic gait evaluation in dogs. Veterinary and Comparative Orthopaedics and Traumatology 29, 475–483 (2016).
65. Godsill, S. J., Doucet, A. & West, M. Monte Carlo smoothing for nonlinear time series. Journal of the American Statistical Association 99, 156–168 (2004).
66. Palermo, E., Rossi, S., Marini, F., Patanè, F. & Cappa, P. Experimental evaluation of accuracy and repeatability of a novel body-to-sensor calibration procedure for inertial sensor-based gait analysis. Measurement 52, 145–155 (2014).
67. Cariow, A., Cariowa, G. & Majorkowska-Mech, D. An algorithm for quaternion-based 3D rotation. International Journal of Applied Mathematics and Computer Science 30, 149–160 (2020).
68. Julier, S. J. & Uhlmann, J. K. New extension of the Kalman filter to nonlinear systems in Signal Processing, Sensor Fusion, and Target Recognition VI (ed Kadar, I.) (SPIE, 1997).
69. Rose, D. Euler angle visualization tool http://danceswithcode.net/engineeringnotes/rotations_in_3d/demo3D/%20rotations_in_3d_tool.html (2020).
70. Altshuler, D. L., Quicazan-Rubio, E. M., Segre, P. S. & Middleton, K. M. Wingbeat kinematics and motor control of yaw turns in Anna's hummingbirds (Calypte anna). Journal of Experimental Biology 215, 4070–4084 (2012).
71. Berg, A. M. & Biewener, A. A. Wing and body kinematics of takeoff and landing flight in the pigeon (Columba livia). Journal of Experimental Biology 213, 1651–1658 (2010).
72. Joint Committee for Guides in Metrology. JCGM 100: Evaluation of Measurement Data - Guide to the Expression of Uncertainty in Measurement tech. rep. (Joint Committee for Guides in Metrology, 2008).
73. Dial, K. P. Activity patterns of the wing muscles of the pigeon (Columba livia) during different modes of flight. Journal of Experimental Zoology 262, 357–373 (1992).
74. Robin, L. New dynamics of the MEMS inertial sensor market in China Semiconductor Technology International Conference (Electrochemical Society, 2012).
75. Pennycuick, C. Speeds and wingbeat frequencies of migrating birds compared with calculated benchmarks. Journal of Experimental Biology 204, 3283–3294 (2001).
76. Hudson, P. E., Corr, S. A. & Wilson, A. M. High speed galloping in the cheetah (Acinonyx jubatus) and the racing greyhound (Canis familiaris): spatio-temporal and kinetic characteristics. Journal of Experimental Biology 215, 2425–2434 (2012).
77. Svendsen, M. B. S. et al. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length: a myth revisited. Biology Open 5, 1415–1419 (2016).
78. Sfakiotakis, M., Lane, D. & Davies, J. Review of fish swimming modes for aquatic locomotion. IEEE Journal of Oceanic Engineering 24, 237–252 (1999).
79. Fuchs, N. T. & Caudill, C. C. Classifying and inferring behaviors using real-time acceleration biotelemetry in reproductive steelhead trout (Oncorhynchus mykiss). Ecology and Evolution 9, 11329–11343 (2019).
80. Chakravarty, P., Cozzi, G., Ozgul, A. & Aminian, K. A novel biomechanical approach for animal behaviour recognition using accelerometers. Methods in Ecology and Evolution 10 (ed O'Hara, R. B.) 802–814 (2019).
81. Studd, E. K. et al. Behavioral classification of low-frequency acceleration and temperature data from a free-ranging small mammal. Ecology and Evolution 9, 619–630 (2018).
82. Clark, C. J. & Mistick, E. A. Kinematic control of male Allen's hummingbird wing trill over a range of flight speeds. The Journal of Experimental Biology 221, jeb173625 (2018).
83. Gurka, R. et al. Flow pattern similarities in the near wake of three bird species suggest a common role for unsteady aerodynamic effects in lift generation. Interface Focus 7, 20160090 (2017).
84. Mahalingam, S. & Welch, K. C. Neuromuscular control of hovering wingbeat kinematics in response to distinct flight challenges in the ruby-throated hummingbird, Archilochus colubris. Journal of Experimental Biology 216, 4161–4171 (2013).
85. Smeulders, A. W. M. et al. Visual tracking: an experimental survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 1442–1468 (2014).
86. Senthivasan, S. kinavr https://github.com/shreeramsenthi/kinavr (2020).
87. Altshuler, D. et al. A map of human genome variation from population-scale sequencing. Nature 467, 1061–1073 (2010).
88. Wang, K., Li, M. & Hakonarson, H. ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data. Nucleic Acids Research 38, e164–e164 (2010).
89. Lipinski, C. A., Lombardo, F., Dominy, B. W. & Feeney, P. J. Experimental and computational approaches to estimate solubility and permeability in drug discovery and development settings. Advanced Drug Delivery Reviews 23, 3–25 (1997).
90. Baliga, V. B., Szabo, I. & Altshuler, D. L. Range of motion in the avian wing is strongly associated with flight behavior and body mass. Science Advances 5, eaaw6670 (2019).
91. Bruderer, B., Peter, D., Boldt, A. & Liechti, F. Wing-beat characteristics of birds recorded with tracking radar and cine camera. Ibis 152, 272–291 (2010).
92. Pennycuick, C. Wingbeat frequency of birds in steady cruising flight: new data and improved predictions. Journal of Experimental Biology 199, 1613–1618 (1996).
93. Savile, D. B. O. Adaptive evolution in the avian wing. Evolution 11, 212–224 (1957).
94. Fenster, C. B., Armbruster, W. S., Wilson, P., Dudash, M. R. & Thomson, J. D. Pollination syndromes and floral specialization. Annual Review of Ecology, Evolution, and Systematics 35, 375–403 (2004).
95. Shatkovska, O. V. & Ghazali, M. Relationship between developmental modes, flight styles, and wing morphology in birds. The European Zoological Journal 84, 390–401 (2017).
96. Viscor, G. & Fuster, J. Relationships between morphological parameters in birds with different flying habits. Comparative Biochemistry and Physiology Part A: Physiology 87, 231–249 (1987).
97. Lees, J. J., Dimitriadis, G. & Nudds, R. L. The influence of flight style on the aerodynamic properties of avian wings as fixed lifting surfaces. PeerJ 4, e2495 (2016).
98. Usherwood, J. R. Physiological, aerodynamic and geometric constraints of flapping account for bird gaits, and bounding and flap-gliding flight strategies. Journal of Theoretical Biology 408, 42–52 (2016).
99. Xu, R. & Wunsch II, D. Survey of clustering algorithms. IEEE Transactions on Neural Networks 16, 645–678 (2005).
100. Shumway, R. H. & Stoffer, D. S. Time series analysis and its applications (Springer International Publishing, 2017).
101. Batenkov, D. & Yomdin, Y. Algebraic Fourier reconstruction of piecewise smooth functions. Mathematics of Computation 81, 277–318 (2012).
102. Akogul, S. & Erisoglu, M. A comparison of information criteria in clustering based on mixture of multivariate normal distributions. Mathematical and Computational Applications 21, 34 (2016).
103. Alerstam, T., Rosén, M., Bäckman, J., Ericson, P. G. P. & Hellgren, O. Flight speeds among bird species: allometric and phylogenetic effects. PLoS Biology 5 (ed Sheldon, B.) e197 (2007).
104. Lappe, M., Bremmer, F. & van den Berg, A. Perception of self-motion from visual flow. Trends in Cognitive Sciences 3, 329–336 (1999).
105. Gläser, N. et al. Harbor seals (Phoca vitulina) can perceive optic flow under water. PLoS ONE 9 (ed Ropert-Coudert, Y.) e103555 (2014).
106. Reiser, M. B. & Dickinson, M. H. Visual motion speed determines a behavioral switch from forward flight to expansion avoidance in Drosophila. Journal of Experimental Biology 216, 719–732 (2012).
107. Mohler, B. J., Thompson, W. B., Creem-Regehr, S. H., Pick, H. L. & Warren, W. H. Visual flow influences gait transition speed and preferred walking speed. Experimental Brain Research 181, 221–228 (2007).
108. Dakin, R., Fellows, T. K. & Altshuler, D. L. Visual guidance of forward flight in hummingbirds reveals control based on image features instead of pattern velocity. Proceedings of the National Academy of Sciences 113, 8849–8854 (2016).
109. Saleem, A. B., Ayaz, A., Jeffery, K. J., Harris, K. D. & Carandini, M. Integration of visual motion and locomotion in mouse visual cortex. Nature Neuroscience 16, 1864–1869 (2013).
110. Mulavara, A. P. et al. Exposure to a rotating virtual environment during treadmill locomotion causes adaptation in heading direction. Experimental Brain Research 166, 210–219 (2005).
111. Schiffner, I. & Srinivasan, M. V. Direct evidence for vision-based control of flight speed in budgerigars. Scientific Reports 5 (2015).
112. Groom, D. J. E., Toledo, M. C. B. & Welch, K. C. Wingbeat kinematics and energetics during weightlifting in hovering hummingbirds across an elevational gradient. Journal of Comparative Physiology B 187, 165–182 (2016).