Inference of central nervous system input and its complexity for interactive arm movement Atsma, Willem Jentje 2006


Inference of Central Nervous System Input and its Complexity for Interactive Arm Movement

by

Willem Jentje Atsma

Ingenieur (B.Sc.), Hogeschool Enschede, 1994
M.Sc., The University of New Brunswick, 1997

A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in The Faculty of Graduate Studies (Mechanical Engineering)

The University of British Columbia
October 2006

© Willem Jentje Atsma, 2006

Abstract

This dissertation demonstrates a new method for inferring a representation of the motor command generated by the central nervous system for interactive point-to-point movements. This new tool, the input inference neural network or IINN, allows estimation of the complexity of the motor command. The IINN was applied to experimental data gathered from 7 volunteer subjects who performed point-to-point tasks while interacting with a specially constructed haptic robot. The motor plan inference demonstrates that, for the point-to-point movement tasks executed during the experiments, the motor command can be projected onto a low-dimensional manifold. This dimension is estimated to be 4 or 5, far fewer than the degrees of freedom available in the arm. It is hypothesized that subjects simplify the problem of adapting to changing environments by projecting the motor control problem onto a motor manifold of low dimension. Reducing the dimension of the movement optimization problem through the development of a motor manifold can explain rapid adaptation to new motor tasks.

Contents

Abstract
List of Tables
List of Figures
List of symbols and abbreviations
Acknowledgements

1. Introduction
   1.1. Why research motor control and complexity
   1.2. About complexity and dimension
   1.3. Complexity of movement kinematics
   1.4. Compliance and movement
   1.5. Internal models and movement adaptation
   1.6. Experimental questions
   1.7. Organization of this thesis

2. Input inference with neural networks
   2.1. The measurement problem
   2.2. Input inference with a neural net
      2.2.1. Architecture
      2.2.2. Dimensioning and data processing
      2.2.3. Other dimension extraction algorithms
   2.3. A synthetic problem
   2.4. Curve fitting of the error values
      2.4.1. Parameters extracted from the curve fit
      2.4.2. Worst case: equal power modes
      2.4.3. Confidence intervals
   2.5. Fits and tabulations
      2.5.1. Why fit an exponential function?
      2.5.2. Robustness
      2.5.3. Quality of the curve fit
   2.6. Interpretation of modelling results
   2.7. Summary

3. Data collection
   3.1. Experiment design
   3.2. The 5-bar manipulandum
   3.3. Experimental setup
   3.4. Data collection and processing
   3.5. Summary

4. Results
   4.1. Trajectories and forces
   4.2. Motor command dimension
      4.2.1. Curve fitting results
      4.2.2. Dimension estimate analysis
      4.2.3. Movement time and dimension
      4.2.4. Force fields and dimension
   4.3. Prediction of movements
   4.4. Significance of inferred motor commands
   4.5. Motor command and task and trial parameters
   4.6. Evidence of learning in motor command adaptation

5. Discussion
   5.1. The dimension of interactive motor behaviour
   5.2. Movement time and dimension
   5.3. Dimension and motor control
   5.4. Dimension and movement learning
   5.5. Interpretation of the motor command
   5.6. Issues
      5.6.1. Absence of adaptation
      5.6.2. A change of motor plan during an experiment
      5.6.3. Insufficient richness in the data
      5.6.4. Relationship between neural feedback and measured position and force
      5.6.5. Coordinate system of planning
      5.6.6. The low-pass nature of intrinsic dynamics

6. Conclusions
   6.1. Main contributions
   6.2. Future work
      6.2.1. Motor manifold
      6.2.2. Movement segmentation
      6.2.3. Increased task complexity
      6.2.4. Alternative dimension estimation methods
      6.2.5. Continuous estimation of the motor command

A. Design of a robust and simple impedance controller
   A.1. The five bar manipulandum
   A.2. The control algorithm
   A.3. Implementation
   A.4. Results

B. Link length calibration
   B.1. Precise link-length calibration is necessary
   B.2. Methods
      B.2.1. Data collection
      B.2.2. Locating joint centres
      B.2.3. Combining multiple measurements
   B.3. Results

Bibliography

List of Tables

2.1. Effect of L1 and L2 size on curve fit parameters
4.1. Metrics for subject A, Tm = 0.5 seconds
4.2. Metrics for subject B, movement time 0.3 seconds
4.3. Metrics for all subjects for bottleneck sizes of 1-7
4.4. Metrics after adjusting bottleneck ranges
4.5. Integer dimensions derived from d̂ of validation data for slow and fast movements
4.6. Wilcoxon test for assumed dimensions, slow and fast movement
4.7. Metrics for all subjects for bottleneck sizes 1 to 9, pooled movement times
4.8. Metrics for B, C, E and G with adjusted range, pooled times
4.9. Integer dimensions derived from d̂ of validation data for pooled movement times
4.10. Wilcoxon p-values for assumed dimensions, pooled times
4.11. Difference between fast and slow mean movement times and standard deviations
4.12. Metrics for null-field movements after adjusting bottleneck ranges
4.13. Result of regression of MĈ to task parameters
4.14. Confusion matrices for classification of fields for subjects A (slow) and D (fast)
4.15. Result of classification of field type based on MĈ
4.16. Correspondence between previous and current field classification based on MĈ
4.17. P-values for testing H0: error increase is less than or equal to zero
A.1. Values used for the analysis
B.1. Link length estimation results for each trial

List of Figures

1.1. A linear spring-damper-mass system
1.2. An example of a nonlinear system
1.3. A simple system with a forcing input
1.4. The non-linear length-tension relationship of muscles allows independent control
1.5. Model of the motor control process
2.1. Model of the motor control process
2.2. Amplitude-frequency spectra of all collected position data
2.3. The Input Inference Neural Network (IINN)
2.4. Diagram of the system used to create synthetic data and sample results
2.5. Error values and curve fits for synthetic data
2.6. Curve fitting procedure
2.7. Effect of L1 and L2 size on IINN errors
2.8. Worst case scenarios: all modes have equal contributions
2.9. Investigation of the dimension estimate from curve fits for worst-case scenarios
2.10. Error values and curve fits for synthetic data with d̂
2.11. Residual normalized error from simulation
2.12. The mean and standard deviations of simulated power modes
2.13. The effect of increased noise on d̂ for 10,000 simulations
2.14. Curve fits to IINN errors with a random target value
2.15. Sample MĈ for synthetic data
2.16. Correlations between actual and regressed control parameters
3.1. Experimental setup
3.2. Diagram of the manipulandum, subject, and movement space
4.1. Three movement trajectories from subject A (slow)
4.2. Three movement paths for subject A (slow) with force perturbations
4.3. Three movement paths for subject A (slow) with field forces
4.4. Three movement paths for subject A (slow) with hand forces
4.5. Three movement paths for subject E (fast) with force perturbations
4.6. Three movement paths for subject E (fast) with field forces
4.7. Three movement paths for subject E (fast) with hand forces
4.8. MSE dependence on IINN configuration for subject A (slow)
4.9. MSE dependence on IINN configuration for subject B (fast)
4.10. Dimension estimates with confidence intervals for slow and fast movements
4.11. Dimension estimates with confidence intervals for pooled movements
4.12. Representative examples of movement times during the experiments
4.13. Dimension estimates and confidence intervals for changing field and null-field
4.14. Movements predicted for subject A, Tm = 0.5 seconds and with nb = 4
4.15. Movements predicted for subject D, Tm = 0.3 seconds and with nb = 4
4.16. Movements predicted for subject A as above, but with MĈ inputs set to zero
4.17. Movements predicted for subject D as above, but with MĈ inputs set to zero
4.18. Percentage change in MSE when the MĈ input is removed
4.19. Regression result of movement start and target location from MĈ
4.20. Scatter plots comparing IINN performance with and without incremental learning
A.1. The 5-bar parallel manipulandum has 2 passive elbow joints
A.2. Schematic of the control system hardware
A.3. Model of the motor, force sensor and the environment
A.4. Impedance controller diagram
A.5. Root-locus plot for Kp = 1 → 10⁴
A.6. Detail of the root-locus plot for bd = 1 → 40 N s/m, Kp = 500
A.7. Bode diagram for the closed-loop system with the ideal behaviour as a reference
A.8. Block diagram of the 2 DOF implementation of the RSIC
A.9. Ideal and actual performance of the RSIC
B.1. Locating the joint centres with respect to the link frame
B.2. Convergence of the link length estimates to the final value

List of symbols and abbreviations

B  Damping [N s/m]
BCa  Bias-Corrected accelerated: a method to calculate confidence intervals through resampling.
CI  Confidence Interval
d  True intrinsic dimension
d̂  Estimated intrinsic dimension
Δ  The delay between an event in the arm/hand resulting in sensory feedback and a brain-mediated response in the arm motor behaviour (long-loop reflexes and voluntary control).
δ  The delay between an event in the arm/hand resulting in sensory feedback and a spinally mediated response in the arm motor behaviour (spinal reflexes).
DOF  Degrees Of Freedom
FM  Forward Model
IINN  Input Inference Neural Network
IM  Inverse Model
K  Stiffness [N/m]
k  Sample index
kp  Sample index to perturbation onset for movement p
L1  Level 1 of the IINN: infer MĈ from motor behaviour data
L2  Level 2 of the IINN: reconstruction of motor behaviour
m  Delay in samples applied to the position input to L2, and the length minus 1, in samples, of the force input to L2. Note that m is also used as a unit (metres).
M  Mass [kg]
MC  Motor Command: the command descending down the spine to control the muscles of the arm.
MCp  Motor command of movement p
MĈ  Motor Command estimate: the parameters produced in the bottleneck of the IINN.
MNP  Motor Neuron Pool: a group of neurons in the spinal cord that innervate the muscle fibres.
MP  Motor Plan: a high-level description of the intended movement or motor output in the brain.
MSE  Mean-Squared Error: SSE divided by the number of patterns.
nb  Natural number indicating bottleneck size
nb5  Rational number indicating the value at which the curve fit to the minimum SSE values reaches 5% of the normalized curve.
SSE  Sum-Squared Error: the error function used to optimize the IINN.
t  Time
Tm  Movement time: either 0.5 seconds (slow) or 0.3 seconds (fast).
Tsample  Sample time
x(t)  Position [m]
x(k)  Sampled position [m]

Acknowledgements

I suppose there are two types of theses, and this is of the kind that took a very long time. If it weren't for the tenacious support of my love Madeleine, my parents, and all my good friends, I would have disengaged some time ago. So having this finished now is at least 51% yours, because I was not allowed to get away with anything but the book you have in your hands. You all deserve much credit and I hope to be repaying you for your kindness for years to come.

I am grateful to my thesis adviser Dr. Antony Hodgson, who has been supportive and constructive throughout. I clearly am not a minimally constrained student and promise to never become one.

I enjoyed financial support from the University of British Columbia as a recipient of the University Graduate Fellowship, as well as from the National Research Council, and I would like to thank them for their support. Also included should be the stimulating environment of Green College and its residents, where we were fortunate to live for a good while.

for Madeleine

1. Introduction

"Look - what are these?"
"They're fingers, Arthur."
"Yes, but they're also pincers. Watch this." He picked up a pencil, twirled it around a couple of times. "Clever, huh? People are amazed by robots, but never by themselves. They see a robot do this and it freaks them out, but they do it themselves all the time without giving it a second thought."
Cees Nooteboom, All Souls Day (Nooteboom [2001])

1.1. Why research motor control and complexity

This dissertation quantifies the complexity of interactive motor behaviour of the human hand. This complexity is expressed as the estimated dimension of the motor command (MC) that travels from the brain down the spinal cord. The central finding of this research is that the complexity of interactive motor behaviour is relatively low, even when performing a completely new task.
Perhaps paradoxically, this characteristic may be key to our capacity to rapidly adapt to new situations.

We should be amazed at our own ability to deal with a large variety of motor tasks, an ability that after decades of continuing advancement remains a source of new research. At the same time robotic devices, sometimes inspired by human motor behaviour research, have yet to attain a level of competence comparable to our own.

An improved understanding of human motor control has a number of benefits. The design of human-machine interfaces should take into account the impedance of the operator, which is task and position dependent (Burdet et al. [2001], Gomi and Kawato [1996], Gomi and Osu [1998], Hodgson [1994]). For appropriate consideration of operator impedance in the design of haptic interfaces it is therefore necessary to investigate the nature of human mechanical impedance and how it relates to particular tasks. A robotic prosthetic limb could, or perhaps should, be conceived of as a special haptic device, mimicking the dynamic behaviour of the natural limb. Neuromotor control research also benefits the field of robotics; examples of inspired development are the use of series elastic elements in actuators to improve actuator performance (Pratt and Williamson [1995]), and impedance control (Hogan [1993]). Finally, the diagnosis and treatment of neuromuscular disorders can be further developed with a deeper understanding of the processes that underlie human motor control. A better understanding of the relationship between motor function and the underlying neuromuscular structure will aid in developing clear links between measurable changes in motor behaviour and neuromuscular disorders (Gentile [1998], Smith et al. [2000]). Much recent research investigates the application of robotic devices for the rehabilitation of limb motor control in stroke and hemiplegia (Aisen et al. [1997], Krebs et al. [2004]).
All these applications rely on an understanding of motor control that is typically expressed in a computational model of some kind. Motor behaviour complexity is relevant for any such application because it indicates the complexity a candidate model should embody.

In the next section of this chapter the concept of the intrinsic dimension of system dynamics is introduced, first for a linear system and then for a simple nonlinear system; the intrinsic dimension of the system dynamics is equated with the complexity of that system. Section 1.3 discusses research in the literature that investigates the intrinsic dimension of motor control. In Section 1.4 it is shown that a characterization of trajectories or forces alone is insufficient for interactive motor tasks, where the body is in contact with a dynamic environment. No study has yet quantified the intrinsic dimension of interactive motor behaviour while taking into account both movement trajectory and interaction forces simultaneously; this is the main objective of the current work. An overview of the current understanding of the motor control process in humans is presented in Section 1.5. The chapter concludes with the experimental questions that will be addressed (Section 1.6) and the organization of the dissertation (Section 1.7).

This research focuses on the human arm and the interaction of the hand with a robotic device. Unless otherwise noted, references to research by other authors also pertain to the motor behaviour and control of the human arm.

1.2. About complexity and dimension

The concepts of dimension and intrinsic dimension are of particular importance and are explained in this section. Linear systems are characterized in part by their order. If a linear system is described with state-space equations such as Equations 1.1 and 1.2, then the state vector x(t) has dimensions n × 1, where n is equal to the system order.
    ẋ(t) = A x(t) + B u(t)    (1.1)
    y(t) = C x(t) + D u(t)    (1.2)

When the system input u(t) is constant, i.e. u(t) = u, the system evolves in a space with a dimension equal to the order of the system. This is perhaps easier to see when considering how the system evolves from time t to time t + δ, which can be derived from the state transition in Equation 1.1:

    x(t + δ) = e^{Aδ} x(t) + e^{Aδ} ∫_t^{t+δ} e^{−Aτ} B u dτ    (1.3)

Here the input u is constant, so for a given time delay δ the integral is a constant as well. The matrix e^{Aδ} has the same n × n size as matrix A. This means that the transition from the state at time t to the state at time t + δ is a linear transform that occurs in an n-dimensional space. Since t and δ can assume any value, the entire system state evolves in an n-dimensional space.

In a linear system the output y(t) is a linear transformation C of the system state (for convenience we will assume that the system input u does not directly influence the system output, i.e. D = 0). This transformation can project the system state into a space with a different dimension: for example, if C is a 1 × n row vector then the output is a scalar. However, assuming the system is observable, its intrinsic dimension remains equal to the system order. The output y(t) can then be used to estimate the system order by estimating its intrinsic dimension. Equation 1.3 shows that you need to sample the output with some sampling interval δ to obtain such a representation. To show this, consider the state transition in Equation 1.3. The delay δ can be any value, so if we take a multiple rδ as the delay then we can write:

    x(t + rδ) = e^{Arδ} x(t) + ∫_t^{t+rδ} e^{−Aτ} B u dτ    (1.4)
              = (e^{Aδ})^r x(t) + r ∫_t^{t+δ} e^{−Aτ} B u dτ    (1.5)

So consecutive discrete states, a time interval δ apart, can be written as a linear transformation of the state at time t. For convenience and without loss of generality we will set u = 0.
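Equation 1.3 can be checked numerically. The sketch below (Python with numpy/scipy; the system values are those of the Figure 1.1 spring-mass-damper, while the value of δ and the test states are my own choices) verifies that, with constant input, the sampled state obeys a fixed linear recursion x(t + δ) = e^{Aδ} x(t) + c, with the same constant c for every t:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Spring-mass-damper of Figure 1.1 (M = 1 kg, B = 1 N s/m, K = 10 N/m)
# in the state-space form of Equations 1.1-1.2, with constant input u = 1.
M, B, K = 1.0, 1.0, 10.0
A = np.array([[0.0, 1.0], [-K / M, -B / M]])
Bu = np.array([0.0, 1.0 / M])   # input matrix times the constant input u

delta = 0.25                    # sampling interval [s]
Ad = expm(A * delta)            # e^{A delta}

# For constant input the input contribution over one interval is itself a
# constant: c = int_0^delta e^{A s} B u ds = A^{-1} (e^{A delta} - I) B u.
c = np.linalg.solve(A, (Ad - np.eye(2)) @ Bu)

# The sampled state then obeys the fixed linear recursion
# x(t + delta) = Ad x(t) + c, independent of t (compare Equation 1.3).
for x0 in ([0.0, 0.0], [0.1, -0.2]):
    x_pred = Ad @ np.array(x0) + c
    sol = solve_ivp(lambda t, x: A @ x + Bu, (0.0, delta), x0,
                    rtol=1e-10, atol=1e-12)
    assert np.allclose(x_pred, sol.y[:, -1], atol=1e-6)
```

Because c does not depend on t, the evolution of the sampled state is fully described by the n × n matrix e^{Aδ}, which is the point made in the text.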
Using Equations 1.2 and 1.5 we can write:

    [y(t+δ)  y(t+2δ)  ···  y(t+rδ)]^T = C [x(t+δ)  x(t+2δ)  ···  x(t+rδ)]^T    (1.6)
                                      = [C e^{Aδ};  C (e^{Aδ})^2;  ···;  C (e^{Aδ})^r] x(t)    (1.7)
                                      = P x(t)    (1.8)

where (with semicolons separating the rows):

    P = [C e^{Aδ};  C (e^{Aδ})^2;  ···;  C (e^{Aδ})^r]    (1.9)

The system is observable if, for r ≥ n, the matrix P has rank n (the order of the system). With r ≥ n and observability, the state can be determined from the series of output measurements:

    x(t) = (P^T P)^{−1} P^T [y(t+δ)  y(t+2δ)  ···  y(t+rδ)]^T    (1.10)

Pre-multiplying this with the output matrix C and using Equation 1.2 we get:

    y(t) = C (P^T P)^{−1} P^T [y(t+δ)  y(t+2δ)  ···  y(t+rδ)]^T    (1.11)

This last equation defines a linear surface in (r + 1)-dimensional space. If we want to know the order of a linear observable system, we need to know the dimension of the linear surface on which the output y(t) evolves over time while the input is constant.

Consider for example the simple second-order spring-mass-damper system in Figure 1.1(a). The state equations for this system are:

    d/dt [x(t);  ẋ(t)] = [ẋ(t);  ẍ(t)]    (1.12)
                       = [0, 1;  −K/M, −B/M] [x(t);  ẋ(t)] + [0;  1/M] f(t)    (1.13)
    y(t) = [1, 0] [x(t);  ẋ(t)]    (1.14)

Here K is the spring constant, B the damping, and M the mass, and the system output y(t) is the mass position. Figure 1.1(b) shows the unit step response y(t) of this linear system and Figure 1.1(c) shows the phase plot in 2 dimensions for the step response. In a phase plot of y(t), time-delayed versions of the output are plotted against each other, offering a different way to observe the evolution of the system dynamics. Because the step response y(t) resulted from a second-order system, the system dynamics evolve on a 2-dimensional plane. If a third axis y(t − 0.5) were added to Figure 1.1(c), the spiral would still lie in a 2-dimensional plane, even though the system dynamics would be embedded in 3 dimensions.
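The observability argument can likewise be made concrete. The sketch below (my own construction; it uses the Figure 1.1 system with C = [1, 0], and the choice r = 4 is arbitrary) builds the matrix P of Equation 1.9 and recovers the state, and hence the current output, from r future output samples as in Equations 1.10 and 1.11:

```python
import numpy as np
from scipy.linalg import expm

# The second-order system of Equations 1.12-1.14 with input u = 0 and the
# mass position as scalar output, i.e. C = [1, 0].
M, B, K = 1.0, 1.0, 10.0
A = np.array([[0.0, 1.0], [-K / M, -B / M]])
C = np.array([[1.0, 0.0]])

delta, r = 0.25, 4              # r >= n = 2 output samples are needed
Ad = expm(A * delta)

# P stacks C (e^{A delta})^k for k = 1..r, as in Equation 1.9.
P = np.vstack([C @ np.linalg.matrix_power(Ad, k) for k in range(1, r + 1)])
assert np.linalg.matrix_rank(P) == 2      # rank n: the system is observable

# Sample the free response y(t + k delta) produced by some state x(t) ...
x_t = np.array([0.1, -0.3])
y_future = np.array([(C @ expm(A * k * delta) @ x_t)[0]
                     for k in range(1, r + 1)])

# ... and recover that state with the least-squares form of Equation 1.10.
x_rec = np.linalg.solve(P.T @ P, P.T @ y_future)
assert np.allclose(x_rec, x_t, atol=1e-8)

# Equation 1.11: the current output is a fixed linear map of future samples.
y_t = C @ x_rec
assert np.allclose(y_t, C @ x_t, atol=1e-8)
```

The rank check is exactly the observability condition stated in the text: with rank n, the n-dimensional state (and therefore y(t)) is pinned down by r sampled outputs.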
To demonstrate this last point, the mean was removed from the step response and it was embedded in a 10-dimensional space. First a matrix Y was created with a time-delayed version of the step response in each column:

    Y = [y(t)  y(t−δ)  y(t−2δ)  ···  y(t−9δ)] − 0.1    (1.15)

with δ equal to 0.25 seconds and a mean of 0.1. To discover the plane in which the data lies, the eigenvectors and eigenvalues of the auto-correlation matrix (1/N) Y^T Y are used. The eigenvectors indicate the linear axes onto which the data is projected to find its principal components, and the eigenvalues indicate the standard deviation of the data along each eigenvector. Figure 1.1(d) shows the first 5 eigenvalues. Because the third and higher eigenvalues are zero (less than the computer precision), the step response indeed occupies a linear 2-dimensional plane when embedded in a 10-dimensional space. The arrows in Figure 1.1(c) show the direction and relative scale of the first two eigenvectors.

Figure 1.1.: A linear spring-damper-mass system with M = 1 [kg], B = 1 [N s/m], K = 10 [N/m] (a) and the unit-force step response of the mass position (b). A phase plot of the output is shown in (c). This is a 2nd-order system and its output can be projected onto a 2-dimensional surface. In (d) the first 5 eigenvalues of the step response (after removing the mean) embedded in 10 dimensions are shown; the third eigenvalue is below the precision of the computer that was used to generate the results. The arrows in (c) indicate the direction and relative sizes of the two eigenvectors.

The same principle applies to nonlinear systems. A sufficiently long vector of sampled system output may be used to represent nonlinear system dynamics (Takens [1981], Kay [1988]). This technique uses the phase plots of system output to represent the evolution of the system. The dimension of the surface upon which the system output exists can be estimated with, for example, the correlation dimension (Theiler [1987]). This intrinsic dimension is analogous to the order of a linear system, or the system dimension.

The dynamics of linear systems exist in a linear space which can be described with a set of vectors that span that space. For nonlinear systems such as the human arm, the dynamics exist on a surface that is not flat but is some kind of curved manifold. A set of curves can be found to serve as a coordinate system on such a manifold, in a manner analogous to how eigenvectors can be used to map a linear subspace.

To create a simple nonlinear system, the linear system above was used again, but after 12 seconds the system mass, damping and spring constant were modified. The results are shown in Figure 1.2. The dynamics of this system are 3-dimensional: the 2 dimensions of the second-order system of Figure 1.1(a), plus one extra for switching to different system parameters. Figure 1.2(c) shows that the number of non-zero eigenvalues is larger than 2, demonstrating that the system dynamics cannot be projected into a 2-dimensional linear space. The number of eigenvalues is larger than 3 as well, because the transition from one set of parameters to another is nonlinear and the dynamics cannot be efficiently described in the linear space created by the eigenvectors. The projection of the dynamics in 3 dimensions in Figure 1.2(d) shows that the system dynamics are represented by two spirals at an angle to each other.

This section introduced the concept of the intrinsic dimension of a system's dynamics and its application to both linear and nonlinear systems. The relationship between the order of a linear system and the intrinsic dimension of its dynamics was made clear.
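The eigenvalue demonstrations of Figures 1.1(d) and 1.2(c) can be reproduced numerically. In the sketch below, the 0.25 s delay, the 10-dimensional embedding, and the parameter values are taken from the text; the grid spacing and the significance thresholds are my own choices. It builds the delay matrix of Equation 1.15 for both the linear and the switched step response and compares the eigenvalue spectra of the autocorrelation matrix:

```python
import numpy as np
from scipy.linalg import expm

def step_response(segments, dt):
    """Unit-force step response y(t) for a chain of (M, B, K, duration)
    segments; the state carries over at each parameter switch."""
    x = np.zeros(2)
    ys = []
    for M, B, K, T in segments:
        A = np.array([[0.0, 1.0], [-K / M, -B / M]])
        Bu = np.array([0.0, 1.0 / M])
        Ad = expm(A * dt)
        c = np.linalg.solve(A, (Ad - np.eye(2)) @ Bu)  # exact input term per step
        for _ in range(int(round(T / dt))):
            ys.append(x[0])
            x = Ad @ x + c
    return np.array(ys)

def embedding_eigenvalues(y, dt, delay=0.25, n_dim=10):
    """Eigenvalues of (1/N) Y^T Y for the delay embedding of Equation 1.15."""
    lag = int(round(delay / dt))
    cols = [y[(n_dim - 1 - k) * lag : len(y) - k * lag] for k in range(n_dim)]
    Y = np.column_stack(cols)
    Y = Y - Y.mean(axis=0)          # remove the mean, as in the text
    evals = np.linalg.eigvalsh(Y.T @ Y / len(Y))
    return np.sort(evals)[::-1]

dt = 0.01
# Linear system of Figure 1.1: M = 1 kg, B = 1 N s/m, K = 10 N/m.
ev_lin = embedding_eigenvalues(step_response([(1.0, 1.0, 10.0, 12.0)], dt), dt)
# Switched system of Figure 1.2: parameters change after 12 seconds.
ev_sw = embedding_eigenvalues(
    step_response([(1.0, 1.0, 10.0, 12.0), (0.5, 0.5, 20.0, 12.0)], dt), dt)

# Linear case: only 2 significant eigenvalues, the rest at machine precision.
assert ev_lin[2] < 1e-9 * ev_lin[0]
# Switched case: the third eigenvalue is no longer negligible.
assert ev_sw[2] > 1e-6 * ev_sw[0]
```

For the linear record only two eigenvalues are significant, while switching the parameters mid-record makes the third (and higher) eigenvalues clearly non-negligible, as in Figure 1.2(c).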
The dynamics of the human arm are in part of inertial and in part of neuromuscular origin, and both are nonlinear. In Chapter 2 a new method to investigate the intrinsic dimension of interactive arm movements will be presented, and it is based on the observation that a system's dynamics can be represented with a discrete representation of its output.

Figure 1.2.: An example of a nonlinear system: here the step response is shown for a system as in Figure 1.1(a). Initially M = 1 [kg], B = 1 [Ns/m], K = 10 [N/m], but at t = 12 seconds the values are set to M = 0.5 [kg], B = 0.5 [Ns/m], K = 20 [N/m]. In (a) the step response is shown, (b) is the 2-dimensional phase plot. In (c) the eigenvalues are shown, clearly more than 2. In (d) the phase-plot of (b) is extended to 3 dimensions, with the view along the second eigenvector. The time delay between the axes is 0.25 seconds and units were omitted for clarity.

1.3. Complexity of movement kinematics

Study of movement complexity has thus far focused on the complexity of movement trajectories. This section provides a brief overview of the complexity of trajectories during movement. Complexity of a trajectory is not the same as complexity of interactive motor behaviour, a distinction that will be elaborated in Section 1.4. Nonetheless the complexity of movement trajectories will be used as a starting point.

The human body has many more joints than strictly necessary to address the kinematic needs.
The human arm for example has at least 9 Degrees Of Freedom (DOF): flexion/extension and radial/ulnar deviation in the wrist; rotation in the forearm; rotation at the elbow; rotation, flexion/extension and abduction/adduction in the shoulder; and movement of the scapula and the sternoclavicular joint, which allow forward/backward and upward/downward movement of the shoulder. This while only 6 DOF are needed to position the hand anywhere, and with any orientation, within the arm's workspace. An abundance of DOF is good in that it allows a range of strategies for solving movement problems. On the other hand, the mapping of task requirements to the DOF is generally ill-posed, in the sense that no unique solution exists based on movement kinematics alone. An abundance of DOF also requires special strategies when learning a new task quickly, because the amount of information necessary to properly constrain a solution increases with the degrees of freedom involved. This is easy to see when imagining a task requiring two degrees of freedom (reach for a point in a plane) performed with an arm that has three joints: only two joints are needed to solve the task; the additional degree of freedom needs to be constrained with additional criteria.

Based on his observations Bernstein, perhaps the most influential early motor control researcher, suggested a 3-stage process for learning that addresses this problem:

1. reduce the number of degrees of freedom by stiffening peripheral joints;
2. gradually relax the degrees of freedom;
3. internally model and exploit new dynamics.

Bernstein [1967] recognized that movement has an intrinsic dimension (which in the translated text is named “order of the topology”) which is lower than the degrees of freedom involved, but notes that at that time the tools were not available to analyze them.
In later work, Newell and Vaillancourt [2001] analyzed the dimension of motor output and showed that during movement the physical degrees of freedom are mapped onto a manifold of lower dimension. To achieve this they used phase plots and the correlation dimension. For a constrained locomotion task the number of principal components of movement kinematics reduces from 15 to 1 with practice (Haken [1996]). This shows that the dimension and the coordinates of the (sub-)space of movement dynamics do not necessarily correspond to the physical DOF available. It also shows that when learning a novel task movement trajectories tend to become less complex with increased skill. If we return to our simple example of solving a two-dimensional problem with a three-DOF arm: during learning, movement will occupy a three-dimensional space corresponding to the available DOF, but once a suitable strategy is found the observed movement trajectory should collapse onto a two- or one-dimensional surface.

Flash and Hogan [1985] showed that unconstrained point-to-point movements of the hand, with the arm constrained to the horizontal plane, result in hand trajectories that match a minimum-jerk transition. Such a trajectory has a symmetric bell-shaped tangential velocity curve. To fully determine a minimum-jerk transition one needs to specify 5 parameters: begin location (x and y), end location (x and y) and transition time, or some equivalent set. The results of Flash and Hogan [1985] therefore suggest a movement dimension of 5 for this type of planar hand task. If the same point-to-point task is repeated over and over, the same minimum-jerk transition would result, leading to a dimension of 1.

1.4. Compliance and movement

The studies described above provide evidence that the representation in the brain of movement is less complex than the motor apparatus which executes it.
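The five-parameter minimum-jerk transition mentioned in Section 1.3 can be written down directly; the sketch below uses the standard closed form, with illustrative begin/end points and movement time:

```python
# A minimum-jerk point-to-point transition (the closed form used by Flash and
# Hogan [1985]); begin point, end point and movement time are its only
# parameters, i.e. 5 numbers for a planar movement. Values are illustrative.
def min_jerk(p0, p1, T, t):
    """Hand position at time t for a minimum-jerk move from p0 to p1 in T seconds."""
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # smooth 0 -> 1 profile
    return [a + (b - a) * s for a, b in zip(p0, p1)]

def speed(p0, p1, T, t, h=1e-5):
    """Tangential speed, estimated by central differences."""
    a = min_jerk(p0, p1, T, t - h)
    b = min_jerk(p0, p1, T, t + h)
    return sum((q - p) ** 2 for p, q in zip(a, b)) ** 0.5 / (2 * h)

p0, p1, T = [0.0, 0.0], [0.1, 0.2], 0.8          # begin [m], end [m], time [s]
v = [speed(p0, p1, T, T * k / 100.0) for k in range(1, 100)]
peak = max(range(len(v)), key=lambda i: v[i])
print(peak)  # 49: the tangential speed peaks at mid-movement (symmetric bell)
```

The speed profile is 30τ²(1 − τ)² scaled by distance over time, which is the symmetric bell shape described above; changing only the 5 parameters reproduces any such transition.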
However, kinematic measures alone are insufficient to describe motor behaviour during interaction tasks. An interactive hand task is a task where energy is exchanged through the contact of the hand with the environment.

Figure 1.3.: A simple system with a forcing input f. In (a) a random pulse sequence is shown that is applied to the system of Figure 1.1(a) in order to produce the output of (b). In (c) the phase diagram of the output is shown and (d) shows the 10 eigenvalues of the 10-dimensional embedding.

During an interaction task movement kinematics are dictated both by commands descending from the brain and forces applied to the subject's hand by the environment. The intrinsic dimension of movement kinematics tells us little about the relationship between applied forces and the movement trajectory. When force is applied to a physical object with the causality of inertia, it will respond with a displacement. The displacement is characterized by the mechanical compliance of the object in question. For example, the compliance of the system in Figure 1.1(a) in equation form is:

v(s)/f(s) = 1/(Ms + B + K/s)   (1.16)

Here v is the velocity, f is the applied force, M, B and K are mass, damping and stiffness, and s is the Laplace variable. The inverse of compliance (or admittance) is the impedance, which has velocity as input and responds with a force. The human arm is an inertial object and hence has, from the perspective of our environment, a compliance. Compliance couples interaction forces and trajectory during movement. The linear system of Figure 1.1(a) is used to demonstrate this in Figure 1.3. The random input force pulse sequence of Figure 1.3(a) results in an output that, in theory, has an infinite intrinsic dimension.
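The construction of Figure 1.3 can be sketched numerically. The simulation below is illustrative (the pulse timing and amplitudes are assumptions, not the sequence used in the thesis) and checks that the forced output no longer fits a 2-dimensional plane:

```python
import math, random

random.seed(1)

# Simulate M*y'' + B*y' + K*y = f(t) for the system of Figure 1.1(a) driven by
# a random piecewise-constant force.
M, B, K = 1.0, 1.0, 10.0
dt = 0.001
y = v = f = 0.0
ys = []
for i in range(12000):               # 12 s at dt = 0.001 s
    if i % 500 == 0:                 # redraw the force every 0.5 s
        f = random.uniform(0.0, 2.0)
    a = (f - B * v - K * y) / M
    v += a * dt                      # semi-implicit Euler integration
    y += v * dt
    if i % 10 == 0:
        ys.append(y)                 # sample the output at 100 Hz

# Three delayed copies of the output, delta = 0.25 s (25 samples), mean removed.
lag = 25
cols = []
for k in range(3):
    c = [ys[i - k * lag] for i in range(2 * lag, len(ys))]
    mu = sum(c) / len(c)
    cols.append([ci - mu for ci in c])

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

# Residual of the third copy after projecting onto the plane of the first two.
e1 = [p / math.sqrt(dot(cols[0], cols[0])) for p in cols[0]]
p1 = dot(cols[1], e1)
r2 = [p - p1 * q for p, q in zip(cols[1], e1)]
e2 = [p / math.sqrt(dot(r2, r2)) for p in r2]
c1, c2 = dot(cols[2], e1), dot(cols[2], e2)
r3 = [p - c1 * q - c2 * w for p, q, w in zip(cols[2], e1, e2)]
residual = math.sqrt(dot(r3, r3) / dot(cols[2], cols[2]))
print(residual > 0.01)  # True: the forced output does not fit a 2-D plane
```

The random force input adds its own dimensionality to that of the system, so the delayed copies are no longer confined to the plane found for the free step response.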
The phase plot of Figure 1.3(c) does not fit in a 2-dimensional plane, as evidenced by the eigenvalues shown in Figure 1.3(d). This is because the random force input adds its dimensionality to that of the system. The low-pass nature of the system limits the eigenvalues for high-frequency eigenvectors, effectively limiting the dimension of the system kinematics.

Unlike the system of Figures 1.1 and 1.3, the compliance of the human arm changes during movement. It is modulated by the neuromotor system to suit the task that is being performed (Burdet et al. [2001], Franklin and Milner [2003], Gomi and Osu [1998]). A characterization of interactive motor behaviour should therefore include the evolution over time of both compliance and position. Such characterizations of hand compliance during movement have been made (Gomi and Kawato [1997, 1996], Latash and Gottlieb [1991]) but the estimation methods have to assume a model for the compliance before it can be estimated. Gribble et al. [1998] show that a realistic model of the neuromuscular system can predict observed motor behaviour that other researchers used to investigate the nature of the motor command generated by the brain. The motor command originally inferred from the observations and the one used in the neuromuscular model were fundamentally different. This difference was produced by assumptions made regarding the nature of mechanical compliance at the hand. So in order to characterize the complexity of motor behaviour we have to include the evolution of compliance over time, but we should avoid making assumptions regarding its functional form.

When we discussed the analysis of the complexity of movement kinematics above, we started with the observation that the number of degrees of freedom for movement of the arm is larger than strictly necessary. Similarly, the number of independent contributions to compliance at the hand seems larger than perhaps needed.
Typically, more muscles cross a joint than there are degrees of freedom for movement. The human elbow for example has four muscles to produce flexion and extension (biceps brachii, brachialis, brachioradialis, triceps brachii). A muscle has nonlinear damping and stiffness properties (Hill [1938]). Because the relationship between force and muscle length is nonlinear, joint stiffness and joint torque can be controlled independently (Figure 1.4, see also Feldman [1966] and other work by the same author). Reflexes associated with arm movement rely on position, velocity and force feedback from each muscle. Reflexes also contribute to arm compliance and this contribution can be modulated by the central nervous system (Hoffer and Andreassen [1981], Bennett [1994], Wang et al. [2001]). Finally, arm posture during movement influences the inertia at the hand, but also influences the moment arms and resting length of muscles, thus influencing the ability to generate force and stiffness (Buneo et al. [1997], Milner [2002]).

Given that each muscle of the arm contributes at least one degree of freedom to the compliance at the hand and that there are 25 muscles crossing the shoulder, elbow or wrist joint, the degrees of freedom for the hand compliance for a given arm configuration can be estimated at 25. This should be adjusted to a lower value because some muscles have overlapping mechanical output, but it is nonetheless much higher than would appear necessary. Assuming that modulation of reflexes adds one extra DOF per muscle we arrive at 50 DOF for the compliance of the hand. The arm has 9 kinematic DOF, which leaves 3 to vary the arm configuration without changing hand position or orientation. This makes a total of 53 DOF for the hand compliance. When ad hoc trying to account for the overlap between muscles from a visual appraisal of the arm anatomy, the number of muscle groups might be reduced to 14, which gives 31 DOF.
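The point that a nonlinear length-tension relationship allows independent control of joint torque and stiffness (Figure 1.4) can be illustrated with a minimal Feldman-style sketch; the exponential curves and all parameter values below are illustrative assumptions, not a fitted muscle model:

```python
import math

# Agonist-antagonist pair with exponential length-tension curves.
a = 2.0  # curvature of the length-tension relationship (illustrative)

def net_torque(theta, c, s):
    """Flexor minus extensor torque; c = co-contraction, s = reciprocal shift."""
    flexor = math.exp(a * (c + s - theta))     # threshold at theta = c + s
    extensor = math.exp(a * (theta + c - s))   # threshold at theta = s - c
    return flexor - extensor

def stiffness(theta, c, s, h=1e-6):
    """Joint stiffness = -d(torque)/d(theta), by central differences."""
    return -(net_torque(theta + h, c, s) - net_torque(theta - h, c, s)) / (2 * h)

# Same equilibrium angle (theta = 0) at three levels of co-contraction:
for c in (0.0, 0.5, 1.0):
    print(round(net_torque(0.0, c, 0.0), 9), round(stiffness(0.0, c, 0.0), 2))
# net torque stays 0 at the equilibrium while stiffness grows with co-contraction
```

Raising both activation thresholds together (co-contraction, c) leaves the equilibrium point untouched but increases stiffness; shifting them reciprocally (s) moves the equilibrium without changing stiffness, so the two quantities are controlled independently.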
Figure 1.4.: The non-linear length-tension relationship of muscles allows independent control of stiffness and torque at the joint. The dashed lines indicate the agonist (top) and antagonist muscle curves, and the dash-dotted line their mean. On the left the stiffness is relatively low. In (b) there is a stronger co-contraction resulting in higher stiffness. The vertical lines indicate different points of equilibrium. The units are arbitrary.

1.5. Internal models and movement adaptation

The ability to learn a new task is generally attributed to the adaptation of an internal model in the CNS. An internal model stores knowledge about the environment and translates the desired motor output into an appropriate set of motor commands. Figure 1.5 shows how an internal model might function to allow adaptive motor behaviour. In Figure 1.5(a) a desired motor output is represented in the brain as a motor plan (MP). The controller (C) translates the MP into a motor command (MC) which descends down the spinal cord onto the motor-neuron pool (MNP). The MNP integrates spinal feedback from the arm and the MC to produce muscle activations. The spinal feedback delay is indicated with δ and the loop delay to and from the brain with ∆ (see also Section 2.1). Adaptation and learning in motor behaviour consist of optimizing the internal model. Bhushan and Shadmehr [1999] found that experimental results are better predicted when a computational model includes a forward model (Figure 1.5(b)). One role of the forward model may be that it aids in the selection of the most suitable internal model from a larger set. In such a scheme the contribution of each internal model is weighted by the predictive ability of its associated forward model (Wolpert et al. [1998], Kawato [1999]). The same principle was put forward by Gottlieb et al.
[1996], who argued that “synergies”, patterns of activation that can be combined or scaled in amplitude and/or time to suit a range of tasks, explain how during pointing tasks the torques at the elbow and shoulder joints show a linear relationship throughout a movement, independent of load or speed. Devanne et al. [2002] demonstrated evidence of such synergies in the motor cortex for pointing tasks.

Figure 1.5.: Model of the motor control process. (a) The global structure is shown with the short-loop or spinal reflex delay δ and the long-loop delay ∆ associated with responses mediated by the brain. Indicated are the motor plan (MP), the controller (C), the motor command (MC) and the motor neuron pool in the spinal cord (MNP). (b) The control block “C” has been expanded to demonstrate how internal models may be involved. A feedback controller (Cfb) produces a control action which is translated into a feed-forward motor command (MC) by the inverse model (IM). The forward model (FM) predicts the response of the arm to improve the overall response time.

The combining of internal models can introduce complexity much like what was presented in Section 1.2 and Figure 1.2. The internal models available to address a motor task can be considered “modes” of the controller that are combined for optimal performance, and hence can be considered eigenfunctions of sorts of the nonlinear adaptive controller. Donchin et al. [2003] and Shadmehr et al. [2005] show that the adaptation of forces measured at the hand can be interpreted as resulting from basis functions that adapt and generalize based on the trajectory error of the previous movement. This adaptation based on the previous movement is evident even when the type of force field is random from one movement to the next (Scheidt et al. [2001]).
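A minimal sketch of such error-driven basis-function adaptation (the centres, widths, learning rate and the force field itself are all illustrative, not taken from the cited studies):

```python
import math

# A force-field estimate built from Gaussian basis functions is corrected
# after each movement using the error of that movement, in the spirit of the
# models of Donchin et al. [2003].
centres = [0.1 * k for k in range(11)]   # basis centres along the movement
width = 0.15

def basis(s):
    return [math.exp(-((s - c) / width) ** 2) for c in centres]

def predicted(w, s):
    return sum(wi * gi for wi, gi in zip(w, basis(s)))

def field(s):                            # the environment's force field
    return math.sin(math.pi * s)

w = [0.0] * len(centres)
eta = 0.1                                # learning rate
samples = [0.05 * k for k in range(21)]  # points along one movement
for trial in range(200):
    for s in samples:
        err = field(s) - predicted(w, s) # movement error drives adaptation
        w = [wi + eta * err * gi for wi, gi in zip(w, basis(s))]

residual = max(abs(field(s) - predicted(w, s)) for s in samples)
print(round(residual, 3))  # small compared with the field's unit amplitude
```

Because each basis function is broad, an error at one point also updates the prediction at neighbouring points, which is the generalization behaviour described above.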
The concept of basis functions for movement adaptation is compatible with computational models for other parts of the brain such as the visual cortex (see e.g. Wurtz and Kandel [2000] for a review). It is also compatible with adapting inverse and forward internal models; the models may well be composed of such basis functions. These basis functions may also be considered as modes of the controller. Indeed, perhaps each neuron in the motor cortex has an associated forward model to estimate its relevance on-line.

1.6. Experimental questions

The main question we would like to answer is this: “What is the intrinsic dimension of interactive motor behaviour?” It is expected that this dimension is low with respect to the number of degrees of freedom available, analogous to the findings for unconstrained and passive interaction tasks. This would be an important finding because it has implications regarding the organization of adaptation in motor behaviour; a low dimension or number of modes available to the CNS provides a basis for fast adaptation to new environments, without having to address the ill-posed problem associated with adaptation of a system in which the number of parameters is larger than the number of DOF required.

To discover the intrinsic dimension of motor behaviour the MC will be estimated from recorded motor behaviour. Once identified, the MC will be related to task parameters such as movement time and distance to estimate the relative importance of these factors in movement adaptation. These findings will indicate whether Bernstein's hypotheses regarding the intrinsic dimension of motor control hold when movement trajectory and compliance are both accounted for. Together with the relative importance of the task parameters this will provide a basis for further refinement of a human motor control model.

1.7. Organization of this thesis

In the next chapter the methods used to analyze data are presented.
In Chapter 3 the methods used to collect data are presented. Chapter 4 describes the results, which are discussed in Chapter 5. Finally, Chapter 6 concludes, listing the main contributions and commenting on future perspectives for this study. The appendices detail much of the work that was done on the robotic platform that was developed for this research.

2. Input inference with neural networks

I asked him where he had it made, he said he made it himself, and when I asked him where he got his tools said he made them himself and laughing added if I had staid for other people to make my tools and things for me, I had never made anything...
Isaac Newton (epigraph by Gleick [2003])

In the previous chapter the concept of the dimension of interactive movement was introduced. The aim of this research is to estimate this dimension for experimental data collected from subjects. To achieve this goal we need an experimental design that elicits responses from human subjects that allow dimension estimation, and a tool with which to analyze the collected data that captures this response. Chapter 3 will discuss the experimental design. In this chapter the tool developed for dimension estimation is presented.

The dimension estimation tool can be applied to any type of system, but its development anticipates its application to experimental data, presented in Chapter 3, so it is useful to have some idea at this point about how the experiments with human subjects are performed. Subjects will be seated in a chair, grasping with their dominant arm the end-effector of a robotic device. The robotic device is used to provide the interactive environment. Subjects are seated with their elbow suspended and the wrist in a restraint so that all movement occurs in a horizontal plane through the elbow and shoulder joints. The movements are therefore limited to two spatial dimensions.
Section 2.1 presents the measurement problem and provides the theoretical basis, both for the experiment design of Chapter 3 and for the development of a new method to infer a representation of unknown inputs of nonlinear systems. The latter is the Input Inference Neural Network (IINN) and is presented in Section 2.2. Section 2.3 presents a synthetic interactive movement problem for which the dimension is known. In Section 2.4 the IINN is applied to the synthetic data and a curve-fitting technique is developed to produce an estimate of the dimension of the underlying process. The curve fitting used to estimate the dimension is further examined and supported in Section 2.5. The IINN produces a representation of the inputs of the system under study; Section 2.6 interprets these inputs for the synthetic data and demonstrates that the IINN correctly maps the dependency, expressed in the synthetic data, of the output on hidden inputs.

Figure 2.1.: Model of the motor control process.

2.1. The measurement problem

Figure 2.1 is a general diagram of the motor control process. A highly abstracted description of a movement is thought to exist in the brain as a Motor Plan (MP). The MP is informed by internal and external influences. An internal influence may, for example, be motivation or proprioceptive feedback. External influences are, for example, task descriptions and obstacles perceived through visual input. The level of detail in the MP varies. If a subject is instructed to move his or her hand while making the arm as stiff as possible, then stiffness has entered the MP as a factor where it otherwise might not. In simple terms, the MP expresses what a person wants to do with the arm. The MP sets the target performance for the controller (C).
The controller considers the feedback from previous movements and uses it to produce a Motor Command (MC), which travels down the spinal cord to the Motor Neuron Pool (MNP) and finally the muscles. Feedback from position and velocity sensors in the muscles (muscle spindles) and force sensors in the tendons (Golgi organs) goes to both the MNP in the spine (short-loop reflexes) and higher motor functions in the brain. The loop delay of the short-loop reflexes (δ) is approximately 30 milliseconds and the loop delay of feedback to higher levels (the long-loop reflexes and cognitive responses) is 155 milliseconds and longer (∆); the reaction time to visual cues is longer still: at least 180 milliseconds (Brebner and Welford [1980]). These delays are due to finite conduction velocities in the nerves, delays in synapses, and the delay between electrical activation of muscle and the actual generation of force.

In our analysis below the short-loop reflexes and the intrinsic properties of the arm will be lumped together and considered a single subsystem. This is justified because the short-loop reflexes act autonomously. The arm and reflexes together will be considered as a compliance that is modulated by a supra-spinal controller.

For a brief period immediately after a sudden perturbation is presented, say an unexpected push on the hand, we can assume a feed-forward control process during which no voluntary adaptations can be made to the MC. This brief period is equal to the duration of the long-loop delay. The short-term response to the force pulses is therefore a measure of the hand compliance programmed by the CNS. If we consider Figure 2.1, it can be observed that the behaviour that might be recorded at different feed-forward levels of the neuromuscular system is equivalent, i.e.
the MC is equivalent to the motor output at the hand, in the sense that a mapping exists from the MC to motor output and that the MC and the motor output have the same intrinsic dimension. This equivalence can be seen more easily when considering two complementary types of tasks. The first type of task is unconstrained movement, i.e. the hand and arm are moving freely. For a practised unconstrained movement the MC produces a repeatable hand trajectory over time. The second type of task is to generate a particular force level and direction with the hand on a rigid object (no movement). In this case the MC is translated to an evolution of force over time. Neither of these types of tasks is considered interactive here, since there is no exchange of energy through the hand. Instead, we will focus on movements where both the force exerted with the hand and the hand trajectory are modulated during task execution. For interactive tasks, the evolution in time of both position and force is needed to fully capture the motor behaviour. During many daily activities, and also during the experiments that were performed for this research, the hand is coupled to an external dynamic system which imposes dynamic constraints on the relationship between force and velocity at the interface. Even so, given the external system dynamics, the MC is equivalent to the observed motor behaviour. In other words: the external system dynamics and the MC together determine the motor behaviour as expressed in the time-evolution of the hand interaction forces and position. In the following the equivalence of the MC and the resulting motor behaviour is analyzed in equation form. This analysis will be used in Section 2.2 to support the development of a new tool to exploit the equivalence in order to determine MC complexity. Because the arm configuration is restricted to two DOF during experiments all forces and arm kinematics can be expressed in workspace coordinates. 
The dynamics due to inertia and passive damping and stiffness can be written as:

Fpassive = M(x)ẍ + B(x, ẋ)ẋ + K(x)(x − x0)   (2.1)

where x is the hand position vector, M(x) is the position-dependent inertia matrix, B(x, ẋ) is the passive damping in the arm, K(x) is the passive elasticity in the arm, x0 is the reference position for the spring component, and Fpassive is the force at the hand resulting from arm inertia and passive stiffness and damping in the arm. The force contributed by muscles depends on the MC and the current state of the arm (x, ẋ), as well as on feedback from muscle spindles (xδ, ẋδ, where δ indicates the short-loop reflex delay) and the Golgi tendon organs (Fδ)¹:

Fmuscle = Φ(MC, x, ẋ, xδ, ẋδ, Fδ)   (2.2)

The precise nature of the MC is not important, but it is a representation of the signal that descends down the spinal cord to execute a movement. Function Φ is some unknown map that translates its input into the muscle force Fmuscle. Below, the letter Φ will be used as an arbitrary symbol for a mapping. The passive dynamics of Equation 2.1 can be subsumed in the same functional form:

Fext = Fpassive + Fmuscle   (2.3)
     = Φ(MC, x, ẋ, ẍ, xδ, ẋδ, Fδ)   (2.4)

where Fext is the external force applied to the hand to balance the forces. The reference position x0 was dropped for convenience and without loss of generality, since it is a constant. When the arm is moving freely, Equation 2.4 describes its dynamics:

Φ(MC, x, ẋ, ẍ, xδ, ẋδ, Fδ) = 0   (2.5)

When an external interaction force is present the dynamics have an added force input:

Φ(MC, x, ẋ, ẍ, xδ, ẋδ, Fδ) − Fext = 0   (2.6)

or, adding Fext into the left-hand function:

Φ(MC, x, ẋ, ẍ, xδ, ẋδ, Fδ, Fext) = 0   (2.7)

A sensor mounted in the handle of the manipulandum measures the external force Fext and the encoders on the motors provide x, ẋ and ẍ.
The feedback from the Golgi tendon organs (Fδ) reflects the total force generated by the muscles, which is:

Fmuscle = −Fpassive − Fext   (2.8)

so Fδ can be written as Fmuscle with a delay δ applied:

Fδ = −Fδpassive − Fδext   (2.9)

The passive forces of Equation 2.1 are determined by the movement kinematics and the arm characteristics. Substituting Equation 2.9 in Equation 2.7 we obtain:

Φ(MC, x, ẋ, ẍ, xδ, ẋδ, −Fδpassive − Fδext, Fext) = 0   (2.10)

The delayed passive forces Fδpassive can be obtained by using the delayed versions of position, velocity and acceleration in Equation 2.1, so the above is equivalent to:

Φ(MC, x, ẋ, ẍ, xδ, ẋδ, ẍδ, Fδext, Fext) = 0   (2.11)

For analysis, the data will be represented as discretized values sampled at 100 Hz. This sampling speed is sufficient to allow reliable estimation of velocity and acceleration from position data, as human hand movements occur at a bandwidth of less than 10 Hz (see Figure 2.2). Filters to estimate velocity and acceleration are linear functions of a sampled position data vector and can be subsumed in the model. Using this, Equation 2.11 can be simplified to:

Φ(MC(k), x+(k), xδ+(k), Fδext(k), Fext(k)) = 0   (2.12)

where k is the sample index. The superscript + indicates that these quantities are extended to contain a suitable-length time history, i.e.:

x+(k) = [ x(k) x(k − 1) x(k − 2) . . . ]   (2.13)
xδ+(k) = [ x(k − δ) x(k − 1 − δ) x(k − 2 − δ) . . . ]   (2.14)

A final simplification of Equation 2.12 can be made by combining x+ and xδ+ in a single vector, since they both are time-history records of position.

¹ Golgi tendon organ feedback does not accurately represent the force exerted by the muscle. For simplicity its feedback is represented as force Fδ. This has no implications for the derivations, since the dynamics of the Golgi tendon organs are subsumed in the overall arm and short-loop reflex dynamics.
The sample time used for data analysis is 10 milliseconds, so the spinal reflex delay δ is only a few samples long (3 samples for 30 milliseconds). This yields the following dynamics equation:

Φ(MC(k), x+(k), Fδext(k), Fext(k)) = 0   (2.15)

This expresses the equivalence between MC(k) and the combined record of movement kinematics and interaction forces. It is possible to create a function that infers MC(k) from x+(k) and F+ext(k). Such a model inversion has two components: inertial dynamics and neuromuscular dynamics. The partial inversion of the inertial contribution involves the inversion of the mass matrix M(x), which is a 2 × 2 matrix due to the constraints on the arm. It is full rank because the arm never assumes a singular configuration (the elbow is always bent). Inversion of the neuromuscular dynamics assumes a unique relationship between the trajectory and the MC, given the feedback and the current state. Let us assume for the sake of argument that such a unique relationship does not exist. The CNS would then not be able to create an inverse map that allows the generation of a MC based on a MP. Since such a functional relationship certainly does exist, the inversion also exists (see also Bhushan and Shadmehr [1999], Wolpert et al. [1998]). The input MC has to be observable, just as hidden states must be in order to allow a state observer (Ogata [1996]). The trajectory is not the output of the neuromotor process. The output is the trajectory and the arm compliance together, which are recorded here as movement trajectory and interaction forces.

Figure 2.2.: Amplitude-frequency spectra of all collected position data.

The following map is what we wish to find:

M̂C(k) = Φ(x+(k), Fδext(k), Fext(k))   (2.16)

where M̂C is the motor command estimate.
The trajectory is an explicit input to this map and the impedance of the arm is implied in the relationship between x+(k) and Fext(k).

In the data analysis a further simplification is made. It is assumed that during the time segment of data used for analysis the MC is constant. This reflects the observation that, due to the large delay time for feedback acting through the brain, the motor control process has to behave like a hierarchical controller and supply presets for the periphery rather than perform on-line control. Because of the delay in communication from the arm to the brain and back, fast arm movements have to be preplanned. The analysis window that is used is 170 milliseconds long, which is too short for long-loop feedback to excite a change in the MC (see Section 2.2.2 for further elaboration). Incorporating this final simplification, the desired mapping assumes the following form:

M̂C = Φ(x+, F+ext)   (2.17)

where F+ext is a time-history of the external forces. This last simplification is justified because the M̂C is taken as constant for a sequence of samples. It is not assumed that the signals that descend down the spinal cord are constant, but rather that their evolution over time can be parametrized in a way that captures their dimension. This is similar in concept to the minimum-jerk transitions that were found to represent hand movement in experiments by Flash and Hogan [1985]. These trajectories are in themselves not constant, but a single movement can be characterized completely by specifying begin and end point and the movement time. Similarly, the M̂C is a representation of the actual MC, which
The algorithm is implemented using a neural network and, since it attempts to infer the hidden input MC, is called the Input Inference Neural Network (IINN). This section presents the IINN architecture and the method used for its adaptation.

2.2.1. Architecture

Figure 2.3 shows a diagram of the IINN architecture. The model is composed of two levels: level 1 (L1) estimates \hat{MC} and level 2 (L2) models the motor behaviour. The connection from L1 to L2 is a "bottleneck"; the number of connections between the two levels in the IINN is controlled to impose a dimension on the \hat{MC}. The modelling error in L2 will increase when the bottleneck size is reduced below the true MC dimension, and hence provides a basis for estimating this dimension. This will be elaborated further below. L1 implements Equation 2.17 and its input vectors are:

F^+_p = [ F_p(k_p)  F_p(k_p − 1)  ...  F_p(k_p − n) ]    (2.18)
x^+_p = [ x_p(k_p)  x_p(k_p − 1)  ...  x_p(k_p − n) ]    (2.19)

The index p indicates the p-th movement in the experiment. Note that for each movement p the input to L1 is fixed, i.e. during the window of analysis MC is assumed to be constant, as explained above in Section 2.1. For each movement the time index k_p is set to the moment when the force perturbation is applied. The selection of a suitable value for n will be discussed in Section 2.2.2. L2 reconstitutes the motor output by implementing a model of the motor behaviour of the arm, including the spinal reflex loops. The L2 model is a nonlinear auto-regressive moving average model:

\hat{\dot{x}}_p(k) = L2(F^*_p(k), x_p(k − m), \hat{MC}_p)    (2.20)

where:

F^*_p(k) = [ F_p(k)  F_p(k − 1)  ...  F_p(k − m) ]    (2.21)

Note that the inputs and outputs of L2 are not fixed for each movement. The model in L2 has to integrate the force input from position x_p(k − m) onward to estimate the velocity \dot{x}_p(k). The selection of a value for m will be discussed in Section 2.2.2. Neural networks are used to implement L1 and L2.
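Before the node equations are detailed, the coupled L1-L2 structure can be sketched in a few lines. This is a minimal illustration of the forward pass only, assuming 8 nonlinear nodes per level (the size selected later) and a bottleneck of 3; the function names and random placeholder inputs are mine, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_init(n_in, n_out):
    # small uniform initial weights, with a bias column appended (cf. Section 2.2.1)
    return rng.uniform(-0.3, 0.3, size=(n_out, n_in + 1))

def two_layer(W1, W2, x):
    # first layer nonlinear (tanh), second layer linear; bias constant appended
    h = np.tanh(W1 @ np.append(x, 1.0))
    return W2 @ np.append(h, 1.0)

n, m, nb = 14, 9, 3           # history lengths and bottleneck size
dim_L1_in = 4 * (n + 1)       # F+ and x+ are 2-D each: 60 inputs to L1
dim_L2_in = 2 * (m + 2) + nb  # F*(k), x(k-m), plus the nb bottleneck values

W1_L1, W2_L1 = layer_init(dim_L1_in, 8), layer_init(8, nb)
W1_L2, W2_L2 = layer_init(dim_L2_in, 8), layer_init(8, 2)

x_and_F_history = rng.standard_normal(dim_L1_in)    # fixed per movement p
MC_hat = two_layer(W1_L1, W2_L1, x_and_F_history)   # bottleneck output, dim nb

F_star_and_x = rng.standard_normal(2 * (m + 2))     # varies with time index k
v_hat = two_layer(W1_L2, W2_L2, np.concatenate([F_star_and_x, MC_hat]))
```

The key design point is that the only path from the movement history to the velocity prediction runs through the nb-dimensional MC_hat vector, so L2 can only reproduce the behaviour if that representation is rich enough.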
Neural networks are computational devices which allow regression of arbitrary nonlinear data. I will briefly outline the details of the implementation used; for a more in-depth treatment see Bishop [1995].

Figure 2.3.: The Input Inference Neural Network (IINN).

Both the L1 and L2 networks have 2 layers. The first layer is nonlinear and its nodes implement the following function:

y^{(1)}_i = tanh(W^{(1)T}_i · x^{(1)})    (2.22)

where W^{(1)}_i is the weight vector and y^{(1)}_i is the output of node i in layer 1. Vector x^{(1)} is the input to the network with a bias constant added. From Equations 2.18 and 2.19 the input to L1 is:

x^{L1(1)}(k) = [ F^+_{ext,p}  x^+_p  1 ]^T    (2.23)

and from Equations 2.20 and 2.21 the input to L2 is:

x^{L2(1)}(k) = [ F^*_{ext,p}(k)  x_p(k − m)  \hat{MC}_p  1 ]^T    (2.24)

The second layer in both L1 and L2 is linear:

y^{(2)}_i = W^{(2)T}_i · x^{(2)}    (2.25)

The input to layer 2 is the propagated output from layer 1 and a bias:

x^{(2)} = [ y^{(1)}  1 ]    (2.26)

The connection between L1 and L2 is a bottleneck where the L2 model has to draw on \hat{MC}_p to reproduce the observed motor behaviour. The bottleneck size determines the dimensionality of the MC representation the IINN produces. If the bottleneck size is too small, the model error will be relatively large. When the bottleneck size is increased, the model error decreases. At some point the dimension of the bottleneck is correct and further increases in size will have little effect on the model error. By making the bottleneck too small, the model is overconstrained and a bias will be present in the error, which expresses the inability of the model to represent the data. The IINN resembles nonlinear principal component analysis (NLPCA) with neural networks, introduced by Kramer [1991].
NLPCA is used to find nonlinear equivalents of the principal components of a data set by compressing the data onto a manifold in L1 and recreating the same data in L2. The difference between the IINN and NLPCA is that with the IINN the L2 layer is used to reproduce a system behaviour rather than a data set. NLPCA compresses and recreates a given data set, while the IINN attempts to recreate a system behaviour using a minimal representation of the input to L1. When applying NLPCA, L2 does not have additional inputs and its target output is the same as the input to L1.

When the bottleneck is too large it is possible that overfitting occurs. Overfitting means that the model adapts to noise in the data, resulting in a lower error without the model being better able to explain the underlying process. Here this is avoided by using a technique called early stopping. With early stopping the available data is divided into two sets. A training set is used to adapt the weights and a validation set is used to verify the model's generalizing ability. Typically during training the error for the validation set will first decrease along with the training set error and then increase as the model starts to specialize in features present only in the training set. The available data was divided equally between a training set and a validation set through random selection. This maximizes the likelihood that the process characteristics are equally well represented in both, and hence maximizes the effectiveness of early stopping.

The weights in the IINN were adapted using the Levenberg-Marquardt algorithm to minimize the Sum-Squared-Error (SSE) [Bishop, 1995, page 290 and on]. The error of the IINN is:

\epsilon_p = \sum_k \| \dot{x}_p(k) − \hat{\dot{x}}_p(k) \|    (2.27)

where \epsilon_p is the scalar error for movement p.
The cost function is:

J = (1/2) \sum_p (\epsilon_p)^2    (2.28)

Before adaptation the weights were initialized with small pseudo-random values drawn from a uniform distribution on [−0.3, 0.3]. All input and target data was normalized to a [−1, 1] range to avoid scaling issues. Training was terminated when the validation set error stopped decreasing. Training was repeated several times with different initial weight vectors to avoid local minima. The algorithm was implemented in C++ and was evaluated on the Von Neumann computing cluster at the U.B.C. Physics department.

2.2.2. Dimensioning and data processing

Of each movement only a 140 millisecond segment of data, starting at the onset of the force pulse, is used to infer \hat{MC}_p. As discussed above, the input to L1 is fixed for a given movement p. Each 140 millisecond vector requires n = 14 (see Equations 2.18 and 2.19), since the sampling time is 10 milliseconds.

The inputs to L2 are F^*_p(k) (Equation 2.21), x_p(k − m) and \hat{MC}_p. Index k assumes 17 consecutive values per movement starting with k_p, so 170 milliseconds starting from the onset of the force pulse. This time span is slightly larger than the minimum delay for long-loop reflexes but shorter than the reaction time to visual cues (see Section 2.1). The long-loop reflexes are not voluntary modifications, so the amount of voluntary input present in the few extra samples will be minimal. A slightly larger number of samples was used to maximize the data set size.

The value of m (Equation 2.21) has to be sufficiently large to capture all relevant dynamics of the peripheral neuromotor system and to reduce the correlation between x_p(k) and x_p(k − m). The latter, because if m were small the prediction of v_p(k) at the output would be much simplified, reducing the dependency of L2 on input from L1.
A value of m = 9 has been used, corresponding to a 90 millisecond gap between x_p(k) and x_p(k − m), which is much larger than the spinal reflex time. Since m represents a delay larger than the spinal reflex time but shorter than the delay in any feedback from the brain, L2 will model the intrinsic dynamics of the arm as well as dynamics due to reflexes. L2 is forced to integrate the force input from x_p(k − m) onward to produce an estimate of the current velocity v_p(k), using \hat{MC}_p as additional input. The \hat{MC}_p is created through back-propagation of the error at the output of L2, which is why the force perturbations need to be impulses; they need to reduce the predictability of the movement trajectories.

Another choice that must be made is an upper bound on the dimension that \hat{MC}_p can assume. Both L1 and L2 are implemented as neural nets with two layers. The number of nonlinear nodes in L1 is limited; determining its size is discussed in the next section. The maximum number of meaningful bottleneck connections that can be made is equal to or less than the number of nonlinear nodes in L1. It is also limited by the number of training patterns per movement (17). These constraints are satisfied in the application of the IINN below and in Chapter 4. Vectors x_p(k) and F_p(k) are both 2-dimensional, so the total number of inputs to L1 is 4(n + 1) = 60 and the number of inputs to L2, excluding \hat{MC}_p, is 2(m + 2) = 22. The bottleneck size is limited by the amount of meaningful input to L1, which is 60 in the extreme but is reduced in practice by correlation between the samples. In the results (Section 2.3 and Chapter 4) we will see that the bottleneck size is well below both the number of nonlinear nodes and the number of inputs. The total number of patterns (combinations of input and output values used to train the network weights) per movement available for training the model is 17 (see Section 2.2.2).
With 200 movements per experiment this makes for 3400 patterns, half of which are used to adapt the weights. The other half is used for the validation set, in order to avoid overfitting through early stopping. The advantage of using the same number of patterns in the training and the validation set is that the statistical properties of the two data sets are more likely to be similar, resulting in a better training result. The number of training patterns (1700) was generally much larger than the number of weights in the IINN. The early stopping, however, is equivalent to weight regularization, meaning that the validation set provides an extra constraint on the weights, thereby reducing the effective number of DOF that exist within the model to a minimum (Bishop [1995]).

2.2.3. Other dimension extraction algorithms

Other algorithms exist for dimension estimation of system dynamics. One is the correlation dimension (Theiler [1987]), which is a geometric method that infers the intrinsic dimension of a data set using the distance between data points. The data is first embedded in a phase space with a dimension known to be larger than the intrinsic dimension. Next, the average number of points that fall within a radius r is counted. The logarithm of this number of points is plotted against the logarithm of r, and the slope represents the dimension. As a simple example, if all points are on a line, then the number of points will increase linearly with r. If all points are on a 2-dimensional surface, then the number of points will increase with the square of r. This method is not applicable here, since we do not have a free system (a free system has a constant or absent input). The MC is an unknown input that influences the dimension, and we also have force perturbations and changing fields.
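The counting procedure just described can be sketched with a simplified two-radius version of the correlation-dimension estimate (function names and radii are mine; a full treatment would fit the slope over many radii):

```python
import numpy as np

def correlation_dimension(points, r_small, r_large):
    """Estimate intrinsic dimension from the log-log slope of the
    pair-count C(r) between two radii (simplified two-point fit)."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    pairs = dists[np.triu_indices(len(points), k=1)]   # unique point pairs
    c_small = np.mean(pairs < r_small)
    c_large = np.mean(pairs < r_large)
    return (np.log(c_large) - np.log(c_small)) / (np.log(r_large) - np.log(r_small))

rng = np.random.default_rng(1)
# Points on a line embedded in 3-D phase space: intrinsic dimension is 1,
# so the pair count grows roughly linearly with r
line = np.outer(rng.uniform(0, 1, 800), np.array([1.0, 2.0, -1.0]))
d_est = correlation_dimension(line, 0.1, 0.4)
```

For the line example d_est comes out near 1, illustrating the "number of points increases linearly with r" case from the text.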
In Section 1.2 a linear system was used as an example, and it was demonstrated that the dynamics embedded in a phase space exist on a linear surface with a dimension equal to the order of the system. For a nonlinear system the same dynamics occupy a curved surface. The correlation dimension can be used to estimate the dimension of such a surface. Although the motor behaviour is reflected in the recorded force and trajectory information, the motor planning is better expressed in terms of, for example, a desired trajectory and a hand compliance. These together determine the relationship between interaction force and hand trajectory, and can be expressed in a much simpler way than force and trajectory. This was explained in Section 1.4, where a random, highly complex force sequence is applied to a simple compliance. The observed force and trajectory are complex, but their relationship is simple. This is why the correlation dimension or, for the same reason, NLPCA (Kramer [1991]), cannot be applied.

2.3. A synthetic problem

To demonstrate how the IINN works, a synthetic data set was created by simulating the linear mass-damper-spring system of Figure 2.4(a). Inputs representing the CNS are reference position x_0, stiffness K and mass M, so the MC dimension for this simulation is 3. Input trajectory x_0 is a constant shift from 0 at 0 seconds to the target position at 0.2 seconds. The target position is a pseudo-random value from [0.3, 1] m with a uniform distribution and with a random sign. Stiffness K is a pseudo-random value from [3, 30] N/m with a uniform distribution and mass M is a pseudo-random value from [0.15, 0.3] kg with a uniform distribution. Damping is fixed at 2 Ns/m. A "force field" is present as a time-dependent force:

F_{ext}(t) = a (1 − cos(x)) / 2    (2.29)
with a pseudo-random amplitude a from [0, 5] N with a uniform distribution. A 100 millisecond force perturbation is also applied, with a pseudo-random amplitude from [−10, 10] N with a uniform distribution. Simulated values were stored for 200 movements sampled at 100 Hz. Figure 2.4(b) and (c) show examples for K = 15 N/m and M = 0.2 kg, a field amplitude of 2.5 N and a perturbation amplitude of 5 N. All IINN dimensions were the same as presented in Section 2.2.2, but since this example is 1-dimensional the input and output vectors are half the size.

Figure 2.4.: (a) Block diagram of the system used to create synthetic data. (b) Example of a simulated movement - the dashed line is with F_ext = 0. (c) Field (solid) and perturbation (dashed) forces used to produce the solid curve in (b).

2.4. Curve fitting of the error values

The IINN was retrained 10 times for each bottleneck value. Figure 2.5 shows the results for 8 nodes in the first layers of both L1 and L2. The model error values for a given bottleneck size are random outcomes of the adaptation process that depend on the initial weights selected for the IINN and the distribution of the available data over the training and validation sets. Weight initialization and data distribution are done randomly before each retraining, and retraining increases the probability of finding a solution near the global optimum. The "true" error for a given bottleneck size, which is the error we would find at the global optimum, therefore lies between zero and the minimum error found. To estimate the dimension from the decreasing error with increasing bottleneck size we need to find the "inflection point", or the point at which the error stops decreasing.
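The retraining scheme described above, repeat the training several times per bottleneck size and keep the minimum error as the best available proxy for the global-optimum error, can be sketched as follows. Here `train_once` is a hypothetical stand-in for a full IINN adaptation run, with an error floor plus optimization noise standing in for the bias and random outcome described in the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_once(bottleneck, true_dim=3):
    """Stand-in for one IINN training run (assumption, not the thesis code):
    the error has a floor that only drops while the bottleneck is below the
    true dimension, plus a random component from initialization/data split."""
    bias = np.exp(-(min(bottleneck, true_dim) - 1))   # error floor
    return bias + rng.uniform(0.0, 0.3)               # optimization noise

def min_errors(bottleneck_sizes, retrains=10):
    # keep the minimum over retrainings: the closest observed value to the
    # (unknown) global-optimum error for each bottleneck size
    return {nb: min(train_once(nb) for _ in range(retrains))
            for nb in bottleneck_sizes}

errors = min_errors(range(1, 8))
```

Plotting `errors` against bottleneck size reproduces the qualitative picture of Figure 2.5: a steep drop up to the true dimension, then a flat tail whose residual scatter motivates the curve fit below.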
In principle a threshold error value could be used, but for this to work well the IINN would need to be retrained a very large number of times in order to obtain a set of minimum errors that approaches the optimum value. A simple threshold also ignores the information that is present in the minimum error values for bottleneck sizes away from the data dimension. A curve fit of the minimum IINN error values is therefore used to create metrics suitable for analysis and to reduce the influence of variability in the data. From the curve a parameter will be derived that reflects the dimension (below). The curve is restricted to assume positive values less than or equal to the minimum error values. An exponential curve is used because it was found to fit the observed minimum IINN error as a function of bottleneck size well. The use of an exponential curve will be further supported in Section 2.5.1.

Figure 2.5.: Error values and curve fits for synthetic data from Section 2.3, with (a) training and (b) validation results for L1 = 8 and L2 = 8. The dashed vertical lines indicate the locations of nb5. The correct dimension d is 3.

Figure 2.5 shows curves fitted to the minimum error found for each bottleneck size nb that was evaluated for the synthetic data. Because the curve fit averages the contribution of minimum errors for all bottleneck sizes, it improves the computational efficiency of the IINN algorithm; fewer evaluations of the IINN are necessary to achieve a reliable dimension estimate. The following exponential curve is fit to the minimum errors:

\hat{E}(n_b) = b_1 e^{b_2 (n_b − 1)} + a    (2.30)

Here, n_b is the bottleneck size, \hat{E}(n_b) is the error estimate for bottleneck size n_b, and b_1, b_2 and a are the curve parameters.
Equation 2.30 is fit to the minimum errors using the following cost function:

FE = (1/N) \sum_{n_b=1}^{N} ( E(n_b) − \hat{E}(n_b) ) / \hat{E}(n_b)    (2.31)

where FE is the "fit error". In the optimization of the curve fit, the values of \hat{E}(n_b) are restricted to being less than the minimum errors, so each contribution to the sum in Equation 2.31 is positive or zero. The division by \hat{E}(n_b) normalizes the cost function so that the minimum error for each value of n_b receives equal weight. A bounded line search algorithm is used to optimize the curve fit parameter a, which must satisfy 0 ≤ a < min(E(n_b)). Figure 2.6 demonstrates how the parameters b_1 and b_2 are inferred from a given value of a. Even though Equation 2.30 has three parameters, two are redundant, making the optimization a single-DOF problem.

Figure 2.6.: Curve fitting procedure. (a) The solid line is a curve according to Equation 2.30 with b_1 = 1, b_2 = −1 and a = 0.2. The "x" marks indicate points E(n_b) along this curve with a small random component. (b) The "x" marks indicate log(E(n_b) − a) for a = 0.15. The dashed lines indicate linearized candidate curves. Each candidate curve is created by selecting a pair of points determining a straight line which has all remaining points above it. The straight lines translate directly to values for b_1 and b_2. By selecting the candidate curve with the lowest cost, the optimization becomes a single-parameter (a) search.

Table 2.1.: Effect of L1 and L2 size on curve fit parameters

IINN size | training nb5 | training FE | validation nb5 | validation FE
4         | 3.43         | 0.09        | 2.59           | 0.02
8         | 2.54         | 0.28        | 2.73           | 0.22
12        | 2.65         | 0.23        | 2.80           | 0.27

2.4.1.
Parameters extracted from the curve fit

To be able to tabulate the results and allow analysis, representative metrics are needed that describe the curve shape and the quality of the fit. The quality of fit will be indicated with the cost of Equation 2.31. The second metric is the point on the fit curve where the error is reduced to 5% on the normalized curve, obtained by setting b_1 = 1 and a = 0 in Equation 2.30:

\hat{E}^*(n_b) = e^{b_2 (n_b − 1)}    (2.32)

The point at which \hat{E}^*(n_b) is reduced to 5% is given by:

n_{b5} = 1 + ln(5/100) / b_2    (2.33)

The parameter nb5 reflects the shape of the exponential curve without being affected by its scale (b_1) or offset (a). Figure 2.5 shows the results of retraining the IINN 10 times for each bottleneck size for L1 = 8 and L2 = 8.² For the training set nb5 = 2.54 and the fitting error FE = 0.28 (m/s)². For the validation set these values are 2.73 and 0.22 (m/s)² respectively.

Figure 2.7 demonstrates how the IINN is dimensioned. The graph shows the errors after training the IINN 10 times for each bottleneck size, for three different numbers of nodes in the first layers of L1 and L2. Table 2.1 shows the associated values for nb5 and FE. We want to select a size that is sufficiently large to capture the \hat{MC}, but as small as possible to keep the number of adaptable parameters and the associated computational cost low. The results in Table 2.1 show that the shape, indicated by nb5, does not change much when the size is increased from 8 to 12, indicating that 8 nodes is a suitable choice. Both graphs in Figure 2.7 and the results of Table 2.1 show that increasing the bottleneck size beyond 3 or perhaps 4 does not improve model performance, from which we can correctly infer that the dimension of the MC is 3 (in agreement with the 3 parameters that were varied to create the synthetic data in Section 2.3).

² Expressions L1 = q and L2 = r will be used to indicate that there are q nodes in layer L1 and r nodes in layer L2.
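The fitting procedure of Equations 2.30-2.33 (an outer bounded search over the offset a, and an inner exact search over pairwise "candidate curves" in log space, per Figure 2.6) can be sketched as follows. This is a simplified reimplementation under the constraint \hat{E}(n_b) ≤ E(n_b), not the thesis code, and the grid search over a stands in for the bounded line search:

```python
import numpy as np
from itertools import combinations

def fit_exponential(E, n_a=200):
    """Fit E_hat(nb) = b1*exp(b2*(nb-1)) + a to minimum errors E[0..N-1]
    (nb = 1..N), keeping E_hat at or below every data point."""
    nb = np.arange(1, len(E) + 1)
    best = (np.inf, None)
    for a in np.linspace(0.0, 0.999 * E.min(), n_a):  # bounded search over a
        y = np.log(E - a)                             # linearize: log(E - a)
        for i, j in combinations(range(len(E)), 2):
            b2 = (y[j] - y[i]) / (nb[j] - nb[i])      # candidate slope
            line = y[i] + b2 * (nb - nb[i])
            if np.any(line > y + 1e-9):               # must stay below the data
                continue
            b1 = np.exp(y[i] - b2 * (nb[i] - 1))
            E_hat = b1 * np.exp(b2 * (nb - 1)) + a
            FE = np.mean((E - E_hat) / E_hat)         # Equation 2.31 cost
            if FE < best[0]:
                best = (FE, (b1, b2, a))
    return best

# Noise-free exponential data should be recovered nearly exactly
E = 1.0 * np.exp(-1.0 * (np.arange(1, 8) - 1)) + 0.2
FE, (b1, b2, a) = fit_exponential(E)
```

Because every candidate line lies on or below the log-transformed data, each term of the cost is non-negative, matching the constraint stated for Equation 2.31.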
Figure 2.7.: Effect of L1 and L2 size on IINN errors. Training (a) and validation set (b) errors, with the exponential fit to the minimum values for each bottleneck size. The smallest size (4) produces a large bias when compared to L1 = 8 and L1 = 12, while the latter perform similarly.

2.4.2. Worst case: equal power modes

A worst case scenario occurs when the power is equally distributed over the modes. In such a case the error decreases at a constant rate with increasing n_b while n_b ≤ d, and remains constant thereafter. This is a worst case because any other distribution of power, or variance-accounted-for by the IINN, will result in a set of minimum errors that resembles an exponential decay somewhat better. The power attributed by the IINN to each identified mode is a function of both the system under investigation and the particular structure selected for the IINN. The worst-case scenario for the minimum error as a function of the bottleneck size will now be used to refine the estimation of the dimension. Figures 2.8(a) and (b) show two examples, one where d = 3 and one where d = 5. The simulations show that nb5 is not quite the same as the dimension d. Figure 2.8(b) shows that the curve fit changes when the highest value for n_b is increased. The cost function of Equation 2.31 is normalized with the estimated minimum error, so it is natural to assume that the number of bottleneck sizes below and above the actual dimension should be equal to ensure the curve fit results in a dimension estimate near d. For example, if the actual dimension is 4 then the evaluated bottleneck sizes could be 1 to 7 or 2 to 6. To demonstrate this point, a worst case with d = 5 was evaluated where the total number of points included for the curve fit was increased from 6 to 20.
In other words, the exponential curve was fit to n_b values of 1-6, 1-7, 1-8, etc., up to 1-20. Figure 2.9(a) shows the result, with nb5 (marked with "x") plotted versus the number of points. The vertical dashed line indicates the case where bottleneck sizes of 1 to 9 were used and the horizontal dashed line indicates d = 5.

Figure 2.8.: Worst case scenarios: all modes have equal contributions. (a) Linearly decreasing MSE for a system requiring n_b = 3, with nb5 = 2.44, FE = 0.23 (m/s)². (b) Same for n_b = 5. The solid line shows the fit based on bottleneck sizes of 1 to 7, while the dashed line shows the fit for 10 points (1-10).

The dependency of the curve shape on the number of bottleneck sizes above and below d is clear from the decrease of nb5 with an increasing number of points included for the curve fit. Even though nb5 is close to the actual dimension, it has a bias, which is visible in Figure 2.9(a) where the nb5 values are slightly off. To investigate this bias, the worst cases for d varying from 3 to 20 were evaluated. In each case the number of bottleneck sizes varied from 1 to 2d − 1, making the actual dimension the middle value of the included bottleneck sizes. Figure 2.9(c) shows the result with nb5. The result shows that nb5 is slightly biased: for an actual dimension of 5, nb5 is 4.6. A linear regression was performed to provide the dimension estimate:

\hat{d} = 1.062 · n_{b5} + 0.125    (2.34)

The values for \hat{d} are marked with "o" in Figure 2.9(a) and (c) and demonstrate an excellent fit. The simulation results are repeated in Figure 2.10 with \hat{d} marked with a solid vertical line. For d = 3 the results for the training and validation set are \hat{d} = 2.82 and \hat{d} = 3.00 respectively.
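The chain from the fitted decay rate b_2 to the dimension estimate is a direct application of Equations 2.33 and 2.34 and can be computed in two lines (function names are mine; the regression coefficients come from Equation 2.34):

```python
import math

def nb5_from_b2(b2):
    # Bottleneck size at which the normalized curve exp(b2*(nb - 1))
    # has decayed to 5% (Equation 2.33); requires b2 < 0
    return 1.0 + math.log(5 / 100) / b2

def d_hat(b2):
    # Bias-corrected dimension estimate (Equation 2.34)
    return 1.062 * nb5_from_b2(b2) + 0.125

# Example: a decay rate of b2 = -1 puts the 5% point at nb5 close to 4
nb5 = nb5_from_b2(-1.0)
```

Note that plugging the worst-case value nb5 = 4.6 reported above into Equation 2.34 gives 1.062 · 4.6 + 0.125 ≈ 5.01, recovering the actual dimension of 5.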
The results in Figure 2.9(a) show that some care must be taken to ensure that the \hat{d} values are properly "bracketed" by observations, i.e. that the number of observations is the same on either side of the actual dimension. Proper bracketing allows identification of the correct values even for a worst-case mode distribution. Note that in general the MSE versus bottleneck size will be less like the worst cases and more like an exponential form. Consequently, the dependence on proper bracketing by observations is expected to be less important in practice. Nonetheless we will attempt to maintain proper bracketing in our analysis. The quality of bracketing is expressed as the distance of \hat{d} from the middle bottleneck value considered:

n_{bracket} = \hat{d} − (n_{b,min} + n_{b,max}) / 2    (2.35)

Figure 2.9.: Investigation of the dimension estimate from curve fits for worst-case scenarios. (a) Worst case with d = 5, with the dimension estimate as a function of the number of bottleneck sizes used for the curve fit (from 1-6 to 1-20). Indicated are nb5 ("x") and \hat{d} ("o"). The dashed lines indicate the correct bottleneck size (5) and the number of points for which d is the centre value. (b) The FE values associated with (a). (c) Estimated dimension (nb5 ("x") and \hat{d} ("o")) as a function of the actual dimension when the same number of bottleneck sizes is used above and below d. The line indicates the ideal where the estimate equals the actual dimension. (d) The FE values associated with (c).
(e) The value of nbracket plotted as a function of d; shown are the values for when there is one less (diamond), the same (circle), and one more (plus) observation above d as compared to below.

Figure 2.10.: Error values and curve fits for synthetic data from Section 2.3, with (a) training and (b) validation results for L1 = 8 and L2 = 8. The dashed vertical lines indicate the locations of nb5 and the solid vertical lines the locations of \hat{d}. The correct dimension d is 3.

Here n_{b,min} and n_{b,max} are the smallest and the largest bottleneck sizes evaluated. In Equation 2.35 it is assumed that the evaluated bottleneck sizes always include each integer value between the smallest and the largest value. Figure 2.9(e) shows the value of nbracket as a function of the actual dimension for when bracketing is off by 1 and when it is correct. This demonstrates that nbracket can be used to adjust the number of bottleneck sizes included to ensure the dimension estimate is unbiased; when nbracket deviates too much from zero, the number of bottleneck sizes has to be adjusted. From Figure 2.9(e) it can be seen that when bracketing is off by 1, nbracket is larger than 0.7 or less than −0.7. Acceptable values of nbracket are therefore:

|n_{bracket}| < 0.7    (2.36)

The sign of nbracket indicates whether there are too few (nbracket > 0) or too many (nbracket < 0) points included above d. Bracketing values for the results in Figure 2.10 are −1.18 and −1.00 for the training and validation set respectively, indicating correctly that the dimension does not have an equal number of bottleneck sizes below and above it. When repeating the curve fit with n_b from 1 to 5, the values for \hat{d} are 2.69 and 2.77, and for nbracket −0.31 and −0.23.
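The bracketing check of Equations 2.35 and 2.36 is simple to compute; a sketch (function names are mine, and the example assumes Figure 2.10 evaluated bottleneck sizes 1 to 7, which reproduces the reported −1.18 and −1.00 values exactly):

```python
def n_bracket(d_hat, nb_min, nb_max):
    # Distance of the dimension estimate from the middle of the
    # evaluated bottleneck range (Equation 2.35)
    return d_hat - (nb_min + nb_max) / 2.0

def bracketing_ok(d_hat, nb_min, nb_max):
    # Acceptable bracketing per Equation 2.36
    return abs(n_bracket(d_hat, nb_min, nb_max)) < 0.7

# Values from the text: d_hat = 2.82 / 3.00 over sizes 1..7 is poorly
# bracketed; refitting d_hat = 2.69 over sizes 1..5 is acceptable
flag_before = bracketing_ok(2.82, 1, 7)
flag_after = bracketing_ok(2.69, 1, 5)
```

A negative nbracket here correctly signals too many evaluated sizes above d, which is why the refit over sizes 1 to 5 brings the metric inside the ±0.7 band.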
Finally, the values for FE plotted in Figures 2.9(b) and (d) show that the best estimate does not minimize the fitting error with respect to the number of points included. FE is used as the cost with which the exponential curve fit is optimized and as such is an indicator of the quality of fit, but it is not a good indicator of the correct number of bottleneck sizes or of bracketing. Its purpose is the optimization of the exponential curve fit, and as a general indicator of fit quality it serves to detect any unusual outcomes.

2.4.3. Confidence intervals

The dimension estimate \hat{d} needs an indicator of its variability. To this end the bias-corrected accelerated (BCa; Davison and Hinkley [1997], DiCiccio and Efron [1996]) confidence intervals will be calculated. The curve fit procedure that yields \hat{d} is in effect a point estimation of a random process. The random process outcome depends on the initial weight values selected for the IINN and the distribution of data between the training and the validation set, both of which are random.

The confidence intervals are calculated based on a bootstrapped representation of the distribution of \hat{d}. The bootstrap distribution is created by re-sampling with replacement from the observations, in our case the estimation errors of the IINN. The re-sampling is done per bottleneck size; each bottleneck size is treated as a separate unpaired category. After the re-sampling, the curve fit is applied to estimate the dimension. This is repeated R times to yield a bootstrap distribution. The 90% confidence intervals will be reported for the validation data, using R = 2000 bootstrap repetitions. This number is sufficient for a 90% confidence interval (DiCiccio and Efron [1996]). For a detailed discussion of the bootstrap method please refer to the literature (Davison and Hinkley [1997], DiCiccio and Efron [1996]). The confidence interval expresses the variability in the observations.
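A stripped-down version of this resampling scheme can be sketched as follows; it uses simple percentile intervals rather than the BCa correction used in the thesis, and `estimate_dimension` is a hypothetical stand-in for the full curve-fit estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_dimension(errors_per_nb):
    """Stand-in for the curve-fit dimension estimate: here simply the first
    bottleneck size whose minimum error is within 10% of the overall floor
    (assumption; the thesis fits Equation 2.30 instead)."""
    mins = np.array([min(e) for e in errors_per_nb])
    return 1 + int(np.argmax(mins <= 1.1 * mins.min() + 1e-12))

def bootstrap_ci(errors_per_nb, reps=2000, alpha=0.10):
    # resample with replacement within each bottleneck size (unpaired
    # categories), re-estimate, and take percentile bounds
    stats = []
    for _ in range(reps):
        resampled = [rng.choice(e, size=len(e), replace=True)
                     for e in errors_per_nb]
        stats.append(estimate_dimension(resampled))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# synthetic "observed" errors for sizes 1..7: the floor is reached at size 3
errors = [1.0 + 0.1 * rng.random(10) if nb < 3 else 0.2 + 0.02 * rng.random(10)
          for nb in range(1, 8)]
lo, hi = bootstrap_ci(errors)
```

The per-category resampling mirrors the description above: each bottleneck size contributes its own pool of retraining errors, and the curve fit (here its stand-in) is rerun on every resampled set.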
This variability is directly related to the number of times the IINN is retrained for each bottleneck size. The goal of repeating the IINN training is to increase the likelihood that the nonlinear model converges on a solution closer to the global optimum. With more retraining we will therefore have minimum error values closer to the global IINN minimum error, and the variability in the dimension estimate will decrease. This means that if the BCa confidence interval is too large, it can be reduced by increasing the number of times the IINN is retrained for each bottleneck value. In the analysis of the experimental data the initial number of times the IINN is retrained is set to 10, and then increased in steps of 5 until the confidence interval is less than 1 in length. A confidence interval length of less than 1 allows separation of integer dimension estimates.

2.5. Fits and tabulations

2.5.1. Why fit an exponential function?

Why is the exponential function of Equation 2.30 appropriate for fitting the modelling error as a function of bottleneck size? Apart from the apparent good fit, for example in Figure 2.7, the following observations support the assumption. The IINN output for a bottleneck size of 1 can be considered as operating on the principal mode of the MC, i.e. the mode which explains the largest portion of the data. The weights are adapted to reduce the error at the output as much as possible, but the information flowing from L1 to L2 is always restricted to a single dimension. When the bottleneck size is increased, an extra mode is allowed and the error will decrease. As the number of allowed modes is further increased, the rate of error reduction will slow down, since the principal modes account for most of the variance in the output. The same happens when performing principal component analysis on a data set; the eigenvalues are directly related to the variance of the data explained by the associated eigenvectors.
The decreasing trend with a slowing rate is a basic characteristic of the exponential decay of Equation 2.30. To further support that the exponential curve is appropriate, the following will demonstrate that, given certain assumptions, in the limiting case the power distribution in the modes results in an exponential decay of error with the size of the bottleneck. Consider the following example where the modes are N statistically independent signals x_i(t) with a zero-mean and unit-variance amplitude distribution. The process output y(t) is the weighted sum of the modes:

y(t) = \sum_{i=1}^{N} a_i x_i(t)    (2.37)

The weights a_i are drawn from a N(0, 1) distribution. The average power of the output is:

\overline{y^2(t)} = \sum_{i=1}^{N} a_i^2    (2.38)

where \overline{y^2(t)} is the expected value of the power. The principal mode is the one with the largest a_i^2 value, then the next largest, and so on. For convenience we will assume that the a_i^2 are sorted in decreasing order. If we remove the principal mode the remaining power is the error for a bottleneck of size 1:

E(1) = \sum_{i=2}^{N} a_i^2    (2.39)

or in general:

E(n_b) = \sum_{i=n_b+1}^{N} a_i^2    (2.40)

Figure 2.11 shows that the resulting curve for a simulation of 1000 weight factors matches an exponential decay well. So certainly in the limiting case of a normal distribution of the parameters, an exponential decrease in the error is reasonable. The argument at the start of this section means to say that, no matter what the precise distribution of power in the modes, the result will likely resemble an exponential decay. To further support this, Equation 2.40 was evaluated 500 times for N = 6 and for 4 different distributions of a_i. Each of the 500 power mode sets of 6 points was generated by generating 6 random values, sorting them in descending order, and applying Equation 2.40. The results are shown in Figure 2.12 and demonstrate that the exponential curve is a good fit for the mean of each of the 4 distributions of a_i.
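Both simulations above (the 1000-weight limiting case of Figure 2.11 and the averaged N = 6 case of Figure 2.12) follow directly from Equations 2.37–2.40. A minimal sketch, using only the normal distribution for the weights:

```python
import numpy as np

rng = np.random.default_rng(42)

def residual_curve(a):
    """Normalized residual power E(n_b)/E(0) of Equation 2.40 for weights a,
    after sorting the mode powers a_i^2 in decreasing order."""
    a2 = np.sort(np.asarray(a) ** 2)[::-1]
    total = a2.sum()
    # E(0) = total power, E(N) = 0; principal modes removed cumulatively.
    return (total - np.concatenate(([0.0], np.cumsum(a2)))) / total

# Limiting case (as in Figure 2.11): 1000 weights from N(0, 1).
E_limit = residual_curve(rng.standard_normal(1000))

# Finite case (as in Figure 2.12): 500 sets of N = 6 weights, averaged.
E_small = np.mean([residual_curve(rng.standard_normal(6)) for _ in range(500)],
                  axis=0)
```

`E_limit[n_b]` is the normalized residual error after removing the n_b principal modes; fitting a decaying exponential to this curve reproduces the kind of reference comparison shown in Figure 2.11.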
Both these limit cases were made with a large number of weights or samples, but it should be noted that they were also created without any underlying process, just with random numbers.

Figure 2.11.: Residual normalized error as a function of normalized mode index from a simulation with coefficients a_i from a N(0, 1) distribution. The second curve shows the fit of e^{-x/0.18} as a reference.

Figure 2.12.: For 6 consecutive points the mean and standard deviations of the power modes are shown based on 500 simulations. The continuous curve is the exponential fit to the mean data points. The distributions used for a_i are: (a) normal N(0, 1), (b) normal-squared N^2(0, 1), (c) uniform on [0, 1], and (d) uniform-squared on [0, 1].

These simulations demonstrate that any decrease that has some random component will converge towards an exponential shape. The central limit theorem supports the assumption that mode contributions have a normal amplitude distribution. According to this theorem the mean of n random processes with identical distributions tends to a normal distribution as n → ∞. Muscle force is the summation of contributions from many motor units. The activity of motor units can be modelled as a statistical process known as a renewal process (Clamann [1969]). Such combining of stochastic entities occurs throughout the neuromuscular system. The results presented in Chapter 4 show a good fit of the exponential curve to the minimum error values.

2.5.2. Robustness

The minimum MSE values for a data set can be quite erratic, see for example Figure 2.7. This variability is reduced in part by repetition of IINN training, because with repetition the minimum error for a given bottleneck size approaches the true minimal error. It is also reduced by an ensemble-averaging effect resulting from using multiple values to fit an exponential curve with a single adjustable parameter. To demonstrate the effect of noisy error values, the worst-case mode distribution with the dimension set at 4 was used. This was then corrupted by adding noise of the form:

E'(n_b) = E(n_b) + n_\sigma^2    (2.41)

Here E(n_b) is the uncorrupted error for bottleneck size n_b and n_\sigma is a random value drawn from the normal distribution with zero mean and variance \sigma^2. Figure 2.13 shows the results. The top row shows that despite the perturbation the most likely values remain very close to 4. Note also that the inevitable “smearing” of the distribution with increased noise results in a higher average estimate, so it is more likely to over-estimate the dimension when noise is present. This is also evident from the fact that the lower bound on the 90% confidence interval remains relatively constant. The confidence interval was calculated by taking the 5th and the 95th percentiles. Figure 2.13(f) shows how the curve fit makes the dimension estimate more robust: the outlier for n_b = 7 has little influence on the shape of the curve.

2.5.3. Quality of the curve fit

An indicator is needed to express to what extent the fitted curve represents all the errors, not just the minimum errors for each value of n_b. This is necessary because the IINN may simply not find any structure in the data while the curve fit nonetheless produces some sort of result. Consider Figure 2.14, where the distribution of errors does not demonstrate a real trend.
The data in Figure 2.14 was created by applying the IINN to the data from Section 2.3 with the target data replaced by normally distributed pseudo-random values with the same variance. Figure 2.14 shows the results for this exercise, with L1 = 8 and L2 = 8. The FE values are 0.005 and 0.009 respectively. These values are small because each contribution is normalized by the estimated error, which is large here (see Equation 2.31).

Figure 2.13.: The effect of increased noise on d̂ for 10,000 simulations. The top row (a,b,c) shows the distribution of the dimension estimate for increasing noise levels with the vertical dashed line indicating the actual dimension of 4. The solid vertical lines indicate the 95% confidence intervals, which are [3.7, 6.1] for (a), [3.6, 7.0] for (b), and [3.4, 8.7] for (c). The bottom row (d,e,f) shows associated representative examples with the curve fit, giving an indication of the additive noise levels.

The d̂ values are 12.41 and 3.52, but these values are without meaning, as it is clear from the distribution of errors that there is no real structure to be modelled here. When there is no structure in the data, the IINN errors for different values of n_b should tend to the same mean, assuming that the IINN results are not biased by the increase in the number of weights with an increase in n_b. Figure 2.14 indicates that such a bias is not noticeably present. In such a case the ideal curve fit is a horizontal line through the smallest error value.
An indicator of “fit quality” is now proposed, which is the ratio of the normalized distances of all errors for all bottleneck sizes considered with respect to the horizontal line and to the exponential curve. The normalized distance for the exponential curve resembles the cost function of Equation 2.31 but is taken over all errors, not just the minimum ones:

FE_{exp} = \frac{1}{NM} \sum_{n_b=1}^{N} \sum_{i=1}^{M} \frac{\left| E_i(n_b) - \hat{E}(n_b) \right|}{\hat{E}(n_b)}    (2.42)

where FE_{exp} is the fit error over all observations for the exponential curve, E_i(n_b) is the ith observation for bottleneck size n_b and M is the number of observations per bottleneck size.

Figure 2.14.: Curve fits to IINN errors with a random target value. The training (a) and validation (b) set errors, with the exponential curve fit to the minimum values.

Similarly a normalized distance for the horizontal line can be formulated:

FE_{hor} = \frac{1}{NM} \sum_{n_b=1}^{N} \sum_{i=1}^{M} \frac{\left| E_i(n_b) - \min_{i,n_b} E_i(n_b) \right|}{\min_{i,n_b} E_i(n_b)}    (2.43)

where FE_{hor} is the fit error over all observations for a horizontal line through the minimum error and \min_{i,n_b} E_i(n_b) is the overall minimum error value. The fit quality FQ is given by:

FQ = \frac{FE_{hor}}{FE_{exp}}    (2.44)

FQ expresses how many times better the exponential fit matches the data than the horizontal line. For FQ > 1 the exponential fit is better and for FQ < 1 the horizontal fit is more appropriate. Since the exponential curve is optimized to fit the minimum errors it is expected to fit somewhat better in general, and so typically FQ will be larger than 1. The FQ values for the results in Figure 2.14 are 1.03 for the training set and 0.98 for the validation set, indicating that the fit might as well be a straight line.
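Equations 2.42–2.44 translate directly into code. In this sketch the observed errors are assumed to be arranged as an M × N array (M observations per bottleneck size, N sizes) and `E_hat` holds the fitted exponential value for each size; the absolute relative distance follows the form of Equation 2.31.

```python
import numpy as np

def fit_quality(errors, E_hat):
    """Fit-quality ratio FQ = FE_hor / FE_exp (Equations 2.42-2.44).

    errors: 2-D array, errors[i, j] = i-th observed error for the j-th
            bottleneck size.
    E_hat:  fitted exponential value for each bottleneck size (length N).
    """
    errors = np.asarray(errors, dtype=float)
    E_hat = np.asarray(E_hat, dtype=float)
    M, N = errors.shape
    # Normalized distance of every observation to the exponential curve.
    FE_exp = (np.abs(errors - E_hat) / E_hat).sum() / (N * M)
    # Normalized distance to a horizontal line through the overall minimum.
    e_min = errors.min()
    FE_hor = (np.abs(errors - e_min) / e_min).sum() / (N * M)
    return FE_hor / FE_exp
```

For error sets that actually follow the fitted exponential, FQ comes out well above 1; for trendless errors it approaches (or drops below) 1, matching the interpretation given in the text.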
In contrast, the results from Figure 2.10 have FQ values of 4.86 and 4.34 respectively.

2.6. Interpretation of modelling results

The final step is to relate the M̂C values the IINN model produces in the bottleneck to actual parameters. This interpretation will be done using results for the example with synthetic data developed in Section 2.3 and after. The IINN with L1 = 8 and L2 = 8 is selected, based on Occam’s razor (selecting the simplest solution that is sufficient), with a bottleneck size of 3. The IINN with L1 = 12 and L2 = 12 does not improve on the smaller and therefore simpler model (see Figure 2.1). The IINN with the lowest validation error is used.

Figure 2.15.: Sample M̂C for synthetic data. (a) The M̂C for the first 10 movements. (b) The corresponding normalized control parameters K, M and target position. The M̂C does not correspond to the control parameters in any obvious way, nor is it expected to, since the IINN makes a blind inference to create some equivalent representation.

Figure 2.15 shows a sample of M̂C produced by the selected IINN as well as the corresponding control parameters (K, M and target position) used when generating the data. The M̂C and the control parameters do not have any obvious correlation.

To demonstrate that the M̂C is an equivalent representation of the control parameters that were used to generate the data, a nonlinear regression was performed. For this regression a 2-layer neural net was used, with the first layer nonlinear and the second layer linear, identical to L1 in Equations 2.22 and 2.25. The neural net had 8 nodes in the first layer, 3 inputs (M̂C) and 3 outputs (the targets are the control parameters K, M and target position). The total number of weights was 123.
The model was optimized using the Levenberg-Marquardt optimization algorithm. Only 200 patterns were available, so no validation set was used, since the number of patterns should at least be larger than the number of weights (200 equations with 123 unknowns). Training was terminated when the training set error did not decrease for 200 iterations or when 5000 iterations had been performed. For those cases where 5000 iterations were reached, the training errors were inspected visually to ensure the error was asymptotically close to the final value. Training was repeated 10 times with pseudo-random initial weights and the regression with the lowest error was selected.

The results of this nonlinear regression are presented in Figure 2.16. The stiffness and target values are well predicted from the M̂C values, but the prediction of mass is poor. An explanation is that for a relatively high stiffness the sensitivity to the mass value is small. This does not, however, invalidate the estimated value for the bottleneck size, because for some movement trials Figure 2.16 does show a clear correlation between M̂C and mass, indicating that even if mass is not needed to explain some of the data, it is needed to explain the complete set.

(Panel r^2 values: K–K 0.84, M–M 0.43, X–X 0.97; all off-diagonal r^2 ≤ 0.07.)

Figure 2.16.: Correlations between actual (horizontal axes) and regressed (vertical axes) control parameters, all normalized. Control parameters are stiffness (K), mass (M) and target position (X). The correlation r^2 is given above each graph. The main diagonal shows how well the control parameters can be predicted from the M̂C values produced with the IINN.
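The regression network's forward pass can be sketched as follows (nonlinear tanh first layer, linear output layer). The shapes shown (8 hidden nodes, 3 inputs, 3 outputs) follow the text; the Levenberg-Marquardt training loop and the thesis's exact 123-weight layer structure are not reproduced here, so this is only an architectural sketch.

```python
import numpy as np

def regression_net(x, W1, b1, W2, b2):
    """Forward pass of the interpretation network: nonlinear (tanh) first
    layer, linear second layer, mapping a 3-element M-hat-C vector to the
    3 control parameters (K, M, target position)."""
    h = np.tanh(W1 @ x + b1)   # 8 hidden nodes
    return W2 @ h + b2
```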
The off-diagonal graphs show that this correlation is low for unrelated parameters, as it should be.

2.7. Summary

In this chapter the Input Inference Neural Network (IINN) was introduced as a tool to infer the dimension of the Motor Command (MC). The MC dimension is found by manipulating the IINN model complexity through control of the bottleneck size. The use of an exponential curve fit to the minimum errors for each bottleneck size was supported and its robustness to variable error values demonstrated. The exponential curve fit provides the final MC dimension estimate d̂. The IINN was validated using synthetic data and it was shown that the motor command estimate M̂C correlates with the control parameters that were varied to generate the synthetic data.

3. Data collection

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.
Sir Arthur Conan Doyle

In this chapter the experimental setup used to gather data is explained. Section 3.1 explains the rationale for the experiment design. In Section 3.2 the 5-bar manipulandum is introduced, the haptic robot used to investigate neuromotor control. Section 3.3 presents the experimental setup and data processing is presented in Section 3.4.

3.1. Experiment design

The development in Section 2.1 produced Equation 2.17, which is repeated here:

M̂C = \Phi(x^+, F^+_{ext})    (3.1)

This expression demonstrates that the time-histories of hand position and contact force capture the motor behaviour and provide an equivalent expression of the MC. Therefore, during experiments, both force and position need to be recorded. Note that instead of position, velocity or acceleration could be used. Like many previous experiments, we investigate hand movement in the horizontal plane, with the hand and elbow at shoulder level (see e.g. Bhushan and Shadmehr [1999], Burdet et al.
[2001], Gomi and Kawato [1996], Hodgson and Hogan [2000], Scheidt et al. [2001]). When a subject is asked to move from a starting location to a given end-point, the hand trajectory typically follows a near-straight line between the two points. If a force field is introduced that causes a subject to deviate from the straight hand path, he or she will adapt to compensate by generating force during movement so that the hand path returns to a straight line. Adaptation of interactive movement to novel tasks occurs through adaptation of both force and mechanical compliance at the hand (Franklin et al. [2003], Hogan [1985]). The experiment was designed to elicit a set of responses that explores all degrees of freedom available for the expression of the MC. The IINN performs a type of system identification and its success depends on having a sufficiently rich data set. Two types of challenges to the motor adaptation process were incorporated in the experiments: force fields and force pulses. The force fields were introduced to elicit adaptation of the neuromotor system. The force pulses were introduced to excite the mechanical compliance of the hand. The type of force field was switched after a random number of movements. The variation of force fields avoids complete adaptation of the subject, but because the subject remains in a given field for a number of consecutive movements, partial adaptation can occur. Scheidt et al. [2001] demonstrated that adaptation occurs even when force fields are switched from one movement to the next. Allowing partial adaptation is intentional, because subjects might apply motor behaviours that would otherwise not be utilized. The varying force fields were introduced to create a sufficient richness in the strategies with which subjects handle the environment. The varying field was one of three types: the null field, positive curl, and negative curl. The null field allowed free movement with very little effort.
The curl field generated a velocity-dependent force at 90 degrees to the movement direction:

F_{field}(t) = a \begin{bmatrix} 0 & 13 \\ -13 & 0 \end{bmatrix} \frac{\mathrm{Ns}}{\mathrm{m}} \, v(t)    (3.2)

For a positive curl field a > 0 and for a negative curl field a < 0. Depending on the strength of a subject, |a| was set to either 1 or 0.5. The number of consecutive movements in the same field was a pseudo-random value from a uniform distribution from 3 up to and including 7. To allow investigation of the compliance of the arm, force pulses were introduced. The square force pulses were 100 milliseconds long, with a pseudo-random amplitude drawn from a uniform distribution on [−20, 20] N and a pseudo-random direction from a uniform distribution on [−π, π] radians. The random disturbances had a high bandwidth, assumed to be sufficiently rich to allow modelling of the hand compliance. The perturbations were applied at a location along the straight line connecting origin and target, between the 10% and 90% distance locations. The perturbation location was chosen as a pseudo-random value from a uniform distribution. The analysis does not directly estimate the hand compliance; rather, its effect on motor behaviour is modelled by the IINN.

3.2. The 5-bar manipulandum

A 5-bar parallel manipulandum was developed for this research¹. Its dimensions were optimized to produce an optimal workspace (Hodgson [1996]). As part of the current research a controller was developed using Matlab® Real-Time Workshop (Mathworks Inc.) with a digital signal processing (DSP) board (DS1102, dSpace Inc.). The controller uses a reference model to generate the ideal behaviour and the control loop is closed with a single velocity gain. Root-locus plots were developed for varying control gain, as well as for the desired inertia and damping of the reference model. The combination of a reference model and a single tuning parameter allowed for the straightforward design of a robust impedance controller.
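The curl field of Equation 3.2 and the perturbation pulse described in Section 3.1 can be sketched as below. The 13 Ns/m gain, the ±20 N amplitude range and the [−π, π] direction range are taken from the text; the function names are illustrative.

```python
import numpy as np

B = np.array([[0.0, 13.0],
              [-13.0, 0.0]])          # curl gain matrix, Ns/m

def field_force(v, a):
    """Velocity-dependent curl force of Equation 3.2; a > 0 gives the
    positive curl field, a < 0 the negative one."""
    return a * B @ np.asarray(v, dtype=float)

def perturbation_pulse(rng):
    """One 100 ms square force pulse: amplitude uniform on [-20, 20] N,
    direction uniform on [-pi, pi] radians."""
    amp = rng.uniform(-20.0, 20.0)
    ang = rng.uniform(-np.pi, np.pi)
    return amp * np.array([np.cos(ang), np.sin(ang)])
```

Because the gain matrix is skew-symmetric, the field force is always perpendicular to the hand velocity, so the curl field itself does no work on the hand.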
Details of the robust and simple impedance controller (RSIC) development are included in Appendix A.

¹ The 5 bars refer to the 5 linkages in the robot, see Figure A.1.

Figure 3.1.: Experimental setup.

The reference model of the impedance controller is expressed in Cartesian workspace coordinates, while position feedback comes from the resolvers of the two motors. Accurate knowledge of the link lengths is therefore required. In Appendix B a method developed by O’Brien et al. [2000] was adapted to calibrate the link lengths.

3.3. Experimental setup

Seven subjects participated in this experiment and are labelled A to G in the text. All participants were healthy without any known neuromotor dysfunction, past or present. All participants were right-handed and performed the experiments with their dominant hand. Subjects were fully informed about the nature of the experiments and no deception was involved or required. The research was approved by the University of British Columbia ethics board. During experiments subjects were seated in a chair, grasping the manipulandum handle with their dominant hand. The elbow was supported in a sling suspended from a high ceiling to avoid fatigue in the shoulder muscles. The chair was adjustable in height and the shoulder was positioned so that the entire arm moved in a horizontal plane. The wrist was fixed with an orthopaedic wrist support (see Figures 3.1, 3.2 and A.2). Subjects were instructed to remain with their backs against the seat during the experiments. Above and behind the manipulandum a 17 inch flat screen monitor was used to display the position of the hand as a filled red circle. The screen position was updated at 20 Hz. Subjects were given as much time as desired to familiarize themselves with the device. During familiarization the manipulandum implemented the null field, which allowed free movement with very little effort.
During experiments each target was displayed on the screen as a blue square. As soon as subjects started to move, the square would fill with green from the centre outwards at a constant rate. Subjects were instructed to make a single movement, stop on the target just as it filled completely, and then remain at this location until a new target appeared.

Figure 3.2.: The diagram shows the manipulandum above, with the subject below, holding the handle approximately in the centre of the workspace. The vertical black bar indicates the 30 cm straight-line segment along which movement targets were selected.

Movement was considered to have started when the tangential hand speed was above 15.6 mm/s. A target was considered acquired when the subject remained within a radius of 33 mm of the correct location for at least 100 milliseconds. After some practise subjects were able to complete movements with consistent timing. To encourage and entertain participants, a score counter was displayed on the screen; 3, 2 and 1 points were awarded for being within 10, 50 and 100 milliseconds of the target time respectively. Movement sequences were generated with a pseudo-random number generator such that the targets were uniformly distributed, subject to the constraint that the movement distance be at least 13 cm. Movement targets were all located on a straight line of 30 cm in the middle of the workspace, at a 90° angle with the axis through the subjects’ shoulders (see Figure 3.2). Movement times (Tm) of 0.5 seconds (“slow”) and 0.3 seconds (“fast”) were used in two experiments. Note that although fast movements will on average result in a higher velocity, the distance varies in each experiment, so peak velocity varies as well. After subjects had familiarized themselves with the equipment, experiments were started. Each experiment consisted of 20 practise movements in the null field and 200 movements where the field was varied.
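The target-sequence constraint above (targets uniform on the 30 cm line, consecutive targets at least 13 cm apart) can be met with simple rejection sampling; this sketch is illustrative and the function name is hypothetical.

```python
import numpy as np

def generate_targets(n, line_length=0.30, min_dist=0.13, seed=None):
    """Draw n target positions (metres along the target line) uniformly,
    rejecting candidates closer than min_dist to the previous target."""
    rng = np.random.default_rng(seed)
    targets = [rng.uniform(0.0, line_length)]
    while len(targets) < n:
        cand = rng.uniform(0.0, line_length)
        if abs(cand - targets[-1]) >= min_dist:
            targets.append(cand)
    return np.array(targets)
```

Under the distance constraint the marginal distribution of target positions is only approximately uniform, which is consistent with the text's phrasing of "uniformly distributed while subject to the constraint".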
Between the 20 practise movements and the 200 varying-field movements, and between experiments, there were rest periods of at least 2 minutes and as long as desired, to avoid fatigue.

3.4. Data collection and processing

Force in the horizontal plane was measured with a force sensor (“Mini” sensor, 12 bit resolution, ATI Industrial Automation Inc.) embedded in the handle of the manipulandum. Position was derived from the encoder feedback of the direct-drive motors (Dynaserv DR1060B, Parker-Compumotor). The encoders, with 507904 pulses/revolution, were used to calculate Cartesian hand coordinates using the forward kinematic equations. In the middle of the workspace this gives a theoretical Cartesian resolution better than 10 µm. Hand position and force, as well as the forces generated by the field and used for perturbation, were sampled at 1 kHz and stored. Hand velocity was then derived from hand position and all data was re-sampled at 100 Hz for the analysis. The spectra of the sampled data were inspected to ensure there was no aliasing. The IINN uses hand position, hand velocity, and the force recorded by the force sensor (see Section 2.2). An interactive tool was used to identify when each movement began and ended, and to mark the moment in time where the perturbation pulse was applied. These processed movement segments were then used for the IINN analysis, as described in Section 2.2.2 and later sections of Chapter 2.

3.5. Summary

This chapter presented the methods for acquisition and processing of data describing human motor behaviour during a point-to-point task in a changing environment. The use of changing force fields and force perturbations was supported.

4. Results

Nothing is as simple as we hope it will be.
Jim Horning (Horning [2005])

The results of data collection and analysis are presented in this chapter. Section 4.1 provides samples of recorded trajectories and forces and serves to provide a qualitative impression of the collected data.
The remaining sections present the results of applying the analysis methods presented in Chapter 2. In Section 4.2 the IINN is applied in order to estimate the MC dimension. In the same section the influence of target movement time is investigated. The ability of the IINN to predict trajectories is quantified in Section 4.3, and in Section 4.4 it is shown that the inferred MC contribution to trajectory prediction is statistically significant. The correlation between the estimated MC and the parameters describing each movement task, as well as parameters describing the force field during the trial, is investigated in Section 4.5. Section 4.6 investigates whether the MC estimate supports a movement-to-movement type of learning process.

4.1. Trajectories and forces

Figure 4.1 shows the position and tangential velocity of the hand during three consecutive movements for subject A with Tm = 0.5 seconds (slow movement). The first movement shown is in the null-field and is the last movement before the first field switch. This movement in the null-field shows only a minor deviation in the x-direction, and the tangential velocity has the bell-shaped curve typical for planar hand movements. The second and third movements are in the negative curl field and their velocity profiles show evidence of segmentation near the end, where the velocity first drops to zero, after which a second corrective movement is added to reach the target. Figures 4.2, 4.3 and 4.4 show the associated force perturbations, the force field, and the force exerted by the subject’s hand on the manipulandum, respectively. Figure 4.4(a) shows the force required to move in the null-field. From comparison of the forces in the null-field to the forces in the curl field in Figures 4.4(b) and 4.4(c), it can be observed that the curl field and perturbation force dominate force generation at the hand.
In particular, Figure 4.4(c) shows how the application of a force perturbation results in a reaction force from the subject’s hand, causing an instant realignment of the force vectors to oppose the external force.

Figure 4.1.: Three movement trajectories of subject A (slow), the first in the null-field and the second and third in the negative curl field (see Section 3.3). (a) Shows movement in the x-direction, (b) movement in the y-direction, and (c) the tangential velocity.

Figures 4.5, 4.6 and 4.7 show additional sample trajectories for subject E, but with Tm = 0.3 seconds (fast movements). Again the effect of the perturbation is clearly visible in Figure 4.5. In Figures 4.5(b), 4.6(b) and 4.7(b) the force perturbation momentarily cancels out both the force generated by the field and the force generated by the subject’s hand. When comparing Figures 4.3 and 4.6 the difference between the negative and the positive curl field can be seen. The movements of Figure 4.3 are in the negative curl field, which causes a force to the left for an upward movement and a force to the right for a downward movement. The movements of Figure 4.6 are in the positive curl field, and the deviating forces are in the opposite direction. The forces generated at the hand of the subjects show the same general characteristics observed by other researchers (e.g. Bhushan and Shadmehr [1999], Shadmehr and Mussa-Ivaldi [1994]). The forces point towards an imaginary straight-line path between start and end-point. In the curl force fields, and when not significantly perturbed by a force pulse, the trajectories follow a path deviating from a straight line and end in a hook-shaped path when approaching the target (e.g. Figure 4.7(c)).

4.2. Motor command dimension

4.2.1. Curve fitting results

Based on preliminary investigations, the initial range of bottleneck sizes was set to 1 to 7 and the IINN was retrained 10 times for each bottleneck size (see Section 2.4.3). The range of bottleneck sizes and the amount of retraining were adjusted as needed for the final results below.

Figure 4.2.: The same 3 trajectories for slow movement of subject A as Figure 4.1. The solid line is the movement trajectory and the dotted line indicates the other two trajectories. Each movement start is indicated with a small circle and each end point with an “x”. The target area is indicated with the larger circle. Each movement is highlighted, with the 100 millisecond force pulse indicated with arrows. In (a) the last movement in the first null-field, (b) the next movement and the first in the negative curl field, (c) the second movement in the negative curl field. The influence of the perturbation can be seen in (c), where the movement is slowed near the movement start. This is also visible in Figure 4.1(c), where the tangential velocity profile is bimodal.

Figure 4.3.: The same 3 trajectories for slow movement of subject A as Figures 4.1 and 4.2. The arrows indicate the force generated by the negative curl field. The first movement shown is in the null-field. In (b) and (c) the divergent forces due to the negative curl field can be seen.

Before applying the IINN to all the data, the number of nodes in the input layers of L1 and
(b)  (c)  0.9  0.9  0.9  0.8  0.8  0.8  0.7  0.7  0.6  0.6  0.6  0.5  0.5  0.5  10 N  Xy [m]  (a)  0.7  0.1  0.15  0.2  0.25  0.1  0.15  0.2  0.25  0.1  0.15  0.2  0.25  Xx [m]  Figure 4.4.: The same 3 trajectories for slow movement of subject A as Figures 4.1, 4.2 and 4.3. The arrows indicate the force produced by subject A as measured with the force sensor. When comparing to Figures 4.2 and 4.3 the measured force opposes the forces due to the field and the perturbation. The forces required to move in the null-field are shown in (a).  50  4. Results (b)  (c)  0.9  0.9  0.8  0.8  0.8  0.7  0.7  0.6  0.6  10 N  Xy [m]  (a) 0.9  0.7  0.6  0.5 0.1  0.175  0.25  0.5 0.1  0.175  0.25  0.5 0.1  0.175  0.25  Xx [m]  Figure 4.5.: Three movements for subject E (fast). In (a) the last movement in the first nullfield, (b) next movement and the first in the positive curl field, (c) second movement in the positive curl field. The arrows indicate the force pulse perturbations. The movement in the null-field is a nice straight-line path. (b)  (c)  0.9  0.9  0.8  0.8  0.8  0.7  0.7  0.6  0.6  10 N  Xy [m]  (a) 0.9  0.7  0.6  0.5 0.1  0.175  0.25  0.5 0.1  0.175  0.25  0.5 0.1  0.175  0.25  Xx [m]  Figure 4.6.: The same 3 trajectories for fast movement of subject E as in Figure 4.5. The arrows indicate the force generated by the positive curl field. The force field strength was set to half the level used for other subjects to accommodate the subject. The divergent forces due to the positive curl field are visible in (b) and (c). Comparing to the negative curl field of Figure 4.3, the forces are in the opposite direction for a movement in the same direction.  51  4. Results (b)  (c)  0.9  0.9  0.9  0.8  0.8  0.8  0.7  0.7  0.6  0.6  10 N  Xy [m]  (a)  0.7  0.6  0.5 0.1  0.175  0.25  0.5 0.1  0.175  0.25  0.5 0.1  0.175  0.25  Xx [m]  Figure 4.7.: The same 3 trajectories for fast movement of subject E as in Figures 4.5 and 4.6. 
The arrows indicate the force produced by subject E as measured with the force sensor. During the second movement (b) the force pulse appears to momentarily cancel the force field in the latter half of the movement. The third movement (c) shows how the forces at the hand re-align when the force pulse is applied.  L2 have to be selected. Figure 4.8(a) and (b) show the MSE and curve fits for 3 different IINN configurations for subject A during slow movements and Table 4.1 lists the associated metrics. Only the results for the validation data are reported here and used for further analysis; these are considered to reflect the structure in the data more accurately than the training data since they were not used (directly) to adapt the IINN weights. The results for the training data were similar to those for the validation data. When comparing the performance of the three IINN ˆ This metric configurations the only indicator that is important is the dimension estimate d. is uniquely associated with the shape of the exponential curve. At a first glance the curves in Figure 4.8 appear quite similar for different numbers of nodes in the first layer of L1 and L2. As expected the error depends on the number of adaptive parameters in the IINN and hence there is a slight decrease in overall error when L1 and L2 are increased. Figure 4.8(c) shows the dimension estimates for the different numbers of nodes with the 90% confidence interval (CI in Table 4.1). This shows that while the dimension estimate with 8 nodes is significantly different from the other two, the estimates based on an IINN with 10 and 12 nodes are not. Significant here means an estimate which is outside the confidence interval, for example in Figure 4.8(c), the dimension estimates for 10 and 12 nodes are outside the confidence interval of the estimate for 8 nodes. For a one-sided test this would correspond to a p-value of 0.05. 
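The dimension estimate d̂ is derived from the shape of an exponential fitted to validation MSE as a function of bottleneck size. A minimal sketch of such a fit follows; the 5%-of-amplitude criterion used here for nb5 is an illustrative stand-in for the exact estimator of Section 2.4.3, which is not reproduced in this chapter:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_error_curve(nb, mse):
    """Fit E(nb) = a*exp(-nb/tau) + c to validation MSE versus bottleneck
    size. Returns the fitted parameters and nb5, the size at which the
    decaying term has dropped to 5% of its initial amplitude (a stand-in
    for the thesis estimator, not the exact definition)."""
    expfun = lambda n, a, tau, c: a * np.exp(-n / tau) + c
    popt, _ = curve_fit(expfun, nb, mse, p0=(mse[0], 1.0, mse[-1]),
                        maxfev=10000)
    a, tau, c = popt
    nb5 = tau * np.log(1.0 / 0.05)   # solve a*exp(-n/tau) = 0.05*a
    return popt, nb5

# synthetic example: the error levels off once the bottleneck is large
# enough to carry the underlying command
nb = np.arange(1, 8)
mse = 0.016 * np.exp(-nb / 1.3) + 0.004
popt, nb5 = fit_error_curve(nb, mse)
```

Retraining the IINN several times per bottleneck size yields a distribution of such fits, from which confidence intervals such as the 90% CIs reported in the tables can be derived.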
This analysis was repeated for subject B and fast movements, and the results are shown in Figure 4.9 and Table 4.2. These results are very similar to those of the previous paragraph and agree with the findings for slow movements of subject A.

Table 4.1.: Metrics for subject A, Tm = 0.5 seconds

  IINN              nb5    d̂      CI             nbracket   FE     FQ
  L1 = 8,  L2 = 8   3.37   3.70   [3.49, 3.86]   -0.30      0.07   3.8
  L1 = 10, L2 = 10  3.76   4.11   [3.79, 4.32]    0.11      0.14   3.4
  L1 = 12, L2 = 12  3.99   4.36   [3.86, 4.76]    0.36      0.08   3.7

Table 4.2.: Metrics for subject B, movement time 0.3 seconds

  IINN              nb5    d̂      CI             nbracket   FE     FQ
  L1 = 8,  L2 = 8   3.28   3.61   [3.32, 3.84]   -0.39      0.10   3.1
  L1 = 10, L2 = 10  4.48   4.89   [3.71, 6.70]    0.89      0.12   3.7
  L1 = 12, L2 = 12  3.70   4.05   [3.65, 4.59]    0.05      0.06   4.3

The confidence intervals in Figure 4.9(c) show a large uncertainty for 10 nodes. Inspection of the data indicated that this is due to an outlier (not shown) with a particularly low error value for nb = 7. This also causes the large value for nbracket in Table 4.2, but this is corrected in the further analysis below by increasing the number of bottleneck sizes and evaluations, with little change in d̂ (see Tables 4.3 and 4.4). Based on these results it was decided to use an IINN with 10 nodes in the input layers of L1 and L2, since a larger network does not appear to improve the results despite the increased number of adaptable parameters and computational effort. As stated above, the initial range of bottleneck sizes was set to 1 to 7 and the IINN was retrained 10 times for each bottleneck size. If the confidence interval for the dimension estimate was found to be larger than 1, the IINN was retrained an additional 5 times for each bottleneck size (see Section 2.4.3). This was repeated until the confidence interval was less than 1 or until the IINN had been retrained 30 times. The latter limit was imposed to limit the amount of computation needed.
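The retraining protocol just described can be sketched as a simple loop. Both `evaluate_iinn` (train one IINN at a given bottleneck size and return its validation MSE) and `dimension_estimate` (the curve-fit estimator of Section 2.4.3) are hypothetical stand-ins here, passed in as callables:

```python
def retrain_until_tight(evaluate_iinn, dimension_estimate,
                        nb_range=range(1, 8), n_init=10, n_step=5,
                        n_max=30, max_ci_span=1.0):
    """Retrain the IINN for each bottleneck size until the confidence
    interval on the dimension estimate spans less than max_ci_span,
    or until the retraining limit n_max is reached."""
    errors = {nb: [evaluate_iinn(nb) for _ in range(n_init)]
              for nb in nb_range}
    n = n_init
    while True:
        d_hat, (lo, hi) = dimension_estimate(errors)
        if hi - lo < max_ci_span or n >= n_max:
            return d_hat, (lo, hi), n
        for nb in nb_range:                 # 5 extra retrainings each
            errors[nb].extend(evaluate_iinn(nb) for _ in range(n_step))
        n += n_step
```

With the settings above this reproduces the schedule in the text: 10 initial retrainings, then batches of 5, capped at 30 per bottleneck size.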
The 35 IINN evaluations (5 retrainings for each of 7 bottleneck sizes) typically took 2 to 3 days on the computing cluster to complete. The penalty for stopping IINN retraining after 30 evaluations per bottleneck size is a larger confidence interval on the final dimension estimate. The results for all subjects are presented in Table 4.3(a) for slow, and in Table 4.3(b) for fast movements. Only the results with a confidence interval spanning less than 1 or with the maximum n = 30 retrainings of the IINN are included. In the results, bracketing values that fall outside the range defined in Equation 2.36 are marked with "*". The column on the right indicates the number of times (n) the IINN needed retraining to achieve an acceptable confidence interval. The results in Table 4.3 show some bracketing values that are outside the accepted range. For these subjects the bottleneck range was adjusted to achieve an acceptable value. Initially the number of bottleneck values included in the analysis was simply extended to 9, but this still resulted in one value for d̂ that was bracketed incorrectly (B, slow movement). To correct the bracketing, the range was limited by removing lower values from the included bottleneck values. The final results are shown in Tables 4.4(a) and 4.4(b). The extra column on the right indicates the range of bottleneck values used.

[Figure: panels (a) and (b) show model error [(m/s)^2] versus bottleneck size for L1 = L2 = 8, 10 and 12; panel (c) shows dimension estimate versus IINN size]

Figure 4.8.: MSE dependence on IINN configuration for subject A (slow). Training (a) and validation (b) error for variation of L1 and L2 size. The dimension estimates and their 90% confidence intervals are shown in (c).

[Figure: panels (a) and (b) show model error [(m/s)^2] versus bottleneck size for L1 = L2 = 8, 10 and 12; panel (c) shows dimension estimate versus IINN size]

Figure 4.9.: MSE dependence on IINN configuration for subject B (fast). Training (a) and validation (b) error for variation of L1 and L2 size. The dimension estimates and their 90% confidence intervals are shown in (c).

Table 4.3.: Metrics of all subjects for bottleneck sizes of 1-7 (values of nbracket marked * are outside the acceptable range)

(a) Slow movements

  subject   nb5    d̂      CI             nbracket   FE     FQ    n
  A         3.76   4.11   [3.79, 4.32]    0.11      0.14   3.4   10
  B         5.77   6.26   [3.62, 6.62]    2.26*     0.09   4.0   30
  C         3.85   4.21   [3.92, 4.45]    0.21      0.08   3.3   15
  D         4.55   4.95   [3.71, 6.02]    0.95*     0.08   4.2   30
  E         3.36   3.69   [3.42, 4.13]   -0.31      0.09   3.5   10
  F         3.67   4.02   [3.45, 4.41]    0.02      0.04   3.4   20
  G         5.17   5.62   [4.16, 6.61]    1.62*     0.07   4.3   30

(b) Fast movements

  subject   nb5    d̂      CI             nbracket   FE     FQ    n
  A         3.94   4.31   [3.83, 4.31]    0.31      0.08   3.5   15
  B         4.48   4.88   [3.88, 5.47]    0.88*     0.08   3.8   30
  C         3.95   4.32   [4.07, 4.58]    0.32      0.08   3.4   10
  D         4.03   4.41   [3.62, 4.65]    0.41      0.13   3.9   30
  E         3.86   4.22   [3.73, 4.52]    0.22      0.08   4.7   10
  F         3.98   4.35   [4.10, 5.00]    0.35      0.04   3.2   15
  G         4.01   4.39   [3.86, 4.47]    0.39      0.05   4.2   10

It is interesting to note that the estimated dimension changes little with adjustment of the range of bottleneck values, indicating that, as anticipated, the sensitivity of d̂ to proper bracketing is less of a factor than in the equal power mode cases evaluated in Section 2.4.2. When the range of bottleneck sizes is reduced to [3, 9] in Table 4.4, FQ is diminished. This is expected because the largest power modes are represented by the error decrease from nb = 1 to nb = 2 and from nb = 2 to nb = 3. When these are removed from the curve fit, the relative error FEexp of Equation 2.44 increases.
FQ remains well over 1 for all results, however, indicating that the exponential curve remains a better fit than a straight line.

The results suggest a relationship between the bracketing value and the confidence interval. For those data with an acceptable nbracket value, a limited amount of retraining appears sufficient to achieve a confidence interval spanning less than 1. In retrospect it would probably have been more efficient to adjust the range of bottleneck values to achieve proper bracketing before increasing the number of times the IINN is retrained. This means that in some cases the current results may have more IINN evaluations than necessary, which does not affect the result in a negative way.

Table 4.4.: Metrics after adjusting bottleneck ranges

(a) Slow movements

  subject   nb5    d̂      CI             nbracket   FE     FQ    n    nb
  B         6.07   6.57   [5.54, 7.11]    0.57      0.05   1.6   30   nb ∈ [3, 9]
  D         4.49   4.89   [3.47, 5.40]   -0.11      0.08   3.3   30   nb ∈ [1, 9]
  G         5.12   5.57   [4.04, 6.23]    0.57      0.09   3.2   30   nb ∈ [1, 9]

(b) Fast movements

  subject   nb5    d̂      CI             nbracket   FE     FQ    n    nb
  B         4.48   4.89   [4.59, 5.28]   -0.11      0.07   3.1   30   nb ∈ [1, 9]

4.2.2. Dimension estimate analysis

The results for the validation data with acceptable bracketing values from Tables 4.3 and 4.4 are used for further analysis below. Keeping in mind that the dimension can only have integer values, Table 4.5 summarizes the final estimates for d̂ based on the results for validation data, as well as the integer values above and below each dimension estimate. The values closest to d̂ are shown in bold. This shows that it may be reasonable to assume that the MC dimension is the same or similar for different subjects. We can also see that the true dimension of the MC is 4, 5 or 6, and perhaps even 7. Subject B is a notable outlier, presenting a higher dimension estimate for both slow and fast movements. Figure 4.10 also shows the dimension estimates and confidence intervals for all subjects. For fast movements the confidence bounds are generally much tighter and the dimension estimate falls consistently between 4 and 5. For slow movement, when the confidence interval is small the associated dimension estimate is also less than 5 (subjects A, C, E and F). The remaining subjects show a confidence interval that includes 4 and 5, except for subject B. For subject B the dimension estimate for fast movements also appears higher than for the other subjects.

Table 4.5.: Integer dimensions derived from d̂ of validation data for slow and fast movements (bold values indicate the closest integer values)

            0.5 sec.                              0.3 sec.
  subject   d̂      CI             below   above   d̂      CI             below   above
  A         4.11   [3.79, 4.32]   4       5       4.31   [3.83, 4.31]   4       5
  B         6.57   [5.54, 7.11]   6       7       4.89   [4.59, 5.28]   4       5
  C         4.21   [3.92, 4.45]   4       5       4.32   [4.07, 4.58]   4       5
  D         4.89   [3.47, 5.40]   4       5       4.41   [3.62, 4.65]   4       5
  E         3.69   [3.42, 4.13]   3       4       4.22   [3.73, 4.52]   4       5
  F         4.02   [3.45, 4.41]   4       5       4.35   [4.10, 5.00]   4       5
  G         5.57   [4.04, 6.23]   5       6       4.39   [3.86, 4.47]   4       5

[Figure: dimension estimate with confidence intervals per subject A-G, slow and fast]

Figure 4.10.: Dimension estimates with confidence intervals for all subjects and slow and fast movement data.

The dimension estimates d̂ are quite consistent across subjects, with the exception of subject B. We will therefore assume that the MC dimension is the same for each subject, which seems justified based on the results of Table 4.5. The Wilcoxon test is a statistical tool that can be applied to test the difference in the mean of a series of observations from a set value. It is a non-parametric test, i.e. it does not assume a specific distribution for the data, although it does assume a symmetrical distribution. Table 4.6 shows the p-values produced by this test for each of the three null hypotheses H0: "the median of the d̂ values from validation data is equal to 4, 5 or 6"¹. The p-value expresses the likelihood of collecting the given data assuming H0 is true.

Table 4.6.: Wilcoxon one-sample test p-values for assumed dimensions, slow and fast movement

  hypothetical dimension   slow   fast
  4                        0.11   0.02
  5                        0.47   0.02
  6                        0.05   0.02

Since the mean of the dimension estimates for slow movements is 4.72, it is not surprising that a dimension of 5 gets the largest p-value for slow movement, but Table 4.6 indicates that 4 also has to be considered as a possible dimension. For fast movement all values fall between 4 and 5, and the Wilcoxon test is not informative, since all data lie to one side of the hypothetical mean. The mean for fast movements is 4.41, so if we assume a symmetric probability distribution, 4 is the most likely dimension, followed by 5 and 6. Also of interest is the difference within subjects between slow and fast movement. A paired Wilcoxon test was performed to evaluate the dimension difference between slow and fast movements. With H0: "the means are the same for slow and fast movement" a p-value of 0.69 was found, indicating that the data supports the notion that complexity is the same for slow and fast movement data. Non-rounded data was used for all statistical tests.

In summary, the results support an MC dimension of 4 or 5 for the observed motor behaviour. The data supports the hypothesis that the MC dimension is the same for different subjects and for different target movement times. Subject B does not follow the same pattern as the other subjects, suggesting that the dimension may differ between subjects. This will be discussed further in Section 5.1.

4.2.3. Movement time and dimension

To determine whether or not movement time influences the MC dimension, the data for slow and fast movement were combined in a single set for each subject. Table 4.7 shows the results of the same analysis as was done above for each movement time separately.
None of the results for bottleneck sizes nb ∈ [1, 7] provided acceptable values for nbracket, so all values in Table 4.7 are for nb ∈ [1, 9]. Subjects B, C, E and G show values for nbracket outside the acceptable range, so the bottleneck range was adjusted, producing the results in Table 4.8. Table 4.9 summarizes the final values for d̂, confidence intervals, and the associated integer dimensions. Figure 4.11 repeats the dimension estimates from Figure 4.10 and adds those for the pooled movement times for direct comparison. The dimension estimates are generally higher for pooled movement times. The only exceptions are subjects B and D, where the estimates for slow movement are higher. The mean d̂ across subjects is 5.58 with a standard deviation of 0.74. The p-values for assumed mean values of 4, 5, 6 and 7 are presented in Table 4.10 and suggest that the most likely dimension is 6. For subject D the dimension estimate for slow movement shows a large confidence interval, indicating that the higher dimension estimate may be a random effect.

¹ The p-values were not corrected for multiple comparisons.

Table 4.7.: Metrics of all subjects for bottleneck sizes 1 to 9, pooled movement times (values of nbracket marked * are outside the acceptable range)

  subject   nb5    d̂      CI             nbracket   FE     FQ    n
  A         4.31   4.70   [4.52, 4.92]   -0.30      0.13   4.1   10
  B         5.46   5.92   [4.25, 6.63]    0.92*     0.12   3.1   30
  C         5.99   6.49   [5.02, 7.23]    1.49*     0.14   3.0   30
  D         4.38   4.77   [4.32, 5.76]   -0.23      0.16   4.7   30
  E         5.25   5.70   [5.48, 6.79]    0.70*     0.06   4.7   30
  F         4.68   5.10   [4.92, 5.78]    0.10      0.04   3.1   10
  G         5.65   6.13   [5.86, 7.51]    1.13*     0.09   5.1   30

Table 4.8.: Metrics for B, C, E and G with adjusted bottleneck range, pooled movement times

  subject   nb5    d̂      CI             nbracket   FE     FQ    n    nb
  B         5.46   5.92   [5.33, 6.56]   0.42       0.07   2.1   30   nb ∈ [2, 9]
  C         6.16   6.66   [4.43, 7.89]   0.66       0.11   1.5   30   nb ∈ [3, 9]
  E         5.30   5.75   [5.69, 6.40]   0.25       0.03   2.9   30   nb ∈ [2, 9]
  G         5.66   6.13   [5.83, 7.36]   0.63       0.05   3.2   30   nb ∈ [2, 9]
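The one-sample Wilcoxon p-values can be reproduced by testing the signed differences from the hypothesized dimension; sketched here for the slow-movement column of Table 4.6, using the final d̂ values from Tables 4.3 and 4.4:

```python
import numpy as np
from scipy.stats import wilcoxon

# final slow-movement dimension estimates, subjects A-G
d_hat_slow = np.array([4.11, 6.57, 4.21, 4.89, 3.69, 4.02, 5.57])

pvals = {}
for hyp in (4, 5, 6):
    # one-sample signed-rank test: median of (d_hat - hyp) equal to zero
    _, p = wilcoxon(d_hat_slow - hyp)
    pvals[hyp] = p
    print(hyp, round(p, 2))
```

With the exact null distribution (scipy's default for a small sample without ties) this yields p ≈ 0.11, 0.47 and 0.05, matching the slow-movement column of Table 4.6.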
For subject B, the dimension estimate for slow movement is clearly higher than for pooled movements, which is not possible. A possible explanation is that the dimension estimate for slow movements of subject B is a true outlier, and that the actual dimension should be lower. Subject B will be discussed in more detail in Section 5.1. The question is whether the movement dimension increases by 1 for the pooled data to allow for the extra task parameter (time). The paired Wilcoxon test was evaluated for the pooled movement times versus slow and fast movement with H0: "the mean difference is +1". The p-value of the comparison of d̂ values with those for both slow and fast movements was 0.69. For comparison, for H0: "the means are the same for pooled and non-pooled data", the p-values are 0.16 for slow and 0.02 for fast movement. This supports the hypothesis that adding movement time as a task parameter increases the MC complexity by 1.

Table 4.9.: Integer dimensions derived from d̂ of validation data for pooled movement times

  subject   d̂      CI             below   above
  A         4.70   [4.52, 4.92]   4       5
  B         5.92   [5.33, 6.56]   5       6
  C         6.66   [4.43, 7.89]   6       7
  D         4.77   [4.32, 5.76]   4       5
  E         5.75   [5.69, 6.40]   5       6
  F         5.10   [4.92, 5.78]   5       6
  G         6.13   [5.83, 7.36]   6       7

[Figure: dimension estimate with confidence intervals per subject A-G, slow, fast and pooled]

Figure 4.11.: Dimension estimates with confidence intervals for all subjects, showing results for both separate and pooled movement times.

Table 4.10.: Wilcoxon one-sample test p-values for assumed dimensions, pooled movement times

  hypothetical dimension   p-value
  4                        0.02
  5                        0.16
  6                        0.23
  7                        0.02

To what extent were subjects able to produce the desired movement times? To verify that the movement times are indeed different for the slow and fast movement tasks, the start and end point of each movement during practice and during the alternating fields were determined. Movement onset was defined as the moment when the velocity exceeded 15.6 mm/s, and movement completion as the moment when the trajectory remained within 33 mm of the target location for at least 100 milliseconds. This reflects the method used in the experiment, but does not necessarily reflect the true movement speed with accuracy, although it should be precise.

[Figure: panels (a)-(c) show smoothed movement time Tm [s] versus movement number; panels (d)-(f) show movement time histograms]

Figure 4.12.: Representative examples of movement times during the experiments. In (a), (b) and (c) a moving average filter with 5 taps was applied to the movement times for subject A, subject C and subject F respectively. The two traces indicate slow and fast movement times (see legend in (a)). The vertical line indicates where the first 20 practice trials in the null field end. The bottom row shows the movement time histograms for the last 200 movements of the same subjects A, C and F. Note that the distributions for fast movement (0.3 seconds) in (d) and (f) are clearly skewed towards the left, indicating that the movement time of 0.3 seconds is challenging for subjects. This was confirmed by feedback during the experiment. It appears subject C was not able to produce a different distribution of times for the fast movements.

The results for three subjects are shown in Figure 4.12 and demonstrate that there is a substantial overlap in the distribution of movement times.
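The onset and completion criteria described above can be sketched as follows. This is an illustrative reconstruction, not the original analysis code; the thresholds are those quoted in the text:

```python
import numpy as np

V_ONSET = 0.0156   # onset: speed exceeds 15.6 mm/s
R_TARGET = 0.033   # completion: within 33 mm of the target...
T_HOLD = 0.100     # ...for at least 100 ms

def movement_interval(t, xy, target):
    """Return (t_onset, t_end) for one trial, given sample times t [s],
    hand positions xy [m] of shape (N, 2) and the target location [m]."""
    speed = np.linalg.norm(np.gradient(xy, t, axis=0), axis=1)
    onset_idx = int(np.argmax(speed > V_ONSET))   # first sample above threshold
    near = (np.linalg.norm(xy - target, axis=1) < R_TARGET).astype(int)
    hold = max(1, int(round(T_HOLD / np.median(np.diff(t)))))
    # count of "near target" samples in the trailing window ending at i
    run = np.convolve(near, np.ones(hold, dtype=int), "full")[:len(near)]
    inside = np.nonzero(run >= hold)[0]
    end_idx = int(inside[0]) if len(inside) else len(t) - 1
    return t[onset_idx], t[end_idx]

# synthetic check: a 0.5 s minimum-jerk reach of 0.15 m sampled at 1 kHz
t = np.linspace(0.0, 0.8, 801)
s = np.clip(t / 0.5, 0.0, 1.0)
mj = 10 * s**3 - 15 * s**4 + 6 * s**5
xy = np.stack([0.10 + 0.15 * mj, np.full_like(t, 0.7)], axis=1)
t_on, t_end = movement_interval(t, xy, np.array([0.25, 0.7]))
```

For this synthetic reach the detected interval is slightly longer than the nominal 0.5 seconds, illustrating why the measured movement-time distributions in Figure 4.12 are broadened.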
The Wilcoxon test (unpaired) applied to the movement times with hypothesis H0: "the mean movement times are the same for slow and fast movements" results in very small p-values (p < 0.01) for all subjects. Table 4.11 shows the difference between the mean fast and slow movement times (∆Tm) for all subjects, as well as the standard deviations of the slow and fast movement times, indicating that on average subjects consistently moved faster when performing the fast movement task. The analysis was also performed using peak velocity as an indicator of movement speed, producing similar results. Of all subjects, C showed the most overlap between the movement times for slow and fast movements, shown in Figures 4.12(b) and (e). The overlap between movement times for slow and fast movements is readily explained by the presence of the changing force fields and the force perturbations, which increase the variability in actual movement times. What is interesting is that even for subject C there is an increase of slightly more than 1 in the dimension estimate from slow or fast movement to pooled movements.

Table 4.11.: Difference between fast and slow mean movement times and standard deviations

  subject   ∆Tm    SD slow   SD fast
  A         0.10   0.12      0.13
  B         0.09   0.23      0.17
  C         0.06   0.20      0.18
  D         0.08   0.17      0.16
  E         0.09   0.18      0.14
  F         0.14   0.13      0.14
  G         0.11   0.13      0.12

This can only be explained by assuming that even if a subject is not able to match the movement time demanded in the task description, the MP nonetheless integrates this input, resulting in a modified MC. In other words, even if a subject is not able to achieve the set movement time, the execution of the movement is nonetheless affected.

4.2.4. Force fields and dimension

The motor command is expected to be more complex in the presence of changing force fields, simply because the neuromotor system is challenged more. The 20 practice movements made in the null-field without force perturbations were also analyzed.
Since only 20 movements were made, 10 segments were taken from each movement trial, equally spaced between the 10% and 90% locations along the movement path. This resulted in a data set of about the same size as for the analysis of slow and fast movement. As in Section 4.2.1, the evaluated bottleneck sizes were initially 1 to 7, and this range was adjusted to achieve proper bracketing. Table 4.12 shows the results. In Figure 4.13 the dimension estimates are directly compared to those found for slow and fast movements in the presence of changing force fields and force perturbations. The expectation was that the dimension would be lower for null-field movements. The mean for slow movement in the null-field is 4.18 and the mean for fast movements is 3.88, less than the respective values of 4.72 and 4.41 for movements in changing fields. The paired Wilcoxon test was applied to the dimension estimates with H0: "the mean is less for null-field than for changing field conditions". The p-values were 0.23 for slow movements and 0.08 for fast movements and do not indicate a significant difference in dimension. The practice movements were initially not intended for analysis and so only 20 were made, enough for subjects to learn the movement time. Variability in these initial 20 movements is likely to be higher as subjects adjust their timing. On the other hand, the null-field data lacks richness relative to the changing field data, which comprises 200 movements. It is believed that this explains the lack of a clear decrease in dimension. In some cases the dimension estimates in Figure 4.13 do have clearly distinct and reduced values (subjects B and C slow, subjects E, F and G fast). These cases are consistent with each other. If more data were available, possibly the dimension for the null-field condition would be 2.5 to 3.

Table 4.12.: Metrics for null-field movements after adjusting bottleneck ranges

(a) Slow movements

  subject   nb5    d̂      CI             nbracket   FE     FQ    n    nb
  A         3.79   4.15   [4.06, 4.77]    0.15      0.03   2.2   10   nb ∈ [1, 7]
  B         3.32   3.65   [3.09, 3.70]   -0.35      0.02   2.3   20   nb ∈ [1, 7]
  C         2.17   2.43   [2.00, 2.75]   -0.57      0.03   2.2   10   nb ∈ [1, 5]
  D         3.73   4.09   [3.86, 4.82]    0.09      0.13   1.7   15   nb ∈ [1, 7]
  E         4.03   4.41   [3.44, 8.82]    0.41      0.04   1.6   30   nb ∈ [1, 7]
  F         5.13   5.57   [4.36, 6.80]    0.57      0.07   1.6   30   nb ∈ [1, 9]
  G         4.54   4.95   [4.81, 5.17]   -0.05      0.08   2.2   30   nb ∈ [1, 9]

(b) Fast movements

  subject   nb5    d̂      CI             nbracket   FE     FQ    n    nb
  A         5.11   5.55   [4.48, 6.63]    0.55      0.05   1.9   30   nb ∈ [1, 9]
  B         3.64   3.99   [3.48, 5.07]   -0.01      0.05   1.8   30   nb ∈ [1, 7]
  C         3.79   4.15   [3.84, 4.34]    0.15      0.16   1.6   10   nb ∈ [1, 7]
  D         4.09   4.47   [3.88, 5.87]    0.47      0.03   1.9   30   nb ∈ [1, 7]
  E         2.17   2.43   [1.97, 3.03]   -0.57      0.01   1.8   30   nb ∈ [1, 5]
  F         2.24   2.51   [2.40, 2.99]   -0.49      0.02   1.9   15   nb ∈ [1, 5]
  G         2.78   3.07   [2.99, 3.68]    0.07      0.07   1.7   30   nb ∈ [1, 5]

[Figure: dimension estimate with confidence intervals per subject A-G, slow, slow-null, fast and fast-null]

Figure 4.13.: Dimension estimates with confidence intervals for all subjects, showing results for movements both in a changing field and in a null-field.

4.3. Prediction of movements

We can now say with some confidence that the correct size of the bottleneck, and therefore the MC dimension, is 4 or 5 for movements where the target movement time was not pooled, and that this holds for either movement time and for all subjects. The results presented in the remainder of this chapter are based on estimates of the MC produced by the IINN for the non-pooled target movement times. In this section the ability of the IINN to predict movement trajectories is assessed for slow movements of subject A and fast movements of subject D. The M̂C values for the best results (lowest validation error) for nb = 4 and nb = 5 were used as input to a neural net.
This is essentially an IINN with L1 frozen to the learned weights. The L2 network was retrained on a larger segment of each movement. Data from movement onset to the end of the force perturbation were used to train the model, based on the assumption that the MC is constant for an unperturbed movement. Data from every odd-numbered movement were used for the training set and those with an even index for the validation set. Training and validation set each contained 3,500 to 4,000 patterns. In addition, the input to L2 was expanded to improve trajectory prediction; the input (Equation 2.20) was modified to:

    v̂_p(k) = L2( F*_p(k), x*_p(k − m), M̂C_p )                              (4.1)

where:

    x*_p(k − m) = [ x(k − m)  x_p(k − m − 2)  . . .  x_p(k − m − 26) ]      (4.2)

This makes for a total of 40 + nb inputs to L2. The expanded input position vector improves trajectory prediction because it allows the model to better exploit the low-pass nature of the data. The total number of weights for the L2 network was 472 for nb = 4 and 482 for nb = 5. The output of the IINN was the estimated velocity as before, which was integrated from movement start for a visual comparison with the actual movement trajectories. The best result from retraining 10 times was selected for each configuration and is presented below. Figure 4.14 shows the results for the last 5 movements in the null field before the first field change and the subsequent 5 movements in the negative curl field for slow movements of subject A. In Figure 4.15 the same is done for fast movements of subject D. First it should be noted that the SSE optimization places stronger emphasis on the larger errors, so it is expected that the relative error for low-speed movement will be higher. This can be seen for example in the onset of the downward movement of Figure 4.14(c), where it causes an offset between estimated and actual trajectory.
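The expanded position input of Equation 4.2 gathers the position at lag m and every second sample back to lag m + 26. A small sketch of assembling it; the sampling pattern follows the equation, while the stacking into one flat vector is an assumption:

```python
import numpy as np

def expanded_position_input(x, k, m):
    """Assemble x*_p(k - m) as in Equation 4.2: the planar position at
    lags m, m+2, ..., m+26, i.e. 14 samples of 2 coordinates each.
    x is an (N, 2) array of hand positions."""
    lags = [m] + list(range(m + 2, m + 27, 2))   # 14 lags in total
    return np.concatenate([x[k - lag] for lag in lags])
```

The 28 values produced here account for 28 of the 40 + nb inputs to L2; the remaining 12 presumably come from the force input F*_p(k).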
The IINN predicts the trajectories well up to shortly after perturbation end (big dots in Figures 4.14 and 4.15), but deteriorates after. Three possible explanations are: 1. the prediction suffers because the velocity is low near the end of the movement; 2. the model is incapable to generalize well beyond the data that was used for training; 3. the descending command has been modified to properly complete the movement. 65  4. Results The first factor certainly contributes to the error but its magnitude should be the same for movement start where the errors are relatively small. The second point argues that since only data from movement onset to perturbation end is used to adapt the IINN the lack of generalization near the end of movement is simply due to lack of ability of the model to generalize. The results look the same for the trajectories used for training and validation however, indicating that generalization is actually quite good. This argument is weakened somewhat because the validation set is used during IINN training to determine when to stop. Strictly speaking a third “test set” is needed to evaluate the model performance. This was not done here to ensure that sufficient samples of each field were present in the data sets. Not having a third test set is not expected to have had a large impact, also because training and validation set were chosen to be of equal size. It is assumed therefore that the deviations seen near the end of movement are primarily due to the third reason: the MC is adjusted near the end of movement to achieve the set objective. The predicted trajectories show significant deviations, but the tangential speed seems to be well matched. This is evident from Figures 4.14 and 4.15 where the black dots line up well with the end of the force perturbations indicated with arrows. If more past force and position samples are used as input to L2 the trajectory prediction can be improved, but this would ˆ input from L1. 
reduce the reliance of L2 on the MC  4.4. Significance of inferred motor commands The activation dynamics of muscle fibres in motor units and inertial dynamics of the arm have low-pass characteristics. This means that the auto-correlation of movement trajectories will typically be high for small time delays. This in turn implies that trajectories can be predicted quite well without any knowledge of the MC. Another way to say the same is that the low-pass nature of the intrinsic arm dynamics makes an inverse model such as the IINN quite difficult to apply. The data has to be sufficiently rich to perform such a model inversion. ˆ the IINN adaptation of Section 4.3 was repeated but the To verify the significance of MC ˆ values were replaced with zeros, effectively removing the MC ˆ input without changing the MC number of weights in the L2 model. The trained L2 neural nets were retrained with the modified training data, so that a direct comparison could be made between the unmodified trajectory ˆ removed. prediction and the prediction of the same L2 neural net retrained with MC Figure 4.16 shows the predicted trajectories for slow movements of subject A and Figure 4.17 for fast movements of subject D. When comparing to Figures 4.14 and 4.15 it is clear ˆ the trajectory predictions are quite good that even without the benefit of the identified MC for some movements. The percentage change in MSE was calculated for both subjects for nb = 4 and nb = 5, and are presented as scatter plots in Figure 4.18. All results are positive, ˆ was removed as input for trajectory indicating that in all cases the error was larger when MC ˆ may therefore be small compared prediction. The contribution to trajectory prediction of MC 66  4. Results  (b) 10V0  (c) 11T0  (d) 12V0  (e) 13T0  (i) 17T−  (j) 18V−  Xy [m]  5cm,20N  (a) 9T0  Xx [m] (g) 15T−  (h) 16V−  5cm,20N  Xy [m]  (f) 14V−  X [m] x  Figure 4.14.: Movements predicted for subject A, Tm = 0.5 seconds and with nb = 4. 
The captions above each sub-figure indicate the movement index (out of 200), whether it was used for training (“T”) or validation (“V”) , and whether the movement was in the null field (“0”) or in the negative curl field (“-”). The wide line is the actual movement, the thin line the integration of the velocity estimate. The arrows indicate size and direction of the force perturbation. The filled circle on the thin line curves corresponds to perturbation end on the model estimate as well as to the end of data used for training and validation. Beginning of movement is indicated with a circle, the end with an “X”.  67  4. Results  (b) 6V0  (c) 7T0  (d) 8V0  (e) 9T0  (i) 13T−  (j) 14V−  Xy [m]  5cm,20N  (a) 5T0  X [m] x  (g) 11T−  (h) 12V−  5cm,20N  Xy [m]  (f) 10V−  X [m] x  Figure 4.15.: Movements predicted for subject D, Tm = 0.3 seconds and with nb = 4. The same format as in Figure 4.14 is used.  68  4. Results (b) 10V0  (c) 11T0  (d) 12V0  (e) 13T0  (i) 17T−  (j) 18V−  Xy [m]  5cm,20N  (a) 9T0  X [m] x  (g) 15T−  (h) 16V−  5cm,20N  Xy [m]  (f) 14V−  X [m] x  ˆ inputs set to Figure 4.16.: The same data as Figure 4.14, but the model was adapted with MC zero. to the contribution of correlation between samples, but it is significant. The p-value resulting ˆ is removed”, from a one-sided paired Wilcoxon test with H0: “The error was smaller when MC was less than 0.001 for all cases.  4.5. Motor command and task and trial parameters The IINN operates on data on a per-movement basis. The algorithm attempts to capture what is common for all movements in L1 which “parametrizes” the movement in the bottleneck. ˆ of the command descending down The bottleneck parameters are a compact expression MC ˆ makes a significant the spinal cord from the brain. The previous section has shown that MC contribution to trajectory prediction. In this section a multiple nonlinear regression with a 69  4. 
neural network is done to investigate the relevance of task and trial parameters. The task parameters describe the instruction to the subject: the start and target locations and the movement time. The parameters describing a trial are the type of field and the force-pulse start, amplitude and direction. The force pulses were random, so no correlation is expected between them and M̂C, because they cannot be anticipated by subjects. They are therefore not included here.

Figure 4.17.: The same data as Figure 4.15, but the model was adapted with M̂C set to zero.

Figure 4.18.: Scatter plot comparisons of the percentage increase in MSE for IINN trajectory prediction when the M̂C input is removed from L2. Positive values indicate a larger error without M̂C. In (a) and (c) the values for nb = 4 and in (b) and (d) the values for nb = 5 for subjects A (slow) and D (fast) are shown. The left column in each plot indicates the increase in training error and the right the increase in validation error.

The regression was performed with the estimated M̂C as input and the start and target locations (task parameters) as outputs. The nonlinear regression used a neural net with a single nonlinear layer, similar to L1 presented in Section 2.2, with 4 nodes in the first layer, resulting in 30 weights for nb = 4 and 34 for nb = 5. Since only 200 patterns were available, all were used to train the neural net. Training was terminated when the error did not reduce by more than 1/100,000 during 20 updates, or after 2,000 iterations.
Training was repeated 10 times and the best result selected.

Figure 4.19 shows results for subjects A and D. The lower agreement for the start location is likely because the previous target location was used as the current start location. In practice subjects stopped somewhat off target, which introduces an error in the data used to perform the regression. This extra error does not occur in the target location, since the target does not depend on the accuracy of subjects. Table 4.13 shows the r² values for all subjects and speeds, demonstrating very good agreement. M̂C therefore embeds information regarding the desired movement trajectory.

The relationship between the field type and M̂C was also investigated. Because field type is categorical data, a classifier was constructed with M̂C as input and one output for each of the 3 fields. The target output for the active field was set to 1 and the others to −1. The same neural net structure was used with 6 nodes in the first layer, resulting in 51 weights for nb = 4 and

Figure 4.19.: Regression result of movement start (top row, r² = 0.80–0.88) and target (bottom row, r² = 0.95–0.96) location from M̂C for subjects A (slow) and D (fast) for nb = 4 and nb = 5. The dashed line indicates the ideal one-to-one correspondence.
57 weights for nb = 5. All 200 data points were used for training, and training was terminated when the error did not reduce by more than 1/100,000 during 20 updates, or after 5,000 iterations. The best result of 10 retrainings was used.

Table 4.13.: Result of regression of M̂C to task parameters

                   slow              fast
  subject   nb = 4   nb = 5   nb = 4   nb = 5
  A          0.93     0.92     0.95     0.95
  B          0.91     0.90     0.94     0.91
  C          0.91     0.92     0.92     0.89
  D          0.94     0.94     0.96     0.95
  E          0.93     0.91     0.94     0.91
  F          0.93     0.93     0.94     0.94
  G          0.89     0.90     0.95     0.94

The classification results for subjects A (slow) and D (fast) are shown in Table 4.14 as confusion matrices. As an example of how to interpret the table: for subject A, slow speed with nb = 4, of all movements in the null field 24 were classified correctly, 18 as movements in a positive curl field and 17 as movements in a negative curl field. The overall classification rate is the sum of all correct classifications as a percentage of the total number of movements. Table 4.15 shows the classification rates for all data.

Table 4.14.: Confusion matrices for classification of fields for subjects A (slow) and D (fast)

            A,slow,nb = 4    A,slow,nb = 5    D,fast,nb = 4    D,fast,nb = 5
  actual    Null  PC  NC     Null  PC  NC     Null  PC  NC     Null  PC  NC
  Null       24   18  17      49    5   5      47   17  16      49   16  15
  PC          8   45  16       9   57   3      23   27   9      30   20   9
  NC         12   14  45      13    1  57      17    7  37      11    9  41

Table 4.15.: Result of classification of field type based on M̂C

                   slow              fast
  subject   nb = 4   nb = 5   nb = 4   nb = 5
  A           57       82       59       68
  B           68       75       69       75
  C           63       65       64       64
  D           68       69       56       55
  E           63       72       75       69
  F           60       74       60       75
  G           65       66       60       60

Classification rates are all well above the "guess rate" of 30%, but are not particularly high. When assuming the same field for the current movement as for the previous one, for example, the classification rate would be on average 81% for each data set, based on an average of 5 movements per field. The data therefore does not suggest that M̂C is specifically tailored to the presented field.
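The weight counts and overall classification rates quoted above can be reproduced with a short sanity-check sketch (plain Python; the layer sizes and the Table 4.14 matrix are from the text, the helper names are ours):

```python
def n_weights(n_in, n_hidden, n_out):
    # Fully connected single-hidden-layer net; every node has a bias term.
    return (n_in + 1) * n_hidden + (n_hidden + 1) * n_out

# Regression net: nb inputs, 4 hidden nodes, 2 outputs (start, target)
print(n_weights(4, 4, 2), n_weights(5, 4, 2))  # 30 34

# Field classifier: nb inputs, 6 hidden nodes, 3 outputs (one per field)
print(n_weights(4, 6, 3), n_weights(5, 6, 3))  # 51 57

def classification_rate(cm):
    """Correct classifications as a rounded percentage of all movements."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return round(100 * correct / total)

# Confusion matrix of subject A, slow, nb = 5 from Table 4.14
# (rows: actual Null/PC/NC, columns: classified Null/PC/NC)
cm_A_slow_5 = [[49, 5, 5], [9, 57, 3], [13, 1, 57]]
print(classification_rate(cm_A_slow_5))  # 82, as in Table 4.15
```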
Classification results are generally better for nb = 5 than they are for nb = 4. In Table 4.16 the percentage of classifications that agreed with the field of the previous movement is shown. Although the percentages appear to follow the same trend as those in Table 4.15, the success rate is rather low and certainly does not suggest that the previous field was used as a basis for estimating the current one.

Table 4.16.: Correspondence between previous and current field classification based on M̂C

                   slow              fast
  subject   nb = 4   nb = 5   nb = 4   nb = 5
  A           52       73       54       64
  B           63       67       61       65
  C           58       62       56       58
  D           64       65       51       51
  E           62       65       67       62
  F           57       66       57       70
  G           62       63       58       54

4.6. Evidence of learning in motor command adaptation

Studies of motor learning in new fields show that it is plausible that the nervous system uses the error information of the previous movement to update its behaviour for the current one (Scheidt et al. [2001], Donchin et al. [2003]). If this view is correct, then the motor controller that generates adjustments to the MC in order to adapt to new force fields is a relatively simple movement-to-movement auto-regressive update.

In these previous studies movement error is expressed as a function of the difference between the movement trajectory and a hypothetical ideal trajectory. The ideal trajectory is usually chosen to be a minimum-jerk transition from start to end-point along a straight line, which has been found to fit unconstrained point-to-point movement well (Flash and Hogan [1985]). In our experiments each task is represented by a start and end-point, while the target movement time is constant within each data set. The task description TASK(p) for movement p is therefore composed of the y-coordinates of the start and end-point (the x-coordinate of movement targets is always the same). Note that the task description is equivalent to the ideal trajectory, since all one needs to specify a minimum-jerk transition is the start and end-point and the movement time.
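The minimum-jerk transition referred to here has a closed form (Flash and Hogan [1985]), and a small sketch makes explicit that start point, end point and movement time fully determine it (the function name and the example numbers are ours):

```python
def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk position at time t for a movement from x0 to xf in time T."""
    tau = t / T  # normalized time, 0..1
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# A 10 cm reach completed in 0.5 s: the profile starts at the start point,
# ends at the target, and passes the halfway point at half the movement time.
print(minimum_jerk(0.0, 0.1, 0.5, 0.0))   # 0.0
print(minimum_jerk(0.0, 0.1, 0.5, 0.25))  # 0.05
print(minimum_jerk(0.0, 0.1, 0.5, 0.5))   # 0.1
```

The polynomial 10τ³ − 15τ⁴ + 6τ⁵ also has zero velocity and acceleration at both endpoints, which is what makes it the jerk-minimizing transition.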
Incremental learning can be represented in equation form as an update to the MC from one movement to the next:

    ∆MC(p) = f(TASK(p), ERROR(p − 1))    (4.3)

Here the update to the MC for movement p is ∆MC(p). The update is assumed to be some function f(.) of the error made in the previous movement and the task description for the current one. The error is understood to be a function of the ideal and the actual trajectory TRAJ(p − 1), so the update equation can be rewritten as:

    ∆MC(p) = f(TASK(p), TASK(p − 1), TRAJ(p − 1))    (4.4)

∆MC(p) is incorporated into MC(p) by "adding" it to the previous MC, so:

    MC(p) = g(∆MC(p), MC(p − 1))    (4.5)
          = g(TASK(p), TASK(p − 1), TRAJ(p − 1), MC(p − 1))    (4.6)

This representation agrees with the diagrams presented in Figures 1.5 and 2.1. To verify that our data supports the hypothesis that incremental learning in this form occurs, new data sets were created for the IINN. The first data set implements Equation 4.6 in L1 and includes incremental learning. MC(p) is substituted with the estimate M̂C(p) to create the model:

    M̂C(p) = g(TASK(p), TASK(p − 1), TRAJ(p − 1), M̂C(p − 1))    (4.7)

The second data set implements a reduced version of Equation 4.6 in L1:

    M̂C(p) = g(TASK(p), TASK(p − 1), TRAJ(p − 1))    (4.8)

M̂C(p − 1) was removed from the input by replacing it with zeros. This makes the comparison between the models fair, since their IINN structures are identical. Removal of M̂C(p − 1) as input to L1 represents modelling of the motor control process without incremental learning. By comparing the modelling errors with and without incremental learning the relative importance of M̂C(p − 1) can be established.

The IINN was evaluated for both nb = 4 and nb = 5. L2 was maintained as before. Data was equally divided between training and validation sets. TASK(p) was represented by the start and target location for movement p.
TRAJ(p) was represented with 15 trajectory (x and y) sample values starting at perturbation onset. M̂C(p − 1) was taken from the best MC estimate from the previous IINN results. Movement times were not pooled together. Training was repeated 10 times for each data set and bottleneck size.

Figure 4.20 shows scatter plots of the errors found. For all cases both the mean and minimum error increase when M̂C(p − 1) is removed as input. Application of the unpaired Wilcoxon test with H0: "the error increase is less than or equal to zero" yields the p-values of Table 4.17. For nb = 5 the alternative hypothesis that the mean error is larger than zero should be adopted. For nb = 4 the difference in means was not significant (p > 0.05), although the p-values do confirm the same trend.

In Section 4.5 it was observed that for nb = 5 the field classification was generally better. The same is found here, suggesting that nb = 5 is the more likely value for the dimension estimate. The results show that M̂C(p − 1) contributes significantly to M̂C(p) when nb = 5. Consequently, the data supports incremental learning from one movement to the next.

Figure 4.20.: Scatter plots comparing IINN performance with and without incremental learning. The results for subject A (slow) and D (fast) are shown for nb = 4 and nb = 5. "T" indicates results for the training set and "V" for the validation set. The results for IINN models with M̂C(p − 1) removed from the input are marked with "*" on the horizontal axes.
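The unpaired Wilcoxon (rank-sum) test used here can, for small samples, be computed by exact enumeration; the sketch below is illustrative only (the error values are made up, not the experimental MSEs, and the implementation assumes no ties):

```python
from itertools import combinations

def ranksum_p_greater(x, y):
    """Exact one-sided p-value for the unpaired Wilcoxon (rank-sum) test:
    the probability, under H0, of a rank sum for x at least as large as
    the one observed. Assumes no tied values."""
    pooled = sorted(x + y)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    w_obs = sum(rank[v] for v in x)
    combos = list(combinations(range(1, len(pooled) + 1), len(x)))
    return sum(1 for c in combos if sum(c) >= w_obs) / len(combos)

# Hypothetical per-run MSEs: every run without M̂C(p-1) is worse.
err_without = [0.0071, 0.0065, 0.0080, 0.0068, 0.0075]
err_with    = [0.0049, 0.0052, 0.0047, 0.0055, 0.0050]
print(round(ranksum_p_greater(err_without, err_with), 4))  # 0.004
```

With two completely separated samples of 5 the smallest attainable one-sided p-value is 1/252 ≈ 0.004, which is why small p-values such as those in Table 4.17 are informative even with few repetitions.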
Table 4.17.: P-values for testing H0: the error increase is less than or equal to zero

                   training   validation
  A,slow,nb = 4     0.095       0.062
  A,slow,nb = 5     0.0093      0.018
  D,fast,nb = 4     0.16        0.16
  D,fast,nb = 5     0.026       0.0057

5. Discussion

Whoever in discussion adduces authority uses not intellect but rather memory.
Leonardo da Vinci, first notebook

5.1. The dimension of interactive motor behaviour

The results demonstrate that the complexity of motor behaviour during interactive motor tasks is low when compared with the number of DOF available in the arm and reflexes. The dimension of the MC for the tasks performed in the context of this research was found to be 4 to 5. This is in line with expectations based on studies of the dimension of movement trajectories. The difference between previous investigations and the present one is the explicit inclusion of interaction forces.

We can imagine the MC to be composed of a kinematic component that specifies the unperturbed hand trajectory, and a second component that specifies the mechanical behaviour around this ideal trajectory. This would be in agreement with the notion of impedance control proposed by Hogan [1985]. The unperturbed hand trajectory closely resembles a minimum-jerk transition from start to target location (Flash and Hogan [1985]), which is completely described by the start and end location and the movement time. Since movement time is fixed in our experiments, we may assume that 2 dimensions are used for specifying the movement start and target locations. A dimension of 2 is therefore a lower limit on the dimension we might expect. This leaves 2 to 3 dimensions for stabilization and adaptation during movement. Note that instead of start and target locations we might just as well take any other set of 2 parameters to describe an unperturbed movement, such as start location and movement distance.

A surprising finding is that the MC dimension appears the same or almost the same for all subjects.
The number of strategies a subject might employ is in part constrained by the task, which in our experiments limits subjects to movements in a horizontal plane with a fixed target movement time and target locations on a single straight line. However, this is not thought to sufficiently explain why all subjects use a similar complexity. One possible hypothesis is that the constraint on the MC comes from the underlying anatomy and physiology, which are more or less the same for different people. The MC is an analogue for the nervous signals travelling down the spine and is therefore limited by the number of channels available. The number of channels, however, is unlikely to be a constraining factor on movement dimension, since each muscle receives independent input to the MNP from the brain. With practice most arm muscles can be controlled independently, indicating that a synergy between muscles is not imposed peripherally. The constraint on the number of dimensions must therefore primarily reside in the brain, where the MP and MC are formulated, and this limitation appears to operate in a similar manner on all subjects included in this study. The implications of this finding for motor control will be discussed in Section 5.3.

Subject B demonstrated a higher dimension estimate for slow and fast movements when compared with other subjects. When movements were pooled, the dimension estimate for pooled movement was lower than the estimate for slow movement, although the confidence intervals on the estimates overlap (see Figure 4.11). It is possible that individual subjects use different dimensions, and so subject B may demonstrate a higher complexity than other subjects. This is supported by the observation that subject B also has the highest dimension estimate for fast movements (see Table 4.5).
However, it is not possible that the dimension estimate for pooled movement data is lower than the estimate for either slow or fast movement data alone; the mapping that explains both slow and fast movement is either of the same, or a higher, dimension. This suggests that the high dimension estimate for slow movement of subject B is a true outlier, possibly as a result of an anomaly in the data set that leads to poor convergence of the IINN during optimization. This possibility is addressed again below in Section 5.2.

5.2. Movement time and dimension

When the data for different target movement times are pooled together, the MC dimension increases by 1. The observations demonstrate a clear relationship between increased task complexity and the increase in bottleneck size required to model the data. Even when the movement times for slow and fast movement show a large overlap in their distributions, the increase in MC dimension is present. This indicates that even when a subject is not able to sufficiently adapt their motor behaviour to meet the requirements of the task description, he or she will nonetheless generate an adapted MC which incorporates the new instruction. This validates the diagram presented in Figure 2.1, where the task description is shown as input to the MP.

This result does not mean that movement time enters motor planning explicitly. Since it takes some training before a subject is able to consistently move in a specified time, movement time is probably expressed implicitly in the MP, perhaps in the form of an overall level of "effort" or speed.

An alternative explanation for the increase in dimension when target movement times are pooled is the inherent increase in complexity when a slower movement time is imposed on subjects. Milner [1992] decomposed targeted movements into movement primitives similar to minimum-jerk transitions, based on their tangential velocity profiles.
When moving at an unconstrained speed, a large initial segment is followed by smaller corrections to obtain the required end-point precision. When subjects are asked to deliberately slow their movement, however, trajectories appear more segmented and movement segments are distributed more evenly along the trajectory. Milner [1992] found that movement segments were approximately 200 milliseconds long, corresponding to long-loop reflex times. In the current experiments movement time is imposed on subjects, and so it is reasonable to assume that, at least for slow movements, segmentation occurs throughout. The analysis window of the IINN is therefore likely to contain parts of overlapping segments. If this is correct, the slow movements should show an increased complexity when compared to fast movements, because there is an increased likelihood of more than one segment in the analysis window. The mean dimension for slow movement was slightly larger than the mean for fast movement (a difference of 0.27, Section 4.2.2), even though the difference was not found to be significant. It is possible, then, that the increase in dimension when data is pooled is a result of the increased amount of data available to train the IINN, resulting in a dimension increase that is significant. It is difficult to resolve this issue based on the current experimental results. The tangential velocity profiles of slow null-field movements do show undulations consistent with the segmentation of movement reported by Milner [1992], but these undulations can be seen in the velocity profiles of fast movements as well.

5.3. Dimension and motor control

The L1 in the IINN generated an M̂C for each movement, but the mapping that inferred this estimate was the same for all movements in the data set. In other words, motor control behaviour occupied a single manifold of dimension 4 or 5 for all movements for a given target time.
This to some extent contradicts the hypothesis put forward by Wolpert et al. [1998] that many pre-existing internal models are available simultaneously and only the most relevant are selected to contribute to movement execution. If this were the case, the number of dimensions would be larger than found here. If we assume that one internal model is most appropriate for each of the 3 presented fields, then 3 of the 5 MC dimensions would be taken. The remaining dimensions would be needed to express the task parameters (start and target location), leaving no dimensions to spare. With an MC dimension of 5 this leaves no room for additional internal models to be included for task execution. It is possible that alternative internal models are evaluated off-line, only to be included for movement execution when they are found relevant, or that other internal models only contribute marginally and are not detected by the IINN. Note that the term "internal model" here is intended to mean pre-existing motor control primitives the CNS draws on. Learning of new internal models is discussed in Section 5.4.

Section 4.5 indicates that the type of field correlates only moderately well with M̂C, while movement start and target location show very good correlation. This result is incongruous with the assumption of one pre-existing internal model for each field. If this assumption were correct, a much higher correlation between M̂C and field type would have been found. This is taken as evidence that at the very early stage of learning no field-specific adaptation occurs. Instead a generic approach is adopted that gives an overall good performance for point-to-point movements. This is in agreement with results from Franklin et al. [2003], who demonstrated that learning in a new force field is a combination of a rapid overall increase of co-contraction and stiffness, followed by a slow selective reduction in contraction while an internal model is formulated for the task.
Franklin et al. [2003] fit exponential curves to these two processes, which can be used to calculate that the stiffness increase occurs over 3 to 16 trials, while the internal model formation occurs over 53 to 87 trials¹. In the current experiments subjects were in the same field for 3 to 7 consecutive movements, so it can be assumed that no significant field-specific internal model formation is achieved.

It is possible that the multiple internal models all exist on the same low-dimensional "motor manifold". In other words, the internal models express different motor behaviours, but the motor behaviour is limited to a low MC dimension (4 to 5 in our experiments). This construct has benefits for learning, discussed in the next section, but the low dimension also allows one to cover an appreciable segment of possible motor behaviours with a limited set of internal models. The number of internal models needed to cover a certain volume of "motor control space" increases with the number of dimensions. Another way to view this is that the number of cells needed to sample a hypercube at a fixed resolution increases exponentially with the dimension, the so-called "curse of dimensionality" (Bishop [1995]). A low MC dimension may therefore allow us to cover an appreciable volume in the motor control space. Viewed in this context the low value for the MC dimension can be considered to support the theory of multiple internal models. A lower MC dimension allows less flexibility in adaptation, but it makes it easier to become a generalist, able to execute a range of motor tasks even when some were never explicitly learned before. The motor manifold idea can be linked to sparse classifiers (see e.g. Muller et al. [2004]), which make a similar trade-off between computational complexity and dimension.

The idea that there is a multitude of internal models available is not new (Wolpert et al. [1998], Kawato [1999]), but the notion that these exist on a low-dimensional manifold is.
This has consequences for the way in which internal models may be detected. If many internal models are used by a subject during an experiment the MC dimension remains low, but is composed of a larger collection of contributing models. To decompose these, a source separation approach might be used.

The findings are compatible with the work of Donchin et al. [2003] and Shadmehr et al. [2005], who used the trajectory error introduced by exposure to a curl force field to find tuning curves. Their tuning curves were constructed based on the assumption that some form of learning and generalization occurred from one movement to the next. In their work the authors suggest Gaussian radial basis functions as the basic elements, which are computational analogues of biological neurons in the brain. The basic elements that give rise to the tuning curves could just as well be internal models, which of course would have a certain receptive field and therefore a tuning curve. Conversely, the internal models could each be implemented on a single neuron.

¹The lowest and highest values of the time constants τ1 and τ2 in the "VF" field were taken for the six muscles considered, and multiplied by 3 to estimate the 95% adaptation point. The VF field is similar to the negative curl field used here.

5.4. Dimension and movement learning

The 3 stages of learning proposed by Bernstein [1967] (see Section 1.3) suggest that during the initial stages of learning the arm is stiffened to stabilize the trajectory, in order to subsequently learn the forces required for the new task and then reduce overall stiffness. This increased stiffness during the initial stages of learning, and reduction in stiffness when a certain level of learning has been achieved, is reported by for example Franklin et al. [2003]. As learning progresses subjects learn the characteristics of the new force field, and the hand trajectory becomes simpler as the internal model becomes more complex.
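The footnote's conversion, multiplying a fitted time constant by 3 to estimate the 95% adaptation point, follows directly from the exponential form of the learning curves; a one-line check (the loop is ours, for illustration):

```python
import math

# Fraction of an exponential adaptation process completed after k time constants:
# 1 - e**(-k); after 3 time constants the process is about 95% complete.
for k in (1, 2, 3):
    print(k, round(1 - math.exp(-k), 3))  # 1 0.632 / 2 0.865 / 3 0.95
```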
The current results indicate that the MC dimension that describes interactive motor behaviour during the initial stage of learning is low. We did not investigate whether the MC dimension is modified during learning. An attractive hypothesis is that the MC dimension remains the same, independent of the stage of learning. In the context of the previous section, it is possible that internal models exist that specialize in the earlier stages of learning.

Clearly, the abundance of DOF the neuromuscular system of the arm has available far exceeds the number of modes used for motor control. This suggests that we learn a basic mapping for our motor system as infants. This mapping is then refined as we grow up, and is perhaps influenced by particular specialized motor skills we develop. Movement execution and new tasks are expressed in this low-dimensional basic map or "motor manifold". For this theory to be plausible, adaptation of the motor manifold has to be very slow when compared to adaptation of internal models or learning of new skills.

This may perhaps be one of the reasons why certain individuals are more capable in a particular sport: their motor manifold was formed in such a manner that it better suits the skill set needed for their chosen activity. It is generally believed that practice at a young age is an important factor in the success of elite athletes and others with specialized skills (Ericsson and Charness [1994]). Schneiberg et al. [2002] studied children 4 to 11 years of age and found that only from age 10 and up were "reach-to-grasp" movements comparable to those of adults. At younger ages the variability of movement trajectories was higher, as was movement curvature. The notion of a motor manifold forming during childhood is compatible with these observations. When the motor manifold is not yet fully formed, more DOF are being adjusted, increasing the dimensionality of the output. This is a process analogous in some respects to Bernstein's 3 stages of learning.
At a very young age movement is uncoordinated and highly variable, but with time and practice the dynamics of the limbs are learned and captured in a motor manifold, which reduces the number of DOF from the perspective of the motor control process. Internal models may be seen as extensions of the motor manifold used to address special tasks.

An advantage of having a motor manifold is that it reduces the complexity of learning new motor tasks. It is much easier to find a solution in a space of lower dimension. Consider again a hypercube in an N-dimensional space, sampled at a fixed resolution: the number of cells to search grows exponentially with N, hence the time to find an optimal solution also grows exponentially with N. The MC dimension is therefore likely to be a trade-off between versatility (high dimension) and learning speed (low dimension). It is possible that the constraint that made all subjects produce similar MC dimensions is in fact the computational limitation imposed by the required learning speed, rather than a constraint imposed by the neuro-anatomy.

An interesting theory put forward by Franklin et al. [2003] suggests that movement errors are addressed for each muscle individually, resulting in an elegant and simple motor control framework which agrees with their observations with human subjects. However, their postulated framework implies that the MC dimension should be equal to or larger than the number of muscles involved. If muscles are lumped together into 2 antagonists for each of the shoulder and elbow, plus antagonist bi-articular muscles, making a total of 6, and we add 2 for movement start and target, then the MC dimension for a given target movement time should have been 8. The current results found a lower value, but our experiment was not specifically designed to vary the contribution of each muscle group. One could argue that subjects never actually adapted their strategy but simply produced a high co-contraction.
Although it is probably true that subjects relied on co-contraction to negotiate the variable environment, some learning did take place, since subjects remained in each field for 3 to 7 movements. Even in randomly varying fields subjects continuously adapt their motor behaviour (Scheidt et al. [2001]).

5.5. Interpretation of the motor command

The task parameters correlate well with M̂C, but the fields do not (see Section 4.5). The link between M̂C and the start and target location is easy to support. When moving freely the arm will tend to move in a straight line with a minimum-jerk profile (Flash and Hogan [1985]). Such movements are completely characterized by the start and target location and the movement time. It is therefore reasonable to assume that in the current experiments 2 of the MC dimensions are used for specifying movement kinematics. When the target movement time is varied in the task description, this also enters the MC with a single dimension.

Although the classification of M̂C to field type is better than the guess probability, it is much lower than what would have been achieved if the guess for the current field had simply been the previous field. The lack of a clear link between M̂C and field type is interesting. It suggests that during the experiment subjects do not develop a distinctive approach for each field, but instead use a less systematic approach. If subjects used a specific combination of internal models for each field one would expect a better classification result, toward 81%, which is the percentage achieved if the field estimate for the current movement is the field observed during the previous one. As discussed above, this is assumed to be due to a generic response of the motor system to produce a stable behaviour. Adaptation to each field is too little to be reflected in M̂C.
This lack of field-specific adaptation can be explained by subjects relying primarily on impedance control during the early stage of learning, as discussed in the previous section. When using impedance control, increasing stiffness at the hand gives an overall good behaviour without requiring knowledge of the applied force fields. This strategy therefore agrees with the lack of correlation between M̂C and field type.

5.6. Issues

Some reservations should be made regarding the results presented in this thesis, although it is believed that these do not detract from the overall results. In the following sections some possible concerns are addressed individually.

5.6.1. Absence of adaptation

The subjects may simply not adapt, but settle for an on-average good strategy. This is in fact what was shown in the experiments by Scheidt et al. [2001], but in the same study it was demonstrated that subjects arrived at this strategy by continually adapting their motor behaviour in small steps. Their modelling showed that the results of the previous movement are used to improve the current one, even when changes in the force fields are unpredictable. It is therefore unlikely that subjects stop adapting. A difference here is that subjects do not repeat the same movement over and over, but perform point-to-point movements on a line segment. Adaptations in subsequent movements may therefore cancel each other out, reducing the speed of learning. It was demonstrated in Section 4.3 that the previous movement execution did influence the current movement, indicating that some attempt to generalize from one movement to the next is made.

5.6.2. A change of motor plan during an experiment

If a subject changes the strategy and the MP during an experiment in response to the force fields and/or force perturbations, they will then also modify the MC. If this happened halfway through an experiment, it is expected to either not affect, or to increase, the dimension estimate.
An increase would appear if the subject exploited unique modes previously not used, that now also need to be mapped by the IINN. The dimension estimate would remain the same, however, if the changed MP was executed with the same modes used previously. Most subjects produced dimension estimates that were very similar. In a few cases the dimension estimate was high. This could be interpreted as subjects who switched their behaviour and as a result produced a larger dimension estimate.

5.6.3. Insufficient richness in the data

The experimental setup and the choice of fields do not necessarily elicit a set of motor behaviours that covers the full range of possible responses. Although this may be true, it is difficult to verify, because it requires evaluation of the dimension for an increasing complexity of the environment. Increasing the number of fields reduces the number of movements executed per field or extends the length of an experiment. The results should be qualified as applying only to the experimental conditions for which they were found. This does not detract from the fundamental importance of the findings. What was found for the given experimental conditions is that the dimension was much lower than the number of DOF involved. Even if the dimension that was found is specific to the experimental conditions, the observation that the dimension is low with respect to the available DOF is expected to be generally valid. The hypothetical motor manifold is expected to be valid for any motor task.

5.6.4. Relationship between neural feedback and measured position and force

The feedback during movement from the arm to the CNS is composed of the length and shortening velocity of muscles, tendon forces, and other sensors in the skin and joints. The measured hand trajectory and interaction force are therefore insufficient to represent motor behaviour and sensory feedback. It is therefore incorrect to claim that M̂C corresponds to the information travelling down the spinal cord. It is true that the feedback is richer than what is measured during the experiments. Knowing the muscle lengths of two antagonistic muscles, for example, it is possible to obtain a measure of the stiffness as well as the torque generated by the muscles. However, during experiments the relationship between force and position is explored through the application of a force perturbation; the force perturbation ensures that the collected data is sufficiently rich. If the neural feedback from the arm contains information other than that relating to arm trajectory, the state of the compliance in the arm, and the interaction forces with the environment, then that is not captured by our measurements. This may for example include the level of fatigue in a muscle. Because the measurements capture the movement as well as the dynamic response to changes in interaction force imposed by the environment, it is believed that they do sufficiently represent the sensory feedback. The M̂C therefore is a representation of a portion of the information travelling down the spinal cord.

5.6.5. Coordinate system of planning

During the experiments, the elbow was supported in a sling and the wrist constrained. The shoulder however was unconstrained, and so the results cannot be applied to joint coordinates. Indeed, the current results make no statement regarding the question whether motor control is organized in joint or external Cartesian coordinates, or both. Although the shoulder was not constrained, subjects generally showed little movement in the shoulder during experiments. In addition, the shoulder adds DOF subjects might exploit, making the low MC dimension that was found all the more striking.

5.6.6. The low-pass nature of intrinsic dynamics

The arm and muscles behave as low-pass filters, and therefore naturally limit the complexity of motor behaviour.
In Figure 1.3 for example this is evident from the decay in eigenvalues of a low-pass linear system excited with a random sequence. It is true that the low-pass nature of the intrinsic dynamics of the arm limits the complexity of a movement sequence; one can only move so fast. The IINN, however, creates a mapping using data from 200 movements in parallel. The complexity is not expected in the single time-history of force and position for one movement, but in the alternative strategies that are applied during many movements. Even so, the low-pass nature does limit the complexity of movement, but this limitation necessarily exists at the planning stage also. It is unlikely that a coordinated individual will create motor plans that cannot be executed.

6. Conclusions

Nature answers only when she is questioned.
Jacob Henle (1809-1885), quoted in The Conquest of Epidemic Disease, Charles-Edward Amory Winslow, 1941.

6.1. Main contributions

To summarize, the main contributions of this dissertation are the following:

1. In the given experimental setting, the dimension of hand interaction dynamics is relatively low, 4 to 5 in experiments with a fixed target movement time, and is uniform across subjects. The complexity of interactive movement tasks is therefore low when compared to the degrees of freedom available. This has not been demonstrated previously.

2. A new method, the input inference neural network (IINN), was developed which allows identification of hidden states or inputs of a non-linear system without assumption of a model. The IINN was able to extract a representation of the motor command (MC) through inverse modelling and it was demonstrated that this representation contributes significantly to modelling of motor behaviour. The IINN produces an estimate M̂C_p for each movement p that correlates well with task parameters describing movement kinematics.

3.
The existence of a motor manifold was proposed as a hypothesis which explains the findings and simplifies motor planning and learning.

4. Variation of target movement time correlated with an increase in the MC dimension by 1. This is true even when actual movement times show a significant overlap for slow and fast movements. This is an indication that the motor plan is influenced directly by the task description provided to subjects, even when it is difficult to achieve the task goals. It is not suggested that time enters the task description explicitly.

5. The findings support the hypothesis that adaptation in the current movement depends on the internal model state and the error made during the previous movement.

6. A novel and simple impedance controller was developed and implemented on a robot with two degrees of freedom.

6.2. Future work

6.2.1. Motor manifold

Perhaps the most important contribution of the current work is the observation that models of the mature human motor control process need not be overly complex. The low dimension of MC indicates that the adaptive motor control process is constrained by computational limitations. It places a useful limitation on the necessary complexity of models for motor learning and control.

The hypothesis that movement control is organized on a low-dimensional motor manifold needs further study. The relationship between the motor manifold and internal models should be investigated; in particular the manner in which internal models populate the motor manifold, and how internal models are selected. An experiment including complete adaptation to the force field can show whether the MC dimension changes when an internal model is formed. Full adaptation to multiple force fields can be used to investigate whether the MC dimension increases with the number of field types used.
If the MC dimension does increase with the number of fields, this would directly support the multiple-internal-models hypothesis of motor control adaptation and refute the notion of a motor manifold. Internal models existing on a low-dimensional motor manifold is an attractive possibility, because it offers computational advantages that benefit learning as well as the ease with which acquired motor skills can be recalled. The existence of multiple internal models may be verified by estimating MC for a range of motor tasks and performing a source separation analysis on M̂C to discover groupings. If the results can be separated successfully, each group can be assumed to correspond to an internal model, and the relative contributions to each task quantified.

The experiments will have to be redone to include more advanced stages of learning. The motor manifold hypothesis suggests that a basic mapping is developed during childhood and is used to simplify the motor control process. If this is correct, a decrease in MC dimension with age should be observed. According to results by Schneiberg et al. [2002] motor performance reaches a level comparable to adults around the age of 10. If motor variability and MC dimension both decrease with age it is reasonable to assume that the higher motor variability at a young age is due to an incomplete and less organized motor manifold.

The motor manifold shape may be an indicator of the particular motor skill set an adult subject possesses. The motor manifold should be different for people who have expert motor skills in different fields, e.g. a violinist and a tennis player. It would be interesting to see how motor manifolds differ and where they overlap in people with different skills.

6.2.2. Movement segmentation

The existence of movement segments in tasks requiring end-point precision was demonstrated by Milner [1992] and their influence on the current results discussed in Section 5.2.
Their presence should be taken into account in future analysis. Movement segmentation involves the combination in time of motor primitives, meaning that the IINN model has to be adapted to also allow decomposition in time. One possible way to achieve this would be to apply a wavelet decomposition to the force and position data, and use clustering to reduce the amount of data. This reduced data set could then be used as input to a structure similar to the IINN. Alternatively, the experiment could be redesigned to avoid influence from segmentation, which was found to be more pronounced with increased end-point accuracy and imposed movement timing (Milner [1992]). This could be accomplished by removing the movement time constraint and presenting a larger target.

6.2.3. Increased task complexity

In the experiments the task description was simple: move to the next target and stop in the given time period. Since the task description is a direct input to the motor plan MP, it can be manipulated to increase the complexity of the output. This could for example include instructions regarding the shape of the trajectory, where subjects are asked to move through a point before arriving at the target. Alternatively, a subject could be asked to keep the joints very stiff when moving, or the opposite, to resist perturbations as little as possible. It may also be interesting to instruct subjects about the nature of the force fields, and to provide visual cues regarding the type of force field they move in. The objective would be to see if there is a hard ceiling to the level of complexity subjects can implement.

6.2.4. Alternative dimension estimation methods

Other algorithms exist for dimension estimation of system dynamics. One is the correlation dimension (Theiler [1987]), which is a geometric method that infers the intrinsic dimension of a data set using the distance between data points.
The data is first embedded in a dimension known to be larger than the intrinsic dimension. Next, the average number of points that fall within a radius r of each point is counted. The logarithm of this count is plotted against the logarithm of r, and the slope of this curve represents the dimension. As a simple example, if all points are on a line, then the number of points will increase linearly with r. If all points are on a 2-dimensional surface, then the number of points will increase with the square of r. This method is not applicable here, since we do not have a free system. The MC is an unknown input that influences the dimension, and we also have force perturbations and changing fields. Although the motor behaviour is reflected in the recorded force and trajectory information, the motor planning is more likely to be in terms of a desired trajectory and a hand compliance. These together determine the relationship between interaction force and hand trajectory, and can be expressed in a much simpler way than force and trajectory. This was explained in Section 1.4, where a random, highly complex force sequence is applied to a simple compliance. The observed force and trajectory are complex, but their relationship is simple. This is why the correlation dimension or NLPCA cannot be applied.

It may be interesting however to investigate the possibility of using a geometric method, either by adapting the method or by adapting the experiment. The IINN in its current form demands a large number of computations, and the computational expense and the amount of data will increase when higher dimensions have to be identified.

6.2.5. Continuous estimation of the motor command

The current IINN method analyzes motor behaviour one movement at a time. This limits application of the method to a constrained set of tasks. It is therefore of interest to generalize the concept of input inference and develop a method for continuous identification of the MC.
This would allow analysis of continuous and less structured tasks, and might also allow tracking the possible evolution of the intrinsic dynamics.

A. Design of a robust and simple impedance controller

Here the design and implementation of a robust and simple impedance controller (RSIC) is discussed. Motor behaviour during interaction tasks with the hand is usually studied by generating a force field with a robotic haptic device. Subjects have to learn the force field in order to properly execute a given task. An impedance controller is an obvious choice as it directly implements such fields. A substantial literature describes various implementations of impedance control, first introduced by Hogan [1985]. The fundamental concept of impedance control is to set the mechanical impedance a manipulator presents to the environment, rather than its force or position. Force and position control rely on contact and absence of contact, respectively, to maintain stability, but an impedance controller can be programmed to be stable under both conditions.

Impedance control is particularly important when a robot interacts with a human operator. When applying force to a handle to perform a task the robot has to present some resistance or haptic feedback, but the robot must remain stable and predictable when the operator lets go. The control algorithm that is presented here was designed specifically for interaction with a human operator. The design goal was to create a specified mechanical impedance at the interface while maintaining stability. This appendix first introduces the robotic platform in Section A.1 and presents the control algorithm in Section A.2. The actual implementation is described in Section A.3 and the results are in Section A.4.

A.1. The five bar manipulandum

A five bar parallel robot has been developed in the Neuromotor Control Laboratory at the University of British Columbia.
The geometry of the device has been optimised for workspace size and shape as well as the transmission ratio of motor torque to end-effector force (Hodgson [1996]). An advantage of this design is the uniform inertia at the end-effector, because the distal links or forearms are close to being at a straight angle throughout the workspace. The robot has two degrees of freedom, operating in a horizontal plane. Two direct-drive motors (Parker-Compumotor Dynaserv DR1060B) are located at the base (at the "shoulders"); the two "elbow" joints are passive (see Figure A.1). The link lengths are 35 cm for the proximal ("upper arm") links and for the distance between the motor axes, and 65 cm for the distal ("forearm") links.

Figure A.1.: The 5 bar parallel manipulandum has 2 passive elbow joints. The diagram on the right shows how an operator interacts with it.

Exact link-lengths were obtained using a calibration procedure detailed in Appendix B. The end-effector or handle is instrumented with a multi-axis force sensor (ATI Industrial Automation Inc. "Mini" sensor) which measures the forces in the horizontal plane as 12 bit integers. Encoders integrated with the motors supply position feedback at 507904 pulses per revolution. The motors were selected for the high torques they can deliver: 60 N m. Unfortunately they also have rather high friction and cogging amplitudes. These values were identified by fitting a linearised model to data from excitation experiments. The viscous friction was found to be 0.8 N m s/rad, the static friction (modelled as a constant) about 1.7 N m, and the cogging amplitude for one motor 0.8 N m. The cogging frequencies are 124 and 496 cycles per revolution. These values are unacceptably large and must be handled by the controller. The large static friction makes the use of a force sensor necessary.
Without use of a force sensor the operator has to initiate movements with a force larger than the static friction before the controller can compensate. In a system with very low friction a model-based force estimate can be used instead of a sensor.

An initial controller design was based on the sliding mode controller as described by Slotine and Li [1991], with an adaptive model of the robot and a reference model for the desired mechanical impedance. This type of controller uses high gains to drive the robot to the desired state. It turned out that the arms of our robot were too flexible in the out-of-plane direction and started to vibrate even with moderately high gains. High gains should not be needed for an impedance controller that has to produce relatively low stiffness. In the new design the desired impedance is specified in the control loop, which makes high controller gains unnecessary. This controller design can therefore be used for impedance control of lighter and therefore less costly robotic platforms.

Figure A.2 shows the hardware configuration. The main computer workstation hosts a Digital Signal Processor (DSP) board (dSpace Inc. DS1102) that runs the controller program and provides data logging facilities for the Matlab environment. The DSP receives position feedback from the motor drives and provides torque commands. A second computer runs a program called TargetDisplay (TD). It provides visual feedback to the operator and reads the force data from the sensor. The TD communicates with the DSP through a serial port.

Figure A.2.: Schematic of the control system hardware.

Figure A.3.: Model of the motor, force sensor and the environment.

A.2. The control algorithm

The development of the control algorithm is presented for the scalar case. The extension to more degrees of freedom is straightforward.
It is necessary to model the motor together with the force sensor and the environment. Since the environment is not predetermined we will use an inertia j_e as load for the analysis. This can be considered a worst-case scenario, since the manipulandum will be interacting with the hand of an operator, which will introduce stabilizing damping. Figure A.3 shows the model, where the motor is simplified to linear damping b_m and rotor inertia j_m. The force sensor connects the motor to the environment and has stiffness k_s and damping b_s. The forces on the rotor inertia j_m and the external load j_e are τ_m and τ_e, and their positions are q_m and q_e respectively. The transfer equation of the system of Figure A.3 is:

τ_s/τ_m = j_e k_s s / (j_m j_e s^3 + (j_m b_s + j_e (b_m + b_s)) s^2 + (b_m b_s + k_s (j_m + j_e)) s + b_m k_s)   (A.1)

To be able to use numerical analysis the transfer function is divided in two. The first transfer function expresses the coupled motor dynamics:

H_qm(s) = q_m/τ_m   (A.2)
        = (j_e s^2 + b_s s + k_s) / (s (j_m j_e s^3 + (j_m b_s + j_e (b_m + b_s)) s^2 + (b_m b_s + k_s (j_m + j_e)) s + b_m k_s))   (A.3)

The second transfer function expresses the coupled torque sensor dynamics:

H_ts(s) = τ_s/q_m = (τ_s/τ_m) (1/H_qm(s))   (A.4)
        = j_e k_s s^2 / (j_e s^2 + b_s s + k_s)   (A.5)

The desired impedance that we wish to present at the interface has the form:

τ = j_d ω̇_ic + b_d (ω_ic − ω_0) + k_d (q_ic − q_0)   (A.6)

The subscript d indicates desired values, ω_ic is the reference velocity for the impedance controller, and ω_0 and q_0 are reference values for the damping and spring components. By using the actual motor position for q_ic, Equation A.6 reduces to first order and can be written as:

ω_ic = (1/s) (1/j_d) (τ − b_d (ω_ic − ω_0) − k_d (q_m − q_0))   (A.7)

A discretized implementation of this relationship is:

ω_ic(k) = (T z/(z − 1)) (1/j_d) (τ(k) − b_d (ω_ic(k−1) − ω_0) − k_d (q_m(k−1) − q_0))   (A.8)

where T is the sampling time.
For convenience, let us set ω_0 = 0 and q_0 = 0; then:

ω_ic = (T z / (j_d (z − 1) + T b_d)) (τ − k_d q_m)   (A.9)
     = CC (τ − k_d q_m)   (A.10)

This formulation contains a single integrator of force. Figure A.4 shows the controller diagram. CC (Equation A.10) specifies a reference velocity based on the desired impedance parameters and force and position feedback. The velocity feedback is derived from the position signal with a bandwidth-limited difference filter:

H_vel = ((1 − a)/T) (z − 1)/(z − a)   (A.11)

The controller C is simply a constant gain factor K_p. The closed loop system equation for the impedance controller is:

H_cl = H_qmd C · CC / (1 + H_qmd C (H_vel + CC (k_d + H_tsd)))   (A.12)

where H_qmd is the discrete version of H_qm (Equation A.3) with a zero-order hold, and H_tsd the discrete version of H_ts (Equation A.5) with a zero-order hold.

Table A.1 shows the values that were used for the stability analysis. The final control loop operates in workspace coordinates. The values for the motor parameters were found by taking the mean of the eigenvalues of the mass and the damping matrix in workspace coordinates. The idealized dynamic equation of the robot is:

τ = H(q) q̈ + C(q, q̇) q̇   (A.13)

The system matrices can be expressed in workspace coordinates:

H_ws = J^{−T} H(q) J^{−1}   (A.14)

C_ws = J^{−T} C(q, q̇) J^{−1} − H_ws J̇ J^{−1}   (A.15)

The system parameters were obtained by fitting a linearized model of the complete robot to data obtained during a position-controlled calibration. The matrix C(q, q̇) is composed of Coriolis and viscous friction terms only; static friction is treated as a disturbance and ignored here. The sensor stiffness k_s is from the manufacturer's literature and the sensor damping b_s was calculated from it and the resonance frequency (3200 Hz), also from the specifications. Figure A.5 shows the root-locus plot of the system for two values of desired inertia.
The diagram shows that for a large desired inertia and a large gain the system becomes unstable. Clearly there is a trade-off between inertia and control gain. When the influence of the force sensor dynamics is removed by assuming the force sensor stiffness to be infinite, all the root-loci are inside the unit circle. The roots that dominate system behaviour are those closest to the unit circle. In Figure A.6 the dominant part of the root-locus plots for Kp = 500 is shown for varying b_d and j_d. Also shown are the roots of the ideal system, which are equal to:

r_{1,2} = exp( T_s (−b_d ± sqrt(b_d^2 − 4 (j_d + j_e) k_d)) / (2 (j_d + j_e)) )   (A.16)

For smaller desired inertia the force-sensor feedback becomes more important and performance deteriorates. When the force sensor is made ideal (k_s → ∞) the actual and ideal root-loci coincide.

Figure A.4.: Impedance controller diagram.

Table A.1.: Values used for the analysis

  variable   value    units     description
  j_m        4.2      kg        motor inertia
  b_m        12.5     N s/m     motor damping
  b_s        4.6e3    N s/m     sensor damping
  k_s        11e6     N/m       sensor stiffness
  j_e        2.2      kg        external inertia
  j_d        j_m      kg        desired inertia
  b_d        1.0      N s/m     desired damping
  k_d        10       N/m       desired stiffness
  a          0.9                velocity filter constant
  K_p        500      N s/m     control gain
  T          0.001    seconds   sample time

Figure A.5.: Root-locus plot for Kp = 1 → 10^4. The root-loci visible don't depend on the desired stiffness and damping, but do depend on desired inertia. The filled circles indicate roots for Kp = 500. When the force sensor is made ideal (k_s → ∞) the root-loci stay inside the unit circle.
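As a quick numerical cross-check (not part of the original analysis), the ideal roots of Equation A.16 can be evaluated with the scalar values of Table A.1:

```python
import cmath

def ideal_roots(jd, je, bd, kd, Ts):
    # r_{1,2} = exp(Ts(-bd +/- sqrt(bd^2 - 4(jd+je)kd)) / (2(jd+je))), Eq. A.16
    disc = cmath.sqrt(bd ** 2 - 4 * (jd + je) * kd)
    return [cmath.exp(Ts * (-bd + sgn * disc) / (2 * (jd + je))) for sgn in (1, -1)]

# Values from Table A.1 (with jd = jm): the discriminant is negative, so the
# roots are a complex-conjugate pair lying just inside the unit circle.
r1, r2 = ideal_roots(jd=4.2, je=2.2, bd=1.0, kd=10.0, Ts=0.001)
```

Both root magnitudes equal exp(−T_s b_d / (2 (j_d + j_e))), slightly below 1, confirming the (slow) stability of the ideal closed-loop system.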
Figure A.6.: Detail of the root-locus plot for bd = 1 → 40 N s/m, Kp = 500.

Figure A.7.: Bode diagram (gain and phase versus frequency) for the closed-loop system with the ideal behaviour as a reference.

Figure A.8.: Block diagram of the 2 DOF implementation of the RSIC. The force sensor is mounted on one of the robot links and its reading is rotated to align the measured force with the workspace coordinates.

A.3. Implementation

The control algorithm is implemented in the x- and y-coordinates of the workspace. It is implemented in C and runs on the DSP board. Figure A.8 shows a block diagram of the final controller implementation. Also indicated in Figure A.8 are functional blocks that implement the force field, the force perturbation, and a bounding box which implements a barrier that makes it harder (but not impossible) to leave the workspace. The workspace here is a rectangular space that has a shape corresponding to the computer screen the user interacts with. The sum of the force vectors and the force sensor values is labeled Fext in Figure A.8, essentially tricking the null field block into creating a reference velocity for the controller that corresponds to the combined effect. Note that velocity is derived from position in workspace coordinates.
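A minimal scalar sketch of the reference-velocity update (Equations A.8-A.10, with ω_0 = q_0 = 0) and the proportional velocity loop; the parameter values below are placeholders for illustration, not the tuned controller:

```python
def impedance_controller_step(tau, q_m, omega_m, state, p):
    """One sample period: integrate the desired admittance (Eq. A.8 with
    omega_0 = q_0 = 0) and close a proportional loop on the velocity error."""
    w = state["omega_ic"]
    w += p["Ts"] / p["jd"] * (tau - p["bd"] * w - p["kd"] * q_m)
    state["omega_ic"] = w                 # reference velocity omega_ic
    return p["Kp"] * (w - omega_m)        # commanded torque

# Example: a constant 5 N m interaction torque against pure damping
# (kd = 0) drives omega_ic toward tau/bd = 0.5 rad/s.
params = {"jd": 1.0, "bd": 10.0, "kd": 0.0, "Kp": 500.0, "Ts": 0.001}
state = {"omega_ic": 0.0}
for _ in range(2000):
    tau_cmd = impedance_controller_step(5.0, q_m=0.0, omega_m=0.0,
                                        state=state, p=params)
```

Because the desired impedance appears inside the loop as this admittance integrator, the outer gain Kp only has to servo the velocity error, which is what keeps the required controller gains low.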
The velocity filter has the following equation:

H_vel(z) = ((1 − a)/T_s) (z − 1)/(z − a)   (A.17)

The controller operates with a sample time of T_s = 1 millisecond, and with a = 0.9 the high-frequency cutoff of the filter is at approximately 35 Hz. This approach avoids possible high-frequency excitation due to differentiation of noise in the position signal. The null field is implemented as follows:

v_ic/F = (T_s z / m) / (z − 1 + T_s b/m)   (A.18)

For small sampling times this approximates:

v_ic/F = 1/(m s + b)   (A.19)

In the final implementation the values were fixed to m = 1 kg and b = 1 N s/m. The final controller gain was set to 50.

A.4. Results

Data from participants moving in the null-field was used to verify the operation of the RSIC. The recorded forces were used as input to a discrete filter identical to that implementing the null-field in the RSIC:

V(z)/F(z) = T_s z / (z − 1 + T_s)   (A.20)

The recorded data was sampled with T_s = 10 milliseconds. Figure A.9 shows the results for 10 seconds of data.

Figure A.9.: Ideal and actual performance of the RSIC. (a) Force in the x-direction used as input. (c) Actual and ideal velocity in the x-direction. (b) and (d) are the same plots for the y-direction.

The actual velocity is overall lower than the ideal because of the relatively low loop gain. There are also larger deviations, such as in Figure A.9(d) at 2-3 seconds. This is most likely due to the cogging and friction that is particularly high in one of the motors and leads to extra disturbances. Overall, experience with the RSIC operating in the null-field was of smooth and easy movement requiring very little effort.
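The verification above feeds the recorded forces through the filter of Equation A.20; a minimal sketch of that filter, driven here by a synthetic constant force in place of the recorded data:

```python
def null_field_velocity(forces, Ts=0.01):
    # Eq. A.20 (m = 1 kg, b = 1 N s/m): v(k) = (v(k-1) + Ts*F(k)) / (1 + Ts)
    v, out = 0.0, []
    for F in forces:
        v = (v + Ts * F) / (1.0 + Ts)
        out.append(v)
    return out

# A constant 2 N force settles toward v = F/b = 2 m/s with a 1 s time constant.
v_trace = null_field_velocity([2.0] * 2000, Ts=0.01)
```

Running the actual force recordings through this filter produces the "ideal" traces of Figure A.9, against which the measured velocities are compared.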
The RSIC provided a dramatic improvement over unpowered motors. The subjective impression from interacting with the null-field is that the robot behaves smoothly and is easy to operate. The cogging torque in one of the motors is sometimes noticeable but not distracting. Cogging torque in electrical motors is due to interaction between the permanent magnets and the stator teeth, and causes torque ripple.

B. Link length calibration

B.1. Precise link-length calibration is necessary

The robot controller operates in work space coordinates, and hence the position feedback from the actuators needs to be transformed from joint space. Errors in the link-length values will introduce additional error in the control and work space position data and should therefore be minimised.

B.2. Methods

B.2.1. Data collection

The Polaris 3D localizer from Northern Digital was used to determine the link-lengths. This tool is capable of determining the position of marker arrays within its workspace with a precision below 1 mm. A marker array was fixed to each link and to the robot frame. While recording marker positions the robot was moved through the work space to obtain a sufficiently rich set of data. This was repeated 10 times and each trial had on average 323 valid points. Polaris provides an [x y z] vector to the origin of the marker coordinate frame and a four-element quaternion describing the rotation matrix. I found that the quaternion vector was not always of unit length, so it was normalized before further processing. From the Polaris data the homogeneous transform matrix was constructed for each marker at each valid point. A recording point was valid if all the markers were within the field of view of the Polaris cameras.

B.2.2. Locating joint centres

The method for determining link lengths from 3-dimensional position data derived by O'Brien et al. [2000] was used here to calibrate the link lengths.
Figure B.1 shows how the joint centres are determined for a link. Once both joint centres of a link with respect to the link frame are found, the link length is easily calculated as the distance between the two joints.

Figure B.1.: Locating the joint centres with respect to the link frame. (b) shows how a joint is located on link 2 (see text).

To determine the location of the joint between links 1 and 2 with respect to link coordinate frame F2, we will express all motion of link 2 in link frame 1. To do this we have to express all homogeneous transforms determined for F2 relative to F1:

T21 = Tw2 ∗ Tw1^{−1}   (B.1)

If we then consider a second orientation for link 2 with frame F2* and transform T2*1, the transform from link orientation F2 to F2* is:

T22* = T21 ∗ T2*1^{−1}   (B.2)
     = [ R(θ)  a ]
       [  0    1 ]   (B.3)

where R(θ) is the rotation matrix and a the translation vector from F2 to F2* expressed in F2. From Figure B.1 it is clear that:

a = R(θ)(−p) + p   (B.4)

So: starting in F2, I translate via vector −p to the joint centre, then rotate by R(θ) and translate p to reach the origin of F2*. From Equation B.3, R(θ) and a are known. Vector p can then be calculated by rearranging Equation B.4:

a = (I − R(θ)) p   (B.5)

which can be worked into a form that gives the solution for p:

(I − R(θ))^T a = (I − R(θ))^T (I − R(θ)) p   (B.6)

Pre-multiplying with the inverse of the matrix on the right-hand side and rearranging:

p = ((I − R(θ))^T (I − R(θ)))^{−1} (I − R(θ))^T a   (B.7)

or:

p = H^{−1} q   (B.8)

where:

H = (I − R(θ))^T (I − R(θ))   (B.9)

and:

q = (I − R(θ))^T a   (B.10)

B.2.3. Combining multiple measurements

A single combination of link orientations is not sufficient to provide an accurate estimate of the joint location, due to the noise present in the measurements combined with the matrix inversions.¹ To improve this, the results of multiple combinations of link orientations are used.
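The normal-equation solve of Equations B.5-B.10, accumulated over several orientation pairs, can be sketched as follows; the joint-centre vector and the two rotations (chosen about different axes so that the summed H is invertible) are synthetic:

```python
import math

def rot(axis, angle):
    # 3x3 rotation about the x- or z-axis.
    c, s = math.cos(angle), math.sin(angle)
    return {"x": [[1, 0, 0], [0, c, -s], [0, s, c]],
            "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[axis]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def t_mat_mat(A, B):  # A^T * B
    return [[sum(A[k][i] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def t_mat_vec(A, v):  # A^T * v
    return [sum(A[k][i] * v[k] for k in range(3)) for i in range(3)]

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

# Synthetic joint centre observed through two relative rotations.
p_true = [0.10, 0.20, 0.30]
H_sum = [[0.0] * 3 for _ in range(3)]
q_sum = [0.0] * 3
for R in (rot("z", 0.5), rot("x", 0.7)):
    I_R = [[(1.0 if i == j else 0.0) - R[i][j] for j in range(3)]
           for i in range(3)]
    a = mat_vec(I_R, p_true)        # a = (I - R) p, Eq. B.5
    H = t_mat_mat(I_R, I_R)         # Eq. B.9
    q = t_mat_vec(I_R, a)           # Eq. B.10
    for i in range(3):              # accumulate the sums of H and q
        q_sum[i] += q[i]
        for j in range(3):
            H_sum[i][j] += H[i][j]

p_est = solve3(H_sum, q_sum)        # p = H^-1 q, Eq. B.8
```

With noise-free synthetic data the joint centre is recovered exactly; a single rotation would leave p undetermined along the rotation axis, which is why the combinations are summed.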
All measured orientations within a trial for a given link were combined to estimate the joint centre location. A combination was used only if the angle θ between the two link orientations was more than 10° away from 0° or 180°. The matrices Hk and vectors qk for each valid combination of link orientations k were summed:

    HΣ = Σk Hk    (B.11)

    qΣ = Σk qk    (B.12)

These were then used to calculate the final joint centre estimate for this trial:

    p = HΣ⁻¹ qΣ    (B.13)

A "better" way to average the results over many observations is to stack the matrices [I − R(θ)]k and the vectors ak:

    B = [ [I − R(θ)]1 ]
        [ [I − R(θ)]2 ]
        [      ⋮      ]
        [ [I − R(θ)]N ]    (B.14)

    s = [ a1 ]
        [ a2 ]
        [ ⋮  ]
        [ aN ]    (B.15)

The estimate for vector p, analogous to Equation B.7, is then:

    p = (Bᵀ B)⁻¹ Bᵀ s    (B.16)

This is the form that gives the pseudo-inverse of B times s, and hence the least-squares fit of p to the data. It is, however, computationally intensive and requires a large amount of memory.

¹ The inversion of homogeneous transform matrices is less sensitive to noise than regular matrix inversion, since it does not require division. The inversion of Equation B.7 is more problematic.

Table B.1.: Link length estimation results for each trial

    link 1    link 2    link 3    link 4    link 5    number of points
    350.10    648.74    353.54    649.63    350.58    3227

[Figure B.2: two panels, each plotted against the number of observations (1500 to 3000): (a) "Residual error during adaptation", deviation from final [mm] for links 1 to 5; (b) "Averaged squared residual", error estimate [mm²].]

Figure B.2.: Convergence of the link length estimates to the final value: (a) shows the deviation for each link from the final value and (b) shows an error estimate, which was generated by taking the square of (a) and applying a moving average window of 101 points.
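The accumulation scheme of Equations B.11 to B.13 and the stacked least-squares form of Equations B.14 to B.16 can both be sketched on synthetic noisy planar data. This is an illustrative sketch, not the thesis software; the noise level and number of pairs are assumptions. Note that in this simplified setting, where each pair is an independent observation, BᵀB equals the summed Hk exactly, so the two estimates coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = np.array([0.35, 0.10])   # hypothetical joint centre [m]

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Generate orientation pairs with angles well away from 0 and 180 degrees,
# each giving a noisy translation a_k = (I - R_k) p_true + noise.
pairs = []
for theta in np.radians(rng.uniform(15.0, 165.0, size=200)):
    R = rot(theta)
    a = (np.eye(2) - R) @ p_true + rng.normal(0.0, 1e-3, size=2)  # ~1 mm noise
    pairs.append((R, a))

# Equations B.11-B.13: sum H_k and q_k over all pairs, invert once.
H_sum = sum((np.eye(2) - R).T @ (np.eye(2) - R) for R, _ in pairs)
q_sum = sum((np.eye(2) - R).T @ a for R, a in pairs)
p_accum = np.linalg.solve(H_sum, q_sum)

# Equations B.14-B.16: stack [I - R]_k and a_k, then least-squares.
B = np.vstack([np.eye(2) - R for R, _ in pairs])
s = np.concatenate([a for _, a in pairs])
p_stack, *_ = np.linalg.lstsq(B, s, rcond=None)

print(p_accum, p_stack)   # both close to p_true
```

The accumulated form only ever stores a small H matrix and q vector, which is why it is so much cheaper than building and factoring the full stacked matrix B.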
The difference between Equations B.13 and B.16 is that the first ignores cross terms between different observations. The results of both, evaluated on a subset of the collected data, are similar, while the form of Equation B.13 produces a much faster algorithm.

B.3. Results

Table B.1 shows the results and Figure B.2 shows the development of the error during convergence. Based on these graphs, the error in the estimates is very small.

Bibliography

M. L. Aisen, H. I. Krebs, N. Hogan, F. McDowell, and B. T. Volpe. The effect of robot-assisted therapy and rehabilitative training on motor recovery following stroke. Archives of Neurology, 54(4):443–446, 1997.

D. J. Bennett. Stretch reflex responses in the human elbow joint during a voluntary movement. Journal of Physiology, 474(2):339–351, 1994.

N. Bernstein. The Co-ordination and Regulation of Movements. Pergamon, London, 1967.

N. Bhushan and R. Shadmehr. Computational nature of human adaptive control during learning of reaching movements in force fields. Biological Cybernetics, 81(1):39–60, 1999.

C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, Oxford, 1995.

J. M. Brebner and A. T. Welford. Introduction: An historical background sketch. In A. T. Welford, editor, Reaction Times, chapter 1, pages 1–23. Academic Press, London, 1980.

C. A. Buneo, J. F. Soechting, and M. Flanders. Postural dependence of muscle actions: Implications for neural control. Journal of Neuroscience, 17(6):2128–2142, 1997.

E. Burdet, R. Osu, D. W. Franklin, T. E. Milner, and M. Kawato. The central nervous system stabilizes unstable dynamics by learning optimal impedance. Nature, 414:446–449, 2001.

H. P. Clamann. Statistical analysis of motor unit firing patterns in a human skeletal muscle. Biophysical Journal, 9:1233–1251, 1969.

A. C. Davison and D. V. Hinkley. Bootstrap Methods and their Application. Cambridge University Press, New York, 1997.

H. Devanne, L. G. Cohen, N. Kouchtir-Devanne, and C. Capaday.
Integrated motor cortical control of task-related muscles during pointing in humans. Journal of Neurophysiology, 87(6):3006–3017, 2002.

T. J. DiCiccio and B. Efron. Bootstrap confidence intervals. Statistical Science, 11(3):189–212, 1996.

O. Donchin, J. T. Francis, and R. Shadmehr. Quantifying generalization from trial-by-trial behavior of adaptive systems that learn with basis functions: Theory and experiments in human motor control. Journal of Neuroscience, 23(27):9032–9045, 2003.

K. A. Ericsson and N. Charness. Expert performance: Its structure and acquisition. American Psychologist, 49(8):725–747, 1994.

A. G. Feldman. Functional tuning of the nervous system during control of movement or maintenance of a steady posture - II. Controllable parameters of the muscle. Biofizika, 11(3):498–508, 1966. English translation: pp. 565–578.

T. Flash and N. Hogan. The coordination of arm movements: An experimentally confirmed mathematical model. Journal of Neuroscience, 5(7):1688–1703, 1985.

D. W. Franklin and T. E. Milner. Adaptive control of stiffness to stabilize hand position with large loads. Experimental Brain Research, 152(2):211–220, 2003.

D. W. Franklin, R. Osu, E. Burdet, M. Kawato, and T. E. Milner. Adaptation to stable and unstable dynamics achieved by combined impedance control and inverse dynamics model. Journal of Neurophysiology, 90:3270–3282, 2003.

A. M. Gentile. Implicit and explicit processes during acquisition of functional skills. Scandinavian Journal of Occupational Therapy, 5:7–16, 1998.

J. Gleick. Isaac Newton. Pantheon Books, 2003.

H. Gomi and M. Kawato. Equilibrium-point control hypothesis examined by measured arm stiffness during multijoint movement. Science, 272(5258):117–120, 1996.

H. Gomi and M. Kawato. Human arm stiffness and equilibrium-point trajectory during multijoint movement. Biological Cybernetics, 76(3):163–171, 1997.

H. Gomi and R. Osu.
Task-dependent viscoelasticity of human multijoint arm and its spatial characteristics for interaction with environments. Journal of Neuroscience, 18(21):8965–8978, 1998.

G. L. Gottlieb, Q. Song, D. A. Hong, and D. M. Corcos. Coordinating two degrees of freedom during human arm movement: Load and speed invariance of relative joint torques. Journal of Neurophysiology, 76(5):3196–3206, 1996.

P. L. Gribble, D. J. Ostry, V. Sanguineti, and R. Laboissière. Are complex control signals required for human arm movement? Journal of Neurophysiology, 79(3):1409–1424, 1998.

H. Haken. Principles of Brain Functioning: A Synergetic Approach to Brain Activity, Behavior and Cognition. Springer, Berlin, 1996.

A. V. Hill. The heat of shortening and the dynamic constants of muscle. In Series B, Biological Sciences, volume 126. The Royal Society of London, 1938.

A. J. Hodgson. Inferring Central Motor Plans from Attractor Trajectory Measurements. PhD thesis, Massachusetts Institute of Technology, Boston, 1994.

A. J. Hodgson. Optimal design of a two degree of freedom manipulandum for human motor control studies. In Proceedings 18th IEEE EMB Annual International Conference, page 973, Amsterdam, Netherlands, 1996.

A. J. Hodgson and N. Hogan. A model-independent definition of attractor behavior applicable to interactive tasks. IEEE Transactions on Systems, Man and Cybernetics, Part C - Applications and Reviews, 30(1):105–118, 2000.

J. A. Hoffer and S. Andreassen. Regulation of soleus muscle stiffness in premammillary cats: Intrinsic and reflex components. Journal of Neurophysiology, 45(2):267–285, 1981.

N. Hogan. The mechanics of multi-joint posture and movement control. Biological Cybernetics, 52(5):315–331, 1985.

N. Hogan. Control of contact in robots and biological systems. In P. Dario, G. Sandini, and P. Aebischer, editors, Robots and Biological Systems: Towards a New Bionics?, volume 102 of NATO ASI Series, Series F: Computer and Systems Sciences. Springer-Verlag, 1993.
J. Horning. Jim Horning's maxims, 2005. URL http://home.comcast.net/~jhorning4/maxims.html.

M. Kawato. Internal models for motor control and trajectory planning. Current Opinion in Neurobiology, 9(6):718–727, 1999.

B. A. Kay. The dimensionality of movement trajectories and the degrees of freedom problem: A tutorial. Human Movement Science, 7:343–364, 1988.

M. A. Kramer. Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal, 37(2):233–243, 1991.

H. Krebs, M. Ferraro, S. Buerger, M. Newbery, A. Makiyama, M. Sandmann, D. Lynch, B. Volpe, and N. Hogan. Rehabilitation robotics: Pilot trial of a spatial extension for MIT-MANUS. Journal of NeuroEngineering and Rehabilitation, 1(1):5, 2004.

M. L. Latash and G. L. Gottlieb. Reconstruction of shifting elbow joint compliant characteristics during fast and slow movements. Neuroscience, 43(2-3):697–712, 1991.

T. E. Milner. A model for the generation of movements requiring endpoint precision. Neuroscience, 49(2):487–496, 1992.

T. E. Milner. Contribution of geometry and joint stiffness to mechanical stability of the human arm. Experimental Brain Research, 143(4):515–519, 2002.

K.-R. Müller, M. Krauledat, G. Dornhege, G. Curio, and B. Blankertz. Machine learning techniques for brain-computer interfaces. Biomedical Engineering, 49(1):11–22, 2004.

K. M. Newell and D. E. Vaillancourt. Dimensional change in motor learning. Human Movement Science, 20:695–715, 2001.

C. Nooteboom. All Souls Day. Harcourt Inc., first American edition, 2001.

J. F. O'Brien, B. Bodenheimer, G. J. Brostow, and J. K. Hodgins. Automatic joint parameter estimation from magnetic motion capture data. In Graphics Interface, pages 53–60, 2000.

K. Ogata. Modern Control Engineering. Prentice Hall, third edition, 1996.

G. A. Pratt and M. W. Williamson. Series elastic actuators. In Proceedings of the IROS, Pittsburgh, PA, USA, 1995.

R. A. Scheidt, J. B. Dingwell, and F. A. Mussa-Ivaldi.
Learning to move amid uncertainty. Journal of Neurophysiology, 86:971–985, 2001.

S. Schneiberg, H. Sveistrup, B. McFadyen, P. McKinley, and M. F. Levin. The development of coordination for reach-to-grasp movements in children. Experimental Brain Research, 146(2):142–154, 2002.

R. Shadmehr and F. A. Mussa-Ivaldi. Adaptive representation of dynamics during learning of a motor task. Journal of Neuroscience, 14(5 Pt 2):3208–3224, 1994.

R. Shadmehr, O. Donchin, E.-J. Hwang, S. E. Hemminger, and A. Rao. Learning dynamics of reaching. In A. Riehle and E. Vaadia, editors, Motor Cortex in Voluntary Movements: A Distributed System for Distributed Function, chapter 11, pages 297–328. CRC Press, USA, 2005.

J.-J. E. Slotine and W. Li. Applied Nonlinear Control. Prentice-Hall, Inc., Englewood Cliffs, 1991.

M. A. Smith, J. Brandt, and R. Shadmehr. Motor disorder in Huntington's disease begins as a dysfunction in error feedback control. Nature, 403(6769):544–549, 2000.

F. Takens. Detecting strange attractors in turbulence. In D. A. Rand and L.-S. Young, editors, Dynamical Systems and Turbulence, volume 898 of Lecture Notes in Mathematics, pages 366–381. Springer-Verlag, Berlin, 1981.

J. Theiler. Efficient algorithm for estimating the correlation dimension from a set of discrete points. Physical Review A, 36(9):4456–4462, 1987.

T. Wang, G. S. Dordevic, and R. Shadmehr. Learning the dynamics of reaching movements results in the modification of arm impedance and long-latency perturbation responses. Biological Cybernetics, 85(6):437–448, 2001.

D. M. Wolpert, R. C. Miall, and M. Kawato. Internal models in the cerebellum. Trends in Cognitive Sciences, 2(9):338–347, 1998.

R. H. Wurtz and E. R. Kandel. Central visual pathways. In E. R. Kandel, J. H. Schwartz, and T. M. Jessell, editors, Principles of Neural Science, chapter 27, pages 523–547. McGraw-Hill, New York, fourth edition, 2000.
