UBC Theses and Dissertations


Accurate smooth pursuit eye movements lead to more accurate manual interceptions Fooken, Jolande 2015


Full Text

Accurate smooth pursuit eye movements lead to more accurate manual interceptions

by

Jolande Fooken

B.Sc. Physics, RWTH Aachen University, 2010
M.Sc. Biomedical Engineering, RWTH Aachen University, 2013

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Computer Science)

The University of British Columbia (Vancouver)

August 2015

© Jolande Fooken, 2015

Abstract

In ball sports, athletes are taught to keep their eyes on the ball in order to catch or hit it successfully. This intuitive field experience has been studied in the laboratory, indicating that tracking a moving object with smooth pursuit eye movements enhances our ability to predict the object's trajectory in time and space. Similarly, intercepting a moving object critically relies on motion prediction. Here we assessed the functional significance of eye movements for manual interception. In a novel paradigm, we asked observers (n = 32) to track a small moving dot, back-projected onto a translucent screen, and to intercept it with their index finger in a designated 'hit zone'. Only the first part (100-300 ms) of the trajectory was shown; observers therefore had to extrapolate the trajectory and intercept the target at its assumed position anywhere within the hit zone.

Results show that better pursuit (low eye position and velocity error, high velocity gain, few catch-up saccades of small amplitude) leads to more accurate interceptions. A hazard analysis yielded two interception strategies: early interceptors relied on tracking quality and on memory of the feedback given at the end of each trial, while late interceptors depended more on tracking smoothness, small initial saccades, and accurate eye latencies.
Early interceptions (shorter time of target invisibility) yielded smaller 2D interception errors, while interception timing was better after longer periods of smooth tracking (later interceptions). A regression model tree identified low tracking error and small saccadic eye movements as the eye movement parameters that best predict accurate interceptions. Not only do observers benefit from smooth pursuit eye movements during manual interception, but interception accuracy also scales with the quality of the eye movements.

Preface

This dissertation is original, unpublished, independent work by the author, Jolande Fooken.

Experiments presented in this thesis were conducted in UBC's Sensorimotor Systems Laboratory (SSL) and UBC's Neuroscience of Vision and Action (NOVA) Laboratory, supervised by Prof. Dinesh Pai and Dr. Miriam Spering. Prof. Dinesh Pai, Dr. Miriam Spering, Dr. Sang-Hoon Yeo, and I were involved in designing and programming all experimental aspects of the presented study. Additionally, I was responsible for data collection, processing, and analysis.

Results presented in Chapter 3 have partly been presented in the form of a poster at the Society for Neuroscience annual meeting: Fooken, J., Yeo, S.-H., Pai, D. K., Spering, M. (2014). Accurate smooth pursuit eye movements improve hand movements in a manual interception task. Program No. 533.12/HH2. 2014 Neuroscience Meeting Planner. Washington, D.C.: Society for Neuroscience, 2014. Online.

The UBC Behavioural Research Ethics Board approved all procedures related to this work. Ethics board certificate ID: H12-02564.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments

1 Introduction
  1.1 Types of Eye Movements
    1.1.1 Saccadic Eye Movements
    1.1.2 Smooth Pursuit Eye Movements
    1.1.3 Other Types of Eye Movements
    1.1.4 Vision for Action and Perception
  1.2 Visuo-Motor Coordination
    1.2.1 Hitting Moving Objects
    1.2.2 Eye Movements and Manual Interception
  1.3 Linking the Eye to the Hand
2 Methods
  2.1 Eye-Hand Coordination Task
    2.1.1 Participants
    2.1.2 Visual Stimuli and Apparatus
    2.1.3 Experimental Procedure and Design
  2.2 Data Analysis
    2.2.1 Eye and Hand Movement Recordings and Analysis
  2.3 Statistical Methods and Learning
    2.3.1 General Statistical Methods
    2.3.2 Hazard Analysis
    2.3.3 Attribute Selection
    2.3.4 Regression Techniques
    2.3.5 Regression Tree
    2.3.6 Neural Network
    2.3.7 Model Evaluation
3 Results
  3.1 Effects of Target Properties
  3.2 Attribute Selection
  3.3 Finger Interception Accuracy
    3.3.1 Manual Interception and Pursuit Quality
    3.3.2 Temporal Evolution of Tracking Towards Interception
    3.3.3 The Role of Feedback or Memory
    3.3.4 Timing and Spatial Interception Error
  3.4 Statistical Models
    3.4.1 Single Predictor Regression
    3.4.2 Multiple Linear Regression Model
    3.4.3 Regression Model Tree
    3.4.4 Neural Network
    3.4.5 Model Comparison
  3.5 Interception Strategy
4 Discussion
  4.1 Manual Interception Improves With Pursuit Quality
  4.2 Interception Strategy
  4.3 Statistical Models
  4.4 Practical Implications
  4.5 Conclusion
Bibliography

List of Tables

Table 2.1: Constant baseball-specific properties of the simulated fly ball.
Table 3.1: p-values of repeated measures ANOVA for finger attributes, i.e. interception error, finger latency, and peak velocity, with factors speed and presentation duration.
Table 3.2: p-values of repeated measures ANOVA for eye attributes, i.e. 2D tracking error, eye velocity gain, peak velocity, and cumulative saccades, with factors speed and presentation duration.
Table 3.3: Target, eye, and finger attributes for the eye-hand coordination task. Highly correlated measures were reduced to fewer attributes.
Table 3.4: Different regression models for the single predictor model.
Table 3.5: Fitted coefficients for multiple linear regression. The p-values indicate the significance of the different attributes.
Table 3.6: Attribute usage of regression models at terminal leaves of the Cubist tree with 100 committees and a prediction adjustment of 9 instances. The interception error either increases with increasing (⊕) or decreasing (⊖) attribute values. Four variables show mixed effects.
Table 3.7: Feed-forward neural network using Bayesian regularization. Results for different numbers of hidden units (neurons).
Table 3.8: Evaluation of the different statistical models applied.
Table 3.9: Cubist model tree results compared between early and late interceptors. The interception error either increases with increasing (⊕) or decreasing (⊖) attribute values. Some variables show mixed effects.

List of Figures

Figure 1.1: Exemplary eye position (A) and velocity (B) during a saccade with an amplitude of 9.9°, a peak velocity of 240°/s, and a duration of 100 ms.
Figure 1.2: Exemplary eye position (A: blue trace) and velocity (B: green trace) compared to target position (grey dashed trace) during smooth pursuit tracking. Here, the oculomotor system predicts the time of target onset. Accordingly, the eyes begin to move prior to the target.
Figure 1.3: Lateral view of the monkey brain. The traditional descending pursuit pathway is indicated.
MT/MST: middle temporal/middle superior temporal visual area; FEF: frontal eye fields; PON: pontine nuclei; PMN: brain stem premotor nuclei; VN: vestibular nucleus. (Modified from Krauzlis, 2004)
Figure 2.1: Experimental setup: Stimuli are back-projected onto a compact translucent screen (A) using an LCD projector (B). Participants are seated within reaching distance. The head is supported by a chin and forehead rest (C). The finger tracker probe is tightly attached to the index finger (D) and connected to the trakSTAR magnetic tracker (E).
Figure 2.2: Forces acting on a spinning baseball in flight. The drag force F_D counteracts the direction of the velocity vector. The Magnus force F_M acts in the ω × v direction, with ω denoting the angular velocity of the baseball. The gravitational force F_G acts downward. Figure from Nathan (2008).
Figure 2.3: Simulated fly-ball trajectories for three different initial speeds (24.1°/s, 29.3°/s, 34.2°/s) and a constant launch angle (φ = 35°).
Figure 2.4: Trial sequence for non-practice trials: (A) Initial fixation and eye-tracker drift correction. (B) Upon successful fixation (500-700 ms), ball motion onset, either straight (linear block) or parabolic (curved block). (C) Ball disappears after 100, 200, or 300 ms (randomized). (D) Player intercepts at the estimated position in the darker grey strike zone (red cross) and gets feedback of the actual ball position (black dot).
Figure 2.5: Exemplary hazard curve for a single subject. For each time point after stimulus disappearance the hazard level is calculated. Favored interception times for each player can be determined.
Figure 2.6: Terminology of a decision tree. The first node is called the root. Intermediate nodes are reached based on splitting rules. End nodes (no further splits) are called terminal leaves. A node system within one branch of the tree is called a subtree.
Figure 2.7: Five-region (R1, ..., R5) example tree. Recursive binary splitting is done by selecting a predictor variable X_j and a cutpoint s such that the predictor space is split into the regions {X | X_j ≤ s} and {X | X_j > s}. Splitting rules are chosen such that the resulting tree has the lowest RSS.
Figure 2.8: Schematic network diagram of a single hidden layer, feed-forward neural network. The output Y is predicted by a nonlinear model of derived features Z_1, ..., Z_M. These features are linear combinations of the input predictors X_1, ..., X_p (modified from Hastie et al., 2008).
Figure 3.1: Effect of target properties (presentation duration and speed) on finger attributes. Mean values across all players and trials are plotted for the respective conditions. Finger interception error (A), latency (B), and peak velocity (C) are depicted.
The quality of the linear regression fits aresummarized in each panel. . . . . . . . . . . . . . . . . . . . 36Figure 3.5 Relationship between cumulative saccades and interception er-ror averaged across every player and condition. Relationshipsare plotted for the respective presentation durations in panelA-C. Different target speeds are coded in blue (24.1◦/s), green(29.3◦/s), and red (34.2◦/s). The quality of the linear regressionfits are summarized in each panel. . . . . . . . . . . . . . . . 37Figure 3.6 Mean velocity gain values for each player, averaged over forthe slowest speed and every presentation duration (indicatedby symbols). With higher gain, the interception error decreases. 38Figure 3.7 Temporal evolution of relationship between tracking error andinterception error for a presentation duration of 200 ms. Dif-ferent target speeds are coded in blue (24.1◦/s), green (29.3◦/s),and red (34.2◦/s). Trials are aligned at the point of interception(D) and then segmented into equal time intervals of 150 msgoing backwards in time (D-A) . . . . . . . . . . . . . . . . . 39xiFigure 3.8 Relationship between memory and interception error averagedacross every player and condition. Relationships are plottedfor the respective presentation durations in panel A-C. Differ-ent target speeds are coded in blue (24.1◦/s), green (29.3◦/s),and red (34.2◦/s). The quality of the linear regression fits aresummarized in each panel. . . . . . . . . . . . . . . . . . . . 40Figure 3.9 The main dependent measure is the 2D interception error (darkblue). The vertical distance to the simulated trajectory is thespatial error (purple). The distance along the trajectory de-scribes the timing error (green). . . . . . . . . . . . . . . . . 41Figure 3.10 Interception error broken down into a timing and spatial a com-ponent for the three different presentation durations (100 ms:circles, 200 ms: triangles, 300 ms:rectangles) and target speeds(24.1◦/s: blue, 29.3◦/s: green, 34.2◦/s: red). . . . . . 
. . . . . 42Figure 3.11 Effect of target properties (presentation duration and speed)on time and space component of interception error. Both mea-sures are averaged across all players and trials and are shownfor the respective conditions. . . . . . . . . . . . . . . . . . 43Figure 3.12 Boxplots of most important prediction attributes sorted basedon their importance score during random forest regression forthe timing interception error (A) and the spatial interceptionerror (B). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44Figure 3.13 Regression model tree without boosting. Linear regressions(LR) have been fitted at the terminal leafs resulting in 23 rules. 47Figure 3.14 Evaluation of boosting and prediction adjustment parameters.With increasing number of committees the prediction error de-creases. An instance based correction with 9 instances yieldsthe best model fit. . . . . . . . . . . . . . . . . . . . . . . . . 48xiiFigure 3.15 Feed-forward neural network using Bayesian regularization for14 input attributes I1−−I14 and 14 hidden units (neurons)H1−−H14. The weights are color-coded by sign (black +,grey -) and the magnitude of the connections is coded by thick-ness. A bias term feeds into each neuron. The output O1is connected to every neuron via a single weight. Input at-tributes indicated in bold are the attributes with the connectionsof highest magnitude. . . . . . . . . . . . . . . . . . . . . . . 51Figure 3.16 Relationship between the time of invisible flight (from time ofdisappearance to time of interception) and finger interceptionerror. Data shown are for all presentation durations, while thetarget speed is coded by color (24.1◦/s: blue, 29.3◦/s: green,34.2◦/s: red). . . . . . . . . . . . . . . . . . . . . . . . . . . 53Figure 3.17 Hazard level analysis. All players are divided into a groupof early interceptors (N = 17) and late interceptors (N = 15)based on a k-means clustering analysis (A). Within each groupthe hazard levels are averaged (B). 
. . . . . . . . . . . . . . . 54Figure 3.18 Early interceptors (N = 17) are plotted in dark blue and late in-terceptors (N = 15) in light blue. Averaged eye velocity (A) ofeach group across trials of medium speed (29.3◦/s) and longestpresentation duration (300 ms). True target velocity is indi-cated by the dashed grey line. Group comparison of initialfinger displacement (B) and mean finger velocity (B). . . . . . 56Figure 3.19 Average interception errors of early (dark blue filling) and late(cyan filling) interceptors broken down into relative timing (x)and spatial (y) component. Average for each presentation du-ration (symbols) and target speed (colors) as previously coded.Standard error of the mean error bars are included but to smallto be visible. . . . . . . . . . . . . . . . . . . . . . . . . . . 57xiiiAcknowledgmentsFirst of all I would like to thank my supervision team Miriam Spering and Di-nesh Pai for their guidance and assistance throughout the last two years. It neverceases to amaze me how both of you manage to come up with the most valuablecomments, scientific inspiration, and constructive criticism in between other meet-ings, teaching sessions, and picking up the kids.I also would like to thank Paul van Donkelaar for taking the time to read thisthesis and especially for coming all the way from the Okanagan to attend the thesisseminar. Your scientific input is greatly appreciated.Another huge thank you to Terry McKaig who started this cooperation andmade this entire project possible. Also, thanks to the UBC Thunderbirds baseballteam who were excellent, competitive, and highly motivated participants.This goes out to my lab: Thank you all for patiently enduring my humming,singing, self-debating, talking, joking, laughing, and cursing over the last coupleof years in general and during the last two months in particular. 
Debanga (ourlab dinosaur), Cole (the man for emergencies), Prashant, Edwin, Darcy, and Jan-ick thank you all for your constant support and motivation. Kaity, we have beenthrough most of this together and I will miss having you around to brainstorm,chat, and de-stress over the next few years.Lastly, I would like to thank my family. Parents, thanks for asking me repeat-edly how my vacation was going, while I was sweating over this beast. You havetaught me well, especially how to keep up composure and easiness, even when lifegets busy. Jan, I can honestly say that this would not have gone as smoothly withoutyou. You introduced me to three whole new (and mostly very productive) hours ofthe day by getting me out of bed with freshly brewed coffee every morning. MainlyI want to say though - for inspiring me every day - deThankYou.xiv1 IntroductionIn professional baseball, a fastball can be pitched with a velocity of up to100 mph. Yet, it is possible for batters to ‘get a piece of the ball’. When hit-ting a home run, the interplay between sensory information (input) and motor ac-tion (output) is working at its optimum. However, not only professional athletesare interacting with moving objects in everyday life. We are moving through adynamically transforming visual environment. The use of visual sensory informa-tion and cognitive prediction is required to successfully guide motor commandswhile interacting with this constantly changing world. This chapter will give anintroduction on how the visuomotor system is operating to cope with the complexand constantly changing environment around us. First, the functional propertiesof different types of eye movements will be dicussed in Section 1.1, focusing onsaccadic and smooth pursuit eye movements. Next, the visuo-motor coordinationwill be addressed in section Section 1.2, giving an introduction to visual trackingmechanisms for different motor demands. 
Finally, section Section 1.3 will give apreview of the research rationale presented in this thesis.1.1 Types of Eye MovementsWhen inserting a thin thread into a sewing needle, our fixation system pre-vents our eyes from actively moving around, to focus the eye of the needle mostaccurately. So why are we constantly moving our eyes in every day life? The vi-sual environment around us is full of objects of interest and while we pass throughthis world, neither we nor these objects will remain stationary. Thus, humans usea combination of different kind of eye movements to keep up with the dynamicworld around them. In principal, the function of these eye movements is either to1hold the image of interest steady on the retina, or else, to shift the gaze direction ofthe eye to a new point of interest. Four types of eye movements enable stabiliza-tion of the image of the viewed object on the fovea, that is, the retinal region wherevisual acuity is highest; smooth pursuit eye movements, vergence eye movements(tracking objects in depth), the vestibulo-ocular reflex, and the optokinetic nys-tagmus. The latter two are evolutionary older involuntary reflexes. Furthermore,gaze shift or eye reset is accomplished by quick phases of nystagmus or saccadiceye movements. These functionally different types of eye movements complementeach other in natural situations (e.g. Heinen & Keller, 2004; Krauzlis, 2004).In the following, functional and physiological properties of these differenttypes of eye movements will be discussed in more detail. However, this sectionwill focus on voluntary eye movements that primates mainly use when tracking orshifting gaze to objects of interest: saccadic and smooth pursuit eye movements.1.1.1 Saccadic Eye MovementsWhen looking for Waldo in a busy visual scene of dozens or more people, oureyes search in a series of fixations connected by quick, ballistic eye movementscalled saccades. 
Saccades rapidly redirect the fovea from one object of interest to-wards another and correct for errors between eye and target position (Dodge, 1903;Sparks & Mays, 1990). Visual perception is actively suppressed during these re-locations of the fovea, presumably to avoid motion blur (Carpenter, 1988). Thedistinct velocity profile of a saccade follows a standard waveform consisting of asingle smooth increase and decrease (figure 1.1). Saccade peak velocities can be upto 900 deg/s while their duration remains rather short (30-100 ms) (Leigh & Ken-nard, 2004). Saccades show a consistent relationship between peak velocity andamplitude as well as duration and amplitude, called the main sequence (Becker &Fuchs, 1969; Bahill et al., 1975). As a single saccade is a very short eye move-ment, it cannot be controlled by visual feeback. Instead, saccades are regulatedby an internal feedback loop based on an efference copy of the motor commandsent to the motoneurons (Bridgeman, 1995). Remarkably, the latency to initiate asaccade is relatively long with up to 200 ms for unexpected target motion, suggest-ing that these discrete eye movements are not just a reflex but require significant2Eye position [deg] Eye velocity [deg/s] Time [ms] Time [ms] A B Figure 1.1: Exemplary eye position (A) and velocity (B) during a saccadewith an amplitude of 9.9◦, a peak velocity of 240 ◦/s, and a duration of100 ms.preparation by the central nervous system.1.1.2 Smooth Pursuit Eye MovementsWhile playing video games such as Pac-Man or Pong we are confronted withtargets (i.e. pac man himself or the pong ball) that are constantly in motion. Whentracking these moving objects, observers will naturally follow them with smoothpursuit eye movements, a slow rotation of the eyes to compensate for the target’smovement. These continuous eye movements are primarily driven by visual mo-tion (Rashbass, 1961; Lisberger et al., 1987; Robinson, 1965). 
Smooth pursuit eyemovements not only shift the gaze to compensate for the motion of a tracked ob-ject, but also hold an object steady on the fovea during slow body motion or headrotation (Ilg, 1997; Carpenter, 1988).Pursuit eye movements are considerably slower than saccadic eye movements.Human observers are able to track targets moving between 1-100 deg/s (Meyeret al., 1985). However, for target velocities exceeding 30 deg/s pursuit is often notquick enough and will be complemented by so called catch-up saccades (de Brouweret al., 2002). Thus, a combination of smooth pursuit tracking and catch-up saccadesis used to compensate for retinal slip , i.e. the error between eye velocity and targetvelocity. The appearance of a moving stimulus at 30 deg/s elicits pursuit eye move-ments with a latency of about 100-150 ms (Carl & Gellman, 1987; Lisberger et al.,31987; Robinson, 1965). Similarly, an unexpected change in the target’s trajectorywould result in an analogous delay (Schwartz & Lisberger, 1994). The magni-tude of the pursuit latency depends on visual target properties, such as size andluminance (Tychsen & Lisberger, 1986), as well as predictability of the target tra-jectory (Bahill & McDonald, 1983). If the future target trajectory is predictable, theoculomotor system will anticipate the specific target trajectory and initiate smoothpursuit even earlier than target onset (see figure 1.2 A) (Kowler, 1989; Barnes &Asselman, 1991).Eye position [deg] Time [ms] A Eye velocity [deg/s] Time [ms] B Figure 1.2: Exemplary eye position (A: blue trace) and velocity (B: greentrace) compared to target position (grey dashed trace) during smoothpursuit tracking. Here, the oculomotor system predicts the time of targetonset. Accordingly, the eyes begin to move prior to the target.The smooth pursuit response is separated into two intervals: first, the open-loopor initiation phase, and second, the closed-loop or maintenance phase (Lisbergeret al., 1987; Tychsen & Lisberger, 1986). 
During the first ∼100 ms, pursuit eye movements are mainly driven by the visual motion of the target, i.e. the retinal image velocity. During this open-loop phase, the eye initially accelerates in the direction of the target (first 0-20 ms) and later (20-100 ms) adjusts to the target's velocity (see Figure 1.2 B). After this initial phase, visual feedback closes the loop; that is, the difference between eye and target motion is minimized by means of negative feedback control. This feedback control could be driven either by an efference copy signal of the eye movement together with the retinal target motion signal, which are compared to stabilize the image of the target on the retina (Crapse & Sommer, 2008), or by proprioceptive feedback, that is, afferent feedback from the stretch receptors in the ocular muscles (Weber & Daroff, 1972). Lisberger et al. (1987) suggested that the continuation of smooth pursuit is attributable to a neural velocity memory that maintains the current speed of the eyes unless visual input provides another command. Ideally, the speeds of the eye and the pursued target match closely, resulting in a velocity gain, i.e. the ratio of eye to target velocity, close to 1. When the tracked target disappears, ongoing pursuit can be maintained, although at a much lower gain (Becker & Fuchs, 1985; Barnes, 2008). In summary, pursuit is driven by visual motion, a negative feedback signal, predictive mechanisms, and other cognitive mechanisms, such as attention, reward, or anticipation (see Barnes, 2008, for a review).

Anatomically, inputs of visual motion arriving at the retina are processed by retinal ganglion cells. From there, the signal is transmitted to the lateral geniculate nucleus (LGN) and subsequently to the early visual cortical areas (V1). Motion signals are then sent to the middle temporal visual area (MT) and the middle superior temporal visual area (MST). These two brain areas are crucial for processing smooth pursuit.
MT has been shown to guide pursuit eye movements, containing neurons that code for acceleration, speed, and direction of target motion (Lisberger & Movshon, 1999), and has also been related to the perception of motion (Newsome & Pare, 1988). The adjacent area MST has been shown to play an important role in pursuit maintenance (Dürsteler & Wurtz, 1988), and there is evidence that MST neurons also respond to extraretinal (i.e. no image motion on the retina) signals during pursuit (Ilg & Thier, 2008). Next, the target motion information is passed on to the frontal brain areas, namely the frontal eye field (FEF), where initiation and maintenance of pursuit are facilitated. From the FEF, the signal is mediated to the pons of the brainstem, in particular the pontine nuclei (PON), and finally passed on to the cerebellum (compare Figure 1.3). From here a motor command is sent to the extraocular muscles to move the eye. This pathway is similar for pursuit and saccadic eye movements. The anatomical substrates of both systems, and a detailed discussion of their differences and similarities, are reviewed elsewhere (e.g. Krauzlis, 2004, 2005).

Figure 1.3: Lateral view of the monkey brain. The traditional descending pursuit pathway is indicated. MT/MST: middle temporal/middle superior temporal visual area; FEF: frontal eye fields; PON: pontine nuclei; PMN: brain stem premotor nuclei; VN: vestibular nucleus. (Modified from Krauzlis, 2004)

1.1.3 Other Types of Eye Movements

When driving through the prairies of interior Canada, the visual scene becomes rather stationary and simple. Staring out the car window, it seems as if there is no need to voluntarily view any particular point in this endless nothingness. Yet, the observer's eyes will move in a sawtooth-like pattern. These involuntary eye movements are due to the optokinetic reflex (OKR), which consists of two phases: a slow, continuously following eye movement, as well as a fast, discrete resetting of the eye position (Collewijn, 1969).
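The error-driven stabilization discussed in this section (closed-loop pursuit maintenance, and likewise the slow following phase of the OKR) can be caricatured as a discrete-time proportional feedback loop. This is a toy sketch for intuition only; the gain constant and step size are arbitrary and not fitted to any data in this thesis:

```python
# Toy negative-feedback model: the eye accelerates in proportion to the
# current velocity error (retinal slip). All constants are arbitrary.

def simulate_closed_loop(target_vel, n_steps=50, dt=0.01, k=20.0):
    """Return the simulated eye velocity trace (deg/s) under proportional control."""
    eye_vel = 0.0
    trace = []
    for _ in range(n_steps):
        slip = target_vel - eye_vel      # velocity error driving the loop
        eye_vel += k * slip * dt         # negative feedback update
        trace.append(eye_vel)
    return trace

trace = simulate_closed_loop(30.0)
gain = trace[-1] / 30.0                  # velocity gain settles near 1.0
print(round(gain, 3))
```

With proportional feedback alone the simulated gain converges toward 1; lowering k (or adding a processing delay) produces the sub-unity gains characteristic of real tracking reflexes.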
This reflex is evoked by stimulation of a wide visual field, that is, by large regions of the viewed image moving together. Thus, the OKR is a feedback system, driven by the error between desired (stabilized image on the retina) and actual image speed.

Another type of involuntary eye movement is caused by the vestibulo-ocular reflex (VOR). This reflex is not driven by an external visual scene, but by the vestibular system. It serves to stabilize the image on the retina during head movements (Ilg, 1997). The semicircular canals of the inner ear sense the head movement velocity and subsequently conduct the signal to the eye muscles via the vestibular and oculomotor nuclei. For natural head movement velocities the gain of the VOR is approximately 0.7; that is, the evoked eye movement counteracts the rotation of the head (Leigh & Zee, 1999).

1.1.4 Vision for Action and Perception

The different types of eye movements previously discussed can be viewed as tools that primates use to navigate through complex visual scenes. However, how do we eventually identify Waldo, and how do we keep Pac-Man from being eaten by the ghosts chasing him? The visual information available to us is used to establish a perception of the world around us. Based on earlier work of Mishkin & Ungerleider (1982), Goodale & Milner (1992) proposed two separate cortical pathways for visual processing: the ventral, or vision-for-perception, stream mediates the perceptual identification and recognition of objects, while the dorsal, or vision-for-action, stream facilitates the localization and the sensorimotor transformations required for visually guided actions (i.e. eye movements) towards those objects.
This view of partly independent processing of visual perception and control of motor action has caused controversy in the literature and has been challenged by others (e.g. Franz et al., 2000).

Nonetheless, a strong link between pursuit eye movements and perception has been reported in numerous studies (see Spering & Montagnini, 2011, for review). In particular, smooth pursuit eye movements have been reported to enhance perception of moving objects in time (Bennett et al., 2010) and space (Spering et al., 2011). In the former study, Bennett et al. (2010) investigated the judgment of the 'time to contact' of a moving stimulus with a given, spatially fixed target in a pursuit versus fixation condition. They found a perceptual advantage of smooth tracking in this particular time-dependent prediction-motion task. Similarly, Spering et al. (2011) introduced a paradigm called 'eye soccer', in which the perceptual ability to judge whether a visual target (i.e. the ball) would hit or miss a vertical line segment (i.e. the goal) was compared between a fixation and a pursuit condition. Accordingly, subjects either fixated the ball while the goal was moving towards the fixation point, or they tracked the ball moving towards the stationary goal. In both cases, ball and goal were presented only briefly (100-500 ms). The judgment of 'hit' or 'miss' trials was more accurate for pursuit than for fixation trials, and the authors thus concluded that pursuit enabled a more precise estimate of the predicted spatial target trajectory.

1.2 Visuo-Motor Coordination

While attempting to kill a spider that is quickly running over the kitchen counter, the brain integrates visual feedback information and prediction of the spider's path to trigger the deadly slap. However, how are we able to strike at exactly the right time and place?
Despite the seeming effortlessness with which the spider's life is ended, the neural control of this action appears to be rather complex, involving a fine-tuned interplay between visual feedback signals and experience-based predictive signals (Van Donkelaar et al., 1992; Brenner et al., 1998; Brouwer et al., 2002; de Lussanet et al., 2004; Zago et al., 2009; Soechting & Flanders, 2008). The implicit role of eye movements in hand movement tasks, such as hitting, manual tracking, pointing, or intercepting, will be discussed in the following.

1.2.1 Hitting Moving Objects

To successfully intercept a moving target, the hand must meet the object along its natural path. The mapping of three-dimensional object motion onto the two-dimensional retina represents the so-called inverse problem of vision (Palmer, 1999). Generally, this is an ill-posed problem, that is, one retinal image could be produced by an infinite number of possible real objects. At the same time, a single given object can cause several different retinal images depending on viewpoint, spatial occlusion, illumination, and so on. One theory of how the brain addresses this problem was developed by Gibson (1979), stating that due to physical laws the solution to the inverse problem is constrained in such a way that ecologically impossible solutions become irrelevant. According to Gibson, the relevant information is carried through an optic array, specified by the pattern of light coming from the environment (for a summary see Zago et al., 2009). Later, Lee (1980) advanced Gibson's idea that information available in the optic flow field is used to control activity, hypothesizing that the visual and motor systems are functionally inseparable, being components of a unified perceptuo-motor system. Lee et al.
(1983) also revived Gibson's idea of the optic variable tau (τ), the ratio between image size and its expansion velocity, which approximates the time an approaching object will take to reach the potential catcher or hitter (Lee et al., 1983; Savelsbergh et al., 1991; Brouwer et al., 2003). While some studies suggest that subjects initiate their movement when τ reaches a critical value, other studies show shortcomings of the τ theory (see Tresilian, 1999, for review). Yet, all of these models share a common approach: based on a critical time-to-contact variable, an optimal time to initiate an interceptive motor action (e.g. to catch or hit a moving object) is identified, while the spatial outcome of the movement is not addressed.

On the one hand, it has been shown that moving the hand rapidly improves the temporal accuracy when intercepting a moving target (Schmidt, 1969; Newell et al., 1979; Tresilian et al., 2003). On the other hand, quick motor actions reduce the spatial accuracy of the interception (Fitts & Peterson, 1964). This is known as the speed-accuracy trade-off. The spatial and temporal aspects of intercepting moving objects with respect to hand movement characteristics, such as reaction time, velocity, acceleration, or initial path, have been studied extensively. However, fewer studies consider the role of eye movements.

1.2.2 Eye Movements and Manual Interception

Bahill & Laritz (1984) posed the research question why batters cannot keep their eyes on the ball. They monitored the eye movement strategies of graduate students compared to a professional baseball player when hitting a simulated fastball (60-100 mph). While the graduate students used different and very inconsistent strategies, such as preliminary head movements and anticipatory saccades of various sizes, the professional baseball player tracked the ball with the same combination of head and eye movements on each trial.
The reported smooth pursuit tracking velocity of the professional player was significantly higher, enabling his eyes to keep up with the simulated target for longer. Similarly, Land & McLeod (2000) conducted a study in which they compared eye movements in professional and amateur cricket batsmen. They found that batsmen generally view the ball closely up to the moment the bowler releases it, then make a predictive saccade to the place where they expect it to bounce off the ground, wait for it to arrive, and subsequently track its trajectory for 100-200 ms after the bounce. Again, they found more consistent strategies in professional batsmen compared to amateurs, as well as a shorter latency for the first predictive saccade. Other studies confirm that athletes use a combination of smooth tracking and saccadic eye movements when hitting or catching balls (Ripoll et al., 1986; McKinney et al., 2010; Land & Furneaux, 1997).

Successfully intercepting a moving object critically relies on the ability to predict the target's future location. Extrapolation of the target's path relies on visual information about its location, velocity, and even acceleration (Brouwer et al., 2002; Eggert et al., 2005; Soechting et al., 2009; Port et al., 1997; Delle Monache et al., 2014). Furthermore, experience from previous trials, and thereby the use of memory, plays an important role in manual interception tasks (Brouwer et al., 2005; Brouwer & Knill, 2007; Issen & Knill, 2012).

Several studies have suggested that smooth pursuit eye movements are beneficial for successful manual interception. Mrotek (2013) examined the change of smooth pursuit eye movements when intercepting moving targets that underwent speed perturbations at various times. They found a similar response in hand and eye movements: both smooth pursuit and finger movements responded more quickly when the target speed perturbation occurred earlier in the trial.
Based on their results, they concluded that an active process of visual target path extrapolation guides eye as well as hand movements. In an earlier study, Mrotek & Soechting (2007b) examined characteristics of eye-hand coordination in a manual interception task. Here, subjects intercepted a given trajectory by moving their index finger from a fixed starting position at the bottom of the screen along its surface. They were free to initiate the movement at any time. Interestingly, subjects tracked the target's trajectory right until the point of interception with high-gain smooth pursuit eye movements without being instructed to do so. Furthermore, the probability of catch-up saccades was considerably smaller after onset of the manual interception. Brenner & Smeets (2011) reported that subjects are unable to hold their gaze on a set fixation point just before hitting a moving target, even if they are instructed to do so. All these findings stress the importance of eye movements for successful manual interceptions.

1.3 Linking the Eye to the Hand

As discussed in section 1.1.4, tracking a moving object with smooth pursuit eye movements enhances the observer's ability to predict its future path. Additionally, observers use smooth tracking eye movements in manual interception tasks without being instructed to do so, indicating that this type of eye movement is advantageous (Mrotek & Soechting, 2007b). Moreover, the opposite approach also holds true: adding hand tracking to an eye-tracking task improves eye movement accuracy (Gauthier et al., 1988; Koken & Erkelens, 1992). Thus, a close coupling between eye and hand movement performance seems plausible.

In the past, several studies have addressed this interplay between ocular and manual strategies and performance. However, a shortcoming of these studies is often the unnaturalness of the experimental design, and of the hand movements in particular. Delle Monache et al.
(2014) investigated whether interceptive performance was related to oculomotor behavior. However, the actual task was carried out by moving a virtual baseball player along the horizontal plane of a simulated baseball field using a computer mouse, and the interception was triggered by a button click. Arguably, this task engages the visuo-motor system in a different way than a fully executed hand movement directed towards the target. Similarly, Brenner & Smeets (2011) used a stylus, which had to be slid across a drawing tablet to intercept the moving target. Even though more extensive hand movements are enabled in this task, the two-dimensional restriction within a plane is still unnatural. In a different study, Brenner & Smeets (2010) posed the research question 'do eye movements matter when intercepting moving objects'. They compared the spatial position of a manual interception between fixation (on a static point) and smooth pursuit (of the moving target) trials. However, subjects were unable to see their hand during movement and received no feedback about their performance, again limiting the applicability. Johansson et al. (2001) looked at eye-hand coordination in a goal-directed bar movement to a target that had to be contacted. Subjects were instructed to grasp the bar at its right end and move it so that the left end made contact with the target. This task was performed with and without obstacles. Subjects fixated on critical points such as the grasp site on the bar, the final target, or the obstacles, rather than the hand or the moving bar. The authors concluded that the gaze strategy was linked to hand movement planning by directing the hand to key positions when moving the object to a fixed, stationary target.

In conclusion, many of these studies have focused on hand movements rather than on the functional significance of eye movements. This study aims to link the quality of observers' eye movements to the quality of the interception.
Furthermore, eye and hand movement strategies will be identified and again linked to the most important eye movement characteristics.

2 Methods

This chapter introduces a new paradigm to investigate the relationship between eye movements and predictive, intercepting hand movements. Section 2.1 describes the experimental design, specific task, and data collection in detail. Subsequently, the analysis of the collected data is discussed in section 2.2. Section 2.3 summarizes the methodology of different types of data-driven computational models.

2.1 Eye-Hand Coordination Task

The core of the experimental methods is a novel paradigm that was developed to explore the coordination of eye and hand movements in greater detail. In particular, subjects were asked to track a small moving dot (the ball), back-projected onto a translucent screen, and to manually intercept its trajectory as accurately as possible in time and space. The ball disappeared after launching, and observers were instructed to intercept the ball with their index finger after it would have entered a designated hit zone. Thus, the task requires the ability to extrapolate visual motion trajectories in order to produce an accurate motor response.

2.1.1 Participants

32 players (mean age 19.7 ± 1.4 yrs) of the 2013/2014 UBC Thunderbirds varsity baseball team participated in the study. Each player gave written informed consent prior to the experiment. All observers were unaware of the purpose of the experiment. Experiments were in accordance with the principles of the Declaration of Helsinki and approved by the Behavioural Research Ethics Board of the University of British Columbia (ID: H12-02564). Of the 32 male players, 27 reported being right-handed and 5 left-handed. Visual acuity, contrast sensitivity, stereo vision, and color vision were tested with standardized vision tests (Bailey-Lovie high-low visual acuity eye chart, Randot stereo vision test, Ishihara color vision test) prior to the experiment.
All observers had normal stereo and color vision. Except for two players (visual acuity of 20/32 and 20/25 on the ETDRS acuity chart), all players had normal or better than normal vision. The team average ETDRS score was 20/16, and the contrast sensitivity team average was 20/.

2.1.2 Visual Stimuli and Apparatus

The stimulus was back-projected using a Vivid LX20 LCD projector (Christie Digital Systems, USA) with a refresh rate of 60 Hz onto a translucent screen that consisted of a non-distorting projection screen material (Twin White Rosco screen for front and rear projection) clamped onto a solid glass plate and fixed in a compact aluminum frame. The displayed window was 48.5 (H) × 38.8 (V) cm in size with a resolution of 1280 (H) × 1024 (V) pixels. Observers were seated in a dimly lit room at 46.25 cm distance from the screen with their head supported by a combined chin- and forehead-rest (see figure 2.1). Observers viewed stimuli binocularly. A magnetic tracker probe was tightly attached to the participant's index finger (figure 2.1 D). To avoid obstruction, the tracker cable was fixed on a fitted glove.

Figure 2.1: Experimental setup: Stimuli are back-projected onto a compact translucent screen (A) using an LCD projector (B). Participants are seated within reaching distance. The head is supported by a chin and forehead rest (C). The finger tracker probe is tightly attached to the index finger (D) and connected to the trakSTAR™ magnetic tracker (E).

The stimulus display was controlled by the EyeLink® host computer (graphics card: NVIDIA GeForce GT 430), and the experiment was programmed in Matlab 7.1 (Mathworks, Natick, MA) using Psychtoolbox 3.0.8. The ball (black Gaussian dot, SD = 0.38) moved across a gray background equally divided into a lighter (35.87 cd/m²) and a darker (31.45 cd/m²) grey zone. The ball's velocity was set to three different speeds (see table 2.1 for details). Participants performed the task with both hands (randomized block order across all players).
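The viewing geometry above (48.5 cm wide display, 1280 px, viewed from 46.25 cm) fixes the conversion between on-screen distances and degrees of visual angle. A minimal sketch of that conversion (the function names and the rounding are my own, not from the thesis, which did this in Matlab):

```python
import math

def cm_to_deg(size_cm, distance_cm=46.25):
    """Convert an on-screen extent (cm) to degrees of visual angle."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def deg_to_px(deg, distance_cm=46.25, screen_cm=48.5, screen_px=1280):
    """Convert degrees of visual angle to horizontal pixels."""
    size_cm = 2 * distance_cm * math.tan(math.radians(deg) / 2)
    return size_cm * screen_px / screen_cm

# Full horizontal display extent in degrees of visual angle.
full_width_deg = cm_to_deg(48.5)
```

At this distance the full display spans roughly 55° horizontally, so the ±14° fixation positions used in the task sit well inside the screen.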
The ball moved from left to right for right-handed trials and from right to left for left-handed trials, respectively. The trajectory type (linear or curved) was varied block-wise.

For linear trials, the ball followed a straight path in the horizontal plane (y = 0) with the initial fixation point at x = ±14°, depending on the motion direction. For curved trials, the initial fixation point remained the same. The subsequent trajectory was simulated as the parabolic flight of a batted baseball on which three forces act: the gravitational force F_G, the drag force F_D, and the Magnus force F_M (compare figure 2.2).

Figure 2.2: Forces acting on a spinning baseball in flight. The drag force F_D counteracts the direction of the velocity vector. The Magnus force F_M acts in the ω×v direction, with ω denoting the angular velocity of the baseball. The gravitational force F_G acts downward. Figure from Nathan (2008).

Originally, the trajectory of a fly ball was described by Brancazio (1985):

$$F_D = \tfrac{1}{2}\rho A C_D v^2 \quad (2.1)$$
$$m\ddot{x} = -F_D\cos(\phi) = -F_D\left(\frac{v_x}{v}\right) \quad (2.2)$$
$$m\ddot{y} = -F_D\sin(\phi) - mg = -F_D\left(\frac{v_y}{v}\right) - mg \quad (2.3)$$
$$\ddot{x} = -\kappa v v_x \quad (2.4)$$
$$\ddot{y} = -\kappa v v_y - g, \quad (2.5)$$
$$\text{where } \kappa = \frac{\rho A C_D}{2m}. \quad (2.6)$$

In these equations, F_D denotes the magnitude of the aerodynamic drag force, ρ the air density, A the cross-section of the flying baseball, C_D the drag coefficient, v the ball's velocity, where v_x and v_y are the horizontal and vertical components of the velocity vector, respectively, ẍ and ÿ the horizontal and vertical acceleration components, m the mass of the ball, φ the angle between the velocity vector and the horizontal, and g the gravitational acceleration of the ball (see table 2.1 for more detail). In addition to the aerodynamic drag force, a baseball is exposed to the Magnus force, which is a result of its spin.
A Magnus force F_M was added to the horizontal and vertical accelerations (compare equations 2.7 and 2.8), setting the final path of the simulated curved trajectory to

$$\ddot{x} = -\frac{1}{m}\left(F_D\cos(\phi) - F_M\cos\!\left(\phi + \tfrac{\pi}{2}\right)\right) \quad (2.7)$$
$$\ddot{y} = -\frac{1}{m}\left(F_D\sin(\phi) - F_M\sin\!\left(\phi + \tfrac{\pi}{2}\right)\right) - g \quad (2.8)$$
$$\text{where } F_M = K f v C_D. \quad (2.9)$$

Here, f refers to the frequency with which the simulated ball spins, and K is an empirical constant determined by measurements of a spinning baseball in a wind tunnel by Watts & Ferrer (1987). Note that equation 2.9 only holds for velocities at which the drag coefficient does not vary strongly, that is, for velocities at which a hit fly ball travels (Adair, 2002). Baseball-related constants as well as the initial conditions used for the simulation are summarized in table 2.1. Figure 2.3 shows the full trajectories of the simulated fly balls for the three initial speeds chosen.

Table 2.1: Constant baseball-specific properties of the simulated fly ball.

Variable name | Value | Source
Air density (20 °C, sea level) | ρ = 1.204 kg/m³ | ICAO manual
Baseball cross section | A = π · 0.0365² m² | Bahill et al. (2005)
Drag coefficient | C_D = 0.3 | NASA research
Mass of baseball | m = 0.145 kg | Adair (2002)
Initial angle of flight | φ = 35° | Adair (2002)
Gravitational acceleration | g = 9.81 m/s² | System of Units
Frequency of ball spin | f = 50 Hz | Adair (2002)
Empirical constant | K = 1.2 · 10⁻³ kg | Watts & Ferrer (1987)
Initial x-y position | [±14.08°, 0] | Experimental design
Initial absolute velocities | 24.1, 29.3, or 34.2°/s | Experimental design

Figure 2.3: Simulated fly-ball trajectories for three different initial speeds (24.1°/s, 29.3°/s, 34.2°/s) and a constant launch angle (φ = 35°).

2.1.3 Experimental Procedure and Design

Each player completed four sessions of two blocks each, that is, linear- and curved-trajectory sessions with the right and the left hand, respectively.
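Returning to the trajectory simulation of section 2.1.2 (equations 2.1-2.9 with the constants in table 2.1): a minimal forward-Euler sketch is shown below. The step size, the metric launch speed, and the landing check are my own illustrative choices; the experiment itself specified speeds in on-screen degrees per second, and the thesis simulation was implemented in Matlab.

```python
import math

# Constants taken from table 2.1 (SI units).
RHO = 1.204              # air density [kg/m^3]
A = math.pi * 0.0365**2  # baseball cross section [m^2]
CD = 0.3                 # drag coefficient
M = 0.145                # mass of baseball [kg]
G = 9.81                 # gravitational acceleration [m/s^2]
F_SPIN = 50.0            # spin frequency [Hz]
K = 1.2e-3               # empirical Magnus constant [kg]

def simulate_fly_ball(speed, phi_deg=35.0, dt=1e-3, t_max=10.0):
    """Forward-Euler integration of equations 2.7-2.8 (drag + Magnus + gravity).

    Returns the (x, y) trajectory in metres until the ball lands (y < 0).
    """
    phi = math.radians(phi_deg)
    x, y = 0.0, 0.0
    vx, vy = speed * math.cos(phi), speed * math.sin(phi)
    path = [(x, y)]
    t = 0.0
    while t < t_max:
        v = math.hypot(vx, vy)
        f_drag = 0.5 * RHO * A * CD * v**2   # eq. 2.1
        f_magnus = K * F_SPIN * v * CD       # eq. 2.9
        # cos(phi + pi/2) = -sin(phi) and sin(phi + pi/2) = cos(phi),
        # with cos(phi) = vx/v and sin(phi) = vy/v along the flight path.
        ax = -(f_drag * vx / v + f_magnus * vy / v) / M
        ay = -(f_drag * vy / v - f_magnus * vx / v) / M - G
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
        path.append((x, y))
        if y < 0:
            break
    return path

path = simulate_fly_ball(speed=30.0)  # launch speed in m/s (illustrative)
```

With backspin, the Magnus term lifts the ball against gravity early in flight, which is what bends the simulated curved trajectories away from a pure drag parabola.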
At the start of each two-block session, players performed 27 pursuit-only baseline trials and 9 manual interception practice trials with the entire trajectory visible. The sequence of events during the actual experiment is shown in figure 2.4. The task was to predict the path of the ball after it disappeared and to intercept it upon entering a designated hit zone as accurately as possible in time and space. In a given trial, the trajectory type and interception hand were known, while the ball's speed and presentation duration were randomly interleaved. Three ball speeds (24.1°/s, 29.3°/s, or 34.2°/s) and three presentation durations (100 ms, 200 ms, or 300 ms) yielded 9 different conditions. The initial horizontal position of the fixation spot was at −14° for right-handed trials and at +14° for left-handed trials, while the vertical position was at 0°. The ball's motion started upon a successful fixation: the subject had to fixate on the ball within a radius of < 2.8° for a randomly chosen time between 500 and 700 ms (drift correction).

Figure 2.4: Trial sequence for non-practice trials: (A) Initial fixation and eye-tracker drift correction. (B) Upon successful fixation (500-700 ms), ball motion onset, either straight (linear block) or parabolic (curved block). (C) Ball disappears after 100, 200, or 300 ms (randomized). (D) Player intercepts at the estimated position in the darker grey strike zone (red cross) and receives feedback of the actual ball position (black dot).

2.2 Data Analysis

Data were collected in real time using the EyeLink® tower.
Subsequent data analysis was carried out with Matlab (R2014a) and R version 3.2.0 running on Windows 7 Enterprise.

2.2.1 Eye and Hand Movement Recordings and Analysis

Eye position was monitored with a tower-mounted, video-based eye tracker (EyeLink® 1000; SR Research Ltd., Ottawa, Ontario, Canada) and sampled at 1000 Hz. Index finger position was recorded with a magnetic tracker (3D Guidance trakSTAR, Ascension Technology Corporation, Vermont, USA) with a sampling rate of 240 Hz. Eye and finger velocity were obtained by digital differentiation of the respective eye and finger position signals over time. The 2D finger interception position was recorded in x- and y- screen-centered coordinates for each trial. Trials in which the point of interception was not detected due to technical error were excluded. Eye movements were analyzed off-line using custom-made routines in Matlab. Eye position and velocity profiles were filtered using a low-pass, second-order Butterworth filter with cutoff values of 15 Hz (position) and 30 Hz (velocity). In each trace, saccades were detected using customized criteria: five consecutive frames had to exceed a fixed velocity criterion of target speed ± 50°/s. Precise on- and offsets were then determined by finding the respective minima and maxima of the eye acceleration (digital differentiation of eye velocity). Saccades were excluded from pursuit analysis. Pursuit onset was detected in individual traces using a piecewise linear function fit to the filtered position trace within a time window between 260 ms before stimulus motion onset and the first saccade onset, or 80 ms after stimulus onset, whichever occurred earlier. We calculated the following open-loop pursuit parameters (pursuit onset to 140 ms after pursuit onset): initial mean and peak velocity and acceleration.
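The saccade-detection criterion just described (five consecutive frames outside target speed ± 50°/s) can be sketched as follows. This is an illustrative reimplementation, not the thesis's Matlab routine; the synthetic trace and all numbers in it are made up for the example:

```python
import numpy as np

FS = 1000  # eye-tracker sampling rate [Hz]

def detect_saccades(eye_vel, target_speed, n_consec=5, margin=50.0):
    """Flag samples where eye velocity leaves the band target_speed +/- margin
    for at least n_consec consecutive frames (the criterion in the text).

    Returns a boolean array marking detected saccade samples.
    """
    fast = np.abs(eye_vel - target_speed) > margin
    out = np.zeros_like(fast)
    run = 0
    for i, is_fast in enumerate(fast):
        run = run + 1 if is_fast else 0
        if run >= n_consec:
            # Mark the whole run once it reaches the required length.
            out[i - n_consec + 1:i + 1] = True
    return out

# Synthetic trace: steady pursuit at 24.1 deg/s with a 30 ms, 300 deg/s saccade.
t = np.arange(0, 0.5, 1 / FS)
vel = np.full_like(t, 24.1)
vel[200:230] = 300.0
mask = detect_saccades(vel, target_speed=24.1)
```

On real data one would first low-pass filter the velocity trace (the thesis used a second-order Butterworth filter) and then refine on- and offsets from the acceleration extrema, as described above.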
Furthermore, the closed-loop gain (140 ms after pursuit onset to the point of interception) and the root-mean-square eye position and velocity errors across the entire trial were determined.

Out of 28556 trials, 243 (0.85%) were excluded due to blinking, 453 (1.59%) were excluded because the final interception position on the screen was not detected, and 57 (0.2%) trials were excluded because the subject moved their hand too early.

2.3 Statistical Methods and Learning

General statistical methods applied to the data set are summarized in sections 2.3.1 and 2.3.2. Furthermore, statistical learning is discussed in the following sections. Supervised learning involves building a statistical model for predicting or estimating an output based on one or more input variables. Accordingly, statistical learning techniques were applied to the data set in order to identify the relationship between collected input (eye and finger measures) and output (finger accuracy) data. Models were trained using the data set D of the 32 players described above and evaluated using a test set D̃ of 10 new players collected in 2014 (a year after the original data collection). These statistical models were built in R (version 3.2.0).

2.3.1 General Statistical Methods

To flag outliers, a standard-score (z-score) analysis was performed on all eye and finger parameters (previously determined in Matlab) across all players and all trials. Trials that deviated from the mean value of each parameter by more than ±3σ were excluded from further analysis. Furthermore, effects of target properties (presentation duration, target speed, and trajectory type) as well as player attributes (handedness and batting side) on the dependent variable (interception error) were tested using repeated measures ANOVA.
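The ±3σ screening described above amounts to a per-parameter z-score cut. A minimal sketch (the data and the parameter name are illustrative, not from the thesis):

```python
import numpy as np

def flag_outliers(values, n_sigma=3.0):
    """Return a boolean mask of trials whose value deviates from the
    parameter mean by more than n_sigma standard deviations."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.abs(z) > n_sigma

# Illustrative: 200 well-behaved trials plus one implausible value.
rng = np.random.default_rng(0)
latencies = rng.normal(150.0, 10.0, size=200)   # e.g. pursuit latencies [ms]
latencies = np.append(latencies, 400.0)         # an extreme trial
mask = flag_outliers(latencies)
```

In the thesis this screening was applied separately to every eye and finger parameter across all players and trials before model fitting.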
Moreover, the correlation between independent and dependent parameters was analyzed by means of regression analysis.

2.3.2 Hazard Analysis

Traditionally, hazard analysis is used to assess the risk of a system becoming hazardous to its environment (Watson & Leadbetter, 1964). However, this so-called survival analysis can generally be used to model any kind of time-to-event critical data. At any given time step, a hazard level between 0 (nothing is occurring) and 1 (all occurrences of the given event) can be calculated. In this case, a hazard analysis was conducted to find the critical point of interception for each player individually. The time series from stimulus motion onset to the longest recorded trial was divided into 50 ms bins. In every time bin, the number of executed interceptions was counted across all trials.

Figure 2.5: Exemplary hazard curve for a single subject. For each time point after stimulus disappearance, the hazard level is calculated. Favored interception times for each player can be determined.

Next, the hazard levels in each time interval were calculated for every player (equation 2.10):

$$H_t = \frac{I_t}{N - \sum_{i=1}^{t-1} I_i}, \quad (2.10)$$

where H_t is the hazard level at time interval t, I_t the number of interceptions counted during that time interval, N the total number of interceptions made, and the sum in the denominator the number of interceptions that occurred in all previous time intervals. Time-dependent hazard levels can be plotted for each player, and the preferred time of interception can be determined (see figure 2.5).

2.3.3 Attribute Selection

Redundant (highly correlated) eye and finger parameters were identified using the caret R package, which provides a findCorrelation function. This function analyzes a correlation matrix of all attributes (eye and finger) in the given data set. Attributes with an absolute pairwise correlation of 0.75 or higher were reduced to one parameter for the subsequent model analysis.
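The correlation-based pruning just described can be sketched as a greedy loop that loosely mimics caret's findCorrelation (this Python version is my own rough analog, not the R implementation; feature names and data are illustrative):

```python
import numpy as np

def prune_correlated(X, names, cutoff=0.75):
    """Repeatedly drop one member of the most correlated column pair
    until no absolute pairwise correlation reaches the cutoff."""
    keep = list(range(X.shape[1]))
    while True:
        corr = np.abs(np.corrcoef(X[:, keep], rowvar=False))
        np.fill_diagonal(corr, 0.0)
        i, j = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[i, j] < cutoff:
            break
        # Drop the column with the larger mean absolute correlation,
        # the same tie-breaking idea findCorrelation uses.
        drop = i if corr[i].mean() >= corr[j].mean() else j
        keep.pop(drop)
    return [names[k] for k in keep]

# Illustrative data: 'peak_vel' is nearly a copy of 'mean_vel'.
rng = np.random.default_rng(1)
mean_vel = rng.normal(size=500)
peak_vel = mean_vel + rng.normal(scale=0.05, size=500)
gain = rng.normal(size=500)
X = np.column_stack([mean_vel, peak_vel, gain])
kept = prune_correlated(X, ["mean_vel", "peak_vel", "gain"])
```

With the near-duplicate pair above, one of the two velocity measures is removed while the uncorrelated gain attribute survives, which is exactly the reduction the thesis applied before feeding attributes to the Boruta selection step.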
These uncorrelated eye and hand movement attributes were then further investigated using the Boruta R package, a feature selection algorithm that aims to identify all relevant attributes (Kursa & Rudnicki, 2010). The algorithm implemented in the Boruta package is a wrapper built around a random forest regression algorithm (for more detail see Liaw & Wiener, 2002). The method adds randomly designed shadow attributes containing shuffled values of the original values across all predictors. Attributes are considered relevant if the random forest ranks their importance higher than that of the shadow attributes.

2.3.4 Regression Techniques

Linear regression is a simple tool for predicting a quantitative response. In particular, a multiple linear regression model will serve as a baseline for relating all predictor variables X_j (eye and finger measures) to the response variable Y (finger accuracy). The multiple linear regression for p distinct predictors takes the form

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_p X_p + \varepsilon. \quad (2.11)$$

Here, β_0 is the expected value of Y when X = 0 (intercept term), β_j quantifies and weights the link between the jth predictor variable and the response, and ε is a mean-zero random error term. The parameters are estimated using a least squares approach: β_0, β_1, ..., β_p are chosen to minimize the sum of squared residuals

$$RSS = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2 \quad (2.12)$$
$$= \sum_{i=1}^{n}(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_{i1} - \hat{\beta}_2 x_{i2} - \dots - \hat{\beta}_p x_{ip})^2, \quad (2.13)$$

where the multiple least squares regression coefficient estimates minimize equation 2.13 (James et al., 2013).

However, the relationship between a single predictor and the response might not be linear. An extension of the linear regression model is polynomial regression, that is, the replacement of the linear with a polynomial function. The polynomial regression model output y_i for a single predictor x_i is computed as

$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3 + \dots + \beta_d x_i^d + \varepsilon_i. \quad (2.14)$$
Here, ε_i is the error term and d the degree of the polynomial function, for instance d = 3 for a cubic regression model. A polynomial regression model fitted to the single best predictor will serve as another baseline model.

2.3.5 Regression Tree

In general, a decision tree segments the predictor space by applying a set of optimized splitting rules. One way of visualizing the partitioning of the predictor space is to draw a schematic tree (compare figure 2.6).

Figure 2.6: Terminology of a decision tree. The first node is called the root. Intermediate nodes are reached based on splitting rules. End nodes (no further splits) are called terminal leaves. A node system within one branch of the tree is called a subtree.

Generally, a regression tree is built following two basic steps (James et al., 2013):

1. Divide the predictor space, i.e. the set of possible values for predictors X_1, X_2, ..., X_p, into J distinct and non-overlapping regions R_1, R_2, ..., R_J. These regions take the shape of boxes. The goal is to find boxes that minimize the sum of squared residuals, given by

$$RSS = \sum_{j=1}^{J}\sum_{i \in R_j}(y_i - \hat{y}_{R_j})^2, \quad (2.15)$$

where ŷ_{R_j} is the mean response of the training observations in the jth box. The division of the predictor space is done by recursive binary splitting: this approach begins at the top of the tree, where all observations belong to a single region, and then successively splits the predictor space on its way down (compare figure 2.7). The tree grows until a set minimum number of observations is reached in the terminal node.

Figure 2.7: Five-region (R_1, ..., R_5) example tree. Recursive binary splitting is done by selecting a predictor variable X_j and a cutpoint s such that the predictor space is split into the two regions {X | X_j ≤ s} and {X | X_j > s}. Splitting rules are chosen such that the resulting tree has the lowest RSS.

2.
Once the regions R_1, ..., R_J have been created, the same prediction is made for every observation that falls into a specific region. The response for a given test observation is predicted using the mean ŷ_{R_j} of the training observations in each R_j.

An extension to the standard regression tree was developed by Quinlan (1992), who introduced the M5 model tree. This method constructs multivariate linear models instead of distinct values at the terminal leaves, equivalent to piecewise linear functions. Later, Wang & Witten (1997) reviewed and revised M5. This model is implemented in the Cubist R function used for regression tree modelling. The tree is constructed through the following steps:

• The initial tree is built using a splitting criterion that investigates the expected error at each node, that is, the standard deviation of the response values reaching that node is treated as the measure of error. Accordingly, the attribute chosen at each node maximizes the standard deviation reduction (SDR), given by

$$SDR = sd(y) - \sum_i \frac{|y_i|}{|y|} \times sd(y_i) \quad (2.16)$$

• To avoid overfitting, the model is then pruned back into a smaller tree with fewer splits. First, the response values for all training instances are predicted from the incoming predictor values at a given node. The absolute difference between these predicted responses and the actual response values is averaged. To prevent underestimation of the expected error, this average is multiplied by an error factor

$$\xi = \frac{n + p}{n - p}, \quad (2.17)$$

where n is the number of training instances (attribute values) and p is the number of predictors that represent the response at the given node.

• A linear regression model (see section 2.3.4) is built at every interior node of the unpruned tree. The regression is fitted using predictor attributes that appear in the subtree below the node of interest. The linear regression models are optimized by dropping predictor terms. Terms are dropped as long
Terms are dropped as long as this reduces the estimated error calculated using equation 2.17; the tree is thereby pruned back, starting from the terminal leaves, until the expected estimated error no longer decreases.

• Next, the model is smoothed to compensate for discontinuities. Starting with the linear model at the terminal leaves, the predicted response values are computed. Each predicted value is then filtered at every node along its path back to the root. In particular, the smoothing joins the predicted value coming into a node with the prediction made at that node:

ρ′ = (nρ + kq) / (n + k),   (2.18)

where ρ′ is the outgoing prediction passed up to the next higher node, ρ is the incoming prediction from the node below, q is the value predicted by the linear model at this specific node, n is the number of training instances, and k is a constant with default value 15 (Wang & Witten, 1997).

• Lastly, boosting can be performed. Boosting is a procedure in which several trees are grown sequentially (James et al., 2013), the information of each tree being used to grow the next one:

f̂(x) = Σ_{b=1}^{B} λ f̂^b(x),   (2.19)

where λ is the shrinkage parameter and B is the total number of trees grown.

2.3.6 Neural Network

In general, an artificial neural network is a nonlinear statistical model for predicting an output variable from one or more predictor variables. The central idea of a neural network is to derive features from a given input and subsequently model the response by fitting a nonlinear function of these features. A neural network is thus a two-stage regression model (compare figure 2.8) that can be thought of as an adaptive basis function method. The structure of a feed-forward neural network (also known as a multilayer perceptron) leads to a response function of the form (Titterington, 2004)

Y = g( w_00 + Σ_{j=1}^{M} w_0j · f( w_j0 + Σ_{k=1}^{p} w_jk X_k ) ) + ε.   (2.20)
Here, ε refers to a Gaussian white noise error term and w_00 describes the output bias. The weights w := {w_jk} connect the input variables X_1, X_2, ..., X_p to the hidden nodes Z_1, Z_2, ..., Z_M, with w_j0 denoting the bias term of each hidden node, while w_0j for j = 1, ..., M is the weight of the connection from the jth hidden node to the output node.

Figure 2.8: Schematic network diagram of a single hidden layer, feed-forward neural network. Output Y is predicted by a nonlinear model of derived features Z_1, ..., Z_M. These features are linear combinations of the input predictors X_1, ..., X_p (modified from Hastie et al., 2008).

The function g(·) specifies the activation function at the output node and is chosen to be the identity function for a continuous response Y. The activation functions at the hidden nodes are defined by f(·). Often the neuron activation function is chosen to be sigmoidal; in this implementation, however, the activation functions are calculated as f(ν) = (e^{2ν} − 1)/(e^{2ν} + 1), which is the hyperbolic tangent.

In practice, the model is fitted using a training dataset D of n training instances (Y_i, X_i), yielding a likelihood function p(D|w). The problem of learning the weights of the feed-forward network was addressed by MacKay (1992), who suggested a Bayesian framework in which the data error is interpreted as a likelihood function and the regularizer corresponds to a prior probability distribution over the weights. The posterior distribution over the weights can be written as

p(w|D, α, β) ∝ p(w|α) p(D|w, β),   (2.21)

where α (regularizing constant) and β (precision constant) are hyperparameters determined by Bayes' rule (for more detail see MacKay, 1992). The brnn R package uses this Bayesian regularization approach to fit a two-layered (input and one hidden layer) feed-forward neural network to the data set D (MacKay, 1992; Foresee & Hagan, 1997). Initial weights are assigned using the Nguyen & Widrow (1990) algorithm.
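A minimal sketch of the forward pass in equation 2.20, using made-up toy weights (the hidden activation (e^{2ν} − 1)/(e^{2ν} + 1) is algebraically tanh(ν), and g is the identity):

```python
import numpy as np

def forward(x, W, b_hidden, w_out, b_out):
    """Single-hidden-layer forward pass of equation 2.20 with identity
    output g. The hidden activation (e^{2v} - 1)/(e^{2v} + 1) equals
    tanh(v). All weights below are arbitrary toy values."""
    z = np.tanh(W @ x + b_hidden)        # derived features Z_1, ..., Z_M
    return float(w_out @ z + b_out)      # identity activation g at output

rng = np.random.default_rng(0)
x = rng.normal(size=3)                   # p = 3 input predictors
W = rng.normal(size=(4, 3))              # M = 4 hidden nodes
y_hat = forward(x, W, np.zeros(4), np.ones(4), 0.5)
```

Fitting the network then means choosing the weights, which is where the Bayesian regularization described next comes in.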
This algorithm aims to distribute the active region of each neuron approximately uniformly across the layer's input space. The optimization of the regularization parameters α and β requires solving the Hessian matrix

H = β∇²E_D + α∇²E_W,   (2.22)

where E_D is the minimized sum of squared errors between data input and network output during training, and E_W is the sum of squares of the network weights. For the brnn neural network, a Gauss-Newton approximation of the Hessian matrix is computed within the Levenberg-Marquardt optimization algorithm (Foresee & Hagan, 1997).

2.3.7 Model Evaluation

To compare the different statistical models, the root mean square error (RMSE) will be determined for each approach described above. The better the model fits the data, the smaller the RMSE, which is given by

RMSE = √( (1/n) Σ_{i=1}^{n} (y_i − f̂(x_i))² ),   (2.23)

where y_i is the actual observation in a test set D̃ and f̂(x_i) is the prediction made for the ith observation. Additionally, the correlation between the actual test values y_i and the predicted values f̂(x_i) is quantified by Pearson's correlation coefficient c, given by

c = cov(y, f̂) / (σ_y σ_f̂),   (2.24)

that is, the covariance of y and f̂ divided by the product of their respective standard deviations.

3 Results

In this section the relationship between eye movements and interceptive hand movements will be explored. First, effects of target properties and other external factors will be analyzed. Then, the significance of different eye and finger measures for interception accuracy will be investigated. Next, pursuit quality across all trials and, in particular, over the time course of a single trial will be evaluated. Furthermore, statistical learning models will be applied to the data set. Finally, different interception strategies will be identified and discussed in detail.

3.1 Effects of Target Properties

As described in section 2.1, players performed the interception task for linear and curved trajectories.
The mean finger interception error differed in magnitude between the two trajectory types (linear: M = 2.19, SD = 1.51; curved: M = 2.36, SD = 1.39). We found a main effect of trajectory type on the finger interception error (F(1,31) = 90.18, p < 0.001) and subsequently analyzed the two data sets separately. Qualitatively, there was no pronounced difference between the two types when relating eye attributes to the finger interception error in the later analyses. However, intercepting the curved trajectory was the more complex task and showed higher variability. Thus, only results from curved trajectory trials are reported from here on.

Players performed the task with both hands. The data set was split into natural (interception with the player's strong hand) and unnatural (interception with the player's weak hand) trials. Subsequently, the mean interception errors of right- and left-handed players were compared for the natural and the unnatural case. A two-sample, two-tailed t-test showed no difference in means between the two groups. Accordingly, data were averaged across all (right- and left-handed) players.

Effects of target properties (presentation duration and target speed) on the interception error, finger latency, and finger peak velocity are summarized in table 3.1. Target speed had a significant effect on all three finger measures. Target presentation duration had a significant effect on the interception error, but not on finger latency or peak velocity. The interaction between speed and presentation duration had a significant effect on the interception error and the finger peak velocity.

Table 3.1: Repeated measures ANOVA for finger attributes (interception error, finger latency, and finger peak velocity) with factors speed and presentation duration (pres. dur.).

Finger attribute       Speed               Pres. dur.           Speed × Pres. dur.
                       F(1,31)   p         F(1,31)    p         F(1,31)   p
Interception error     73.09     < 0.001   491.85     < 0.001   56.68     < 0.001
Finger latency         123.16    < 0.001   0.85       0.36      2.07      0.15
Finger peak velocity   579.63    < 0.001   0.52       0.47      31.08     < 0.001

Figure 3.1 depicts the mean values of these finger attributes averaged across all players and trials for each condition. The interception error shows a speed range effect (figure 3.1 A): it is lowest for the medium target speed (29.3°/s). The effect of target presentation duration is also visible: the finger error decreases with longer presentation duration.

Figure 3.1: Effect of target properties (presentation duration and speed) on finger attributes. Mean values across all players and trials are plotted for the respective conditions: finger interception error (A), latency (B), and peak velocity (C).

Finger latency and peak velocity mainly depend on target speed. The finger latency decreases with increasing target speed (figure 3.1 B), while the finger peak velocity increases for higher target speeds (figure 3.1 C).

Main effects of target properties on selected eye attributes (tracking error, velocity gain, eye peak velocity, and cumulative saccades) are summarized in table 3.2. Target presentation duration and the interaction between target speed and presentation duration had a main effect on all selected eye attributes. Target speed had a main effect on eye velocity gain, peak velocity, and cumulative saccades, but not on the tracking error.

Table 3.2: Repeated measures ANOVA for eye attributes (2D tracking error, eye velocity gain, eye peak velocity, and cumulative saccades) with factors speed and presentation duration (pres. dur.).

Eye attribute    Speed               Pres. dur.            Speed × Pres. dur.
                 F(1,31)   p         F(1,31)     p         F(1,31)   p
Tracking error   < 0.01    0.96      1771.33     < 0.001   42.75     < 0.001
Velocity gain    453.84    < 0.001   863.21      < 0.001   7.26      0.007
Peak velocity    32.59     < 0.001   736.70      < 0.001   57.76     < 0.001
Cum. saccades    996.31    < 0.001   342.02      < 0.001   78.46     < 0.001

Similar to the finger interception error, the eye tracking error (average 2D error between target and eye position across the entire trial) shows a speed range effect (figure 3.2 A). The eye velocity gain (ratio of eye to target velocity) systematically increases with longer presentation durations and decreases for faster target speeds (figure 3.2 B). The eye peak velocity increases with longer presentation durations (figure 3.2 C). Interestingly, the eye peak velocity decreases with increasing speed for the 100 ms presentation duration, while it scales positively with speed for the 200 and 300 ms presentation durations. The cumulative saccades (sum of all saccade amplitudes across each trial) increase for higher speeds and slightly decrease for longer presentation durations (figure 3.2 D).

Figure 3.2: Effect of target properties (presentation duration and speed) on eye attributes. For each attribute, i.e. tracking error (A), eye velocity gain (B), eye peak velocity (C), and cumulative saccades (D), the mean values across all players and trials are shown for the respective conditions.

3.2 Attribute Selection

Experimentally, a large set of eye and finger movement attributes was computed and analyzed. These measures were reduced to a smaller, non-redundant set of 14 target, eye, and finger attributes (see table 3.3). The target attributes were: speed, presentation duration, and feedback position or memory.
The true position of the target was shown to the player (feedback) at the end of each trial, that is, after the interception at the estimated position (figure 2.4 D). The visual feedback positions shown in all previous trials were averaged for each of the three target speeds. This averaged position was then compared to the interception position of the current trial, yielding a measure of the feedback information, or memory, players used to intercept.

Table 3.3: Target, eye, and finger attributes for the eye-hand coordination task. Highly correlated measures were reduced to the following set of 14 attributes.

Target:            speed; presentation duration; feedback (memory)
Eye, open loop:    eye latency; open loop peak velocity
Eye, closed loop:  velocity gain; 2D tracking error; tracking time; peak velocity
Eye, saccades:     cumulative amplitude; initial saccade amplitude
Finger:            peak velocity; latency; movement time

Quick eye movements, that is saccades, and pursuit eye movements were analyzed separately. Eye latency and peak velocity were chosen as pursuit initiation (open loop) parameters. Open loop mean velocity as well as mean and peak acceleration were highly correlated with the peak velocity and were thus not included in further modeling. Similarly, the tracking error, that is, the 2D error between target and eye position across the entire trial, was correlated with the 2D velocity error. Consequently, velocity gain (ratio of eye velocity to target speed), peak pursuit velocity, and tracking error were chosen as closed loop attributes.
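The correlation-based reduction just described can be sketched as a simple filter. The routine, threshold, and toy data below are my own illustration; the thesis reduced attributes by inspecting pairwise correlations, not by this exact procedure:

```python
import numpy as np

def drop_correlated(X, names, threshold=0.9):
    """Keep the first attribute of any highly correlated pair and drop the
    later one. Illustrative filter only; the 0.9 threshold is arbitrary."""
    r = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(r[j, k] < threshold for k in keep):
            keep.append(j)
    return [names[j] for j in keep]

rng = np.random.default_rng(1)
peak_vel = rng.normal(size=200)
mean_vel = peak_vel * 0.8 + rng.normal(scale=0.05, size=200)  # near-duplicate
gain = rng.normal(size=200)                                   # unrelated
X = np.column_stack([peak_vel, mean_vel, gain])
kept = drop_correlated(X, ["peak velocity", "mean velocity", "velocity gain"])
```

The near-duplicate "mean velocity" column is dropped while the unrelated "velocity gain" column survives, mirroring how redundant kinematic measures were removed here.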
Additionally, the smooth tracking time, i.e. the time for which smooth pursuit was maintained after target disappearance (until initial saccade onset), served as a pursuit quality measure. As discussed in chapter 1.1.1, there is a consistent relationship between saccade amplitude and saccadic peak velocity or mean duration, respectively. Thus, saccadic measures were reduced to cumulative saccadic eye movements (total number of saccades times the mean amplitude) and the size of the initial saccade made in each trial. Finger measures were reduced to the peak velocity, the hand movement latency, and the time from hand motion onset to interception (movement time).

Figure 3.3: Boxplots of prediction attributes (9 test runs) sorted by their importance score during random forest regression. The single most important attribute is the tracking error, indicated in red.

Next, the set of chosen attributes was analyzed with the Boruta R package (see section 2.3.1). All attributes were ranked according to their relevance for predicting the output variable using a random forest algorithm. Nine importance score runs were performed (confidence level of 95%), with importance scored between 0 (not relevant) and 1 (most relevant) in each run. Subsequently, the averaged importance score of each attribute across all runs was compared to the averaged importance score of a random shadow variable. All attributes were found to be significantly more important for predicting the output (p < 0.05) than the random shadow attribute (importance score of 0.02 ± 0.006).
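The shadow-variable logic behind this screening can be sketched as follows. Here a simple |correlation| score stands in for the random forest importance, and all names and data are illustrative, not the thesis data:

```python
import numpy as np

def shadow_screen(X, y, rng):
    """Minimal sketch of the Boruta idea: score each real attribute against
    a 'shadow' attribute (the same column with values shuffled, destroying
    any relation to y). The score here is |Pearson r| rather than a random
    forest importance, purely for illustration."""
    scores, shadow_scores = [], []
    for j in range(X.shape[1]):
        col = X[:, j]
        scores.append(abs(np.corrcoef(col, y)[0, 1]))
        shadow_scores.append(abs(np.corrcoef(rng.permutation(col), y)[0, 1]))
    return np.array(scores), np.array(shadow_scores)

rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=n)   # only column 0 matters
real, shadow = shadow_screen(X, y, rng)
keep = real > shadow.max()   # attributes beating the best shadow survive
```

An informative attribute scores far above anything a shuffled shadow can achieve by chance, which is the criterion the screening applies.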
Using the random forest algorithm, the most important predictor attribute was found to be the tracking error, followed by feedback position (memory) and target speed (compare figure 3.3). Accordingly, the tracking error was used as the predictor for the single-predictor regression models, and the entire subset of 14 predictor attributes was used for the statistical models with multiple predictors.

3.3 Finger Interception Accuracy

This section focuses on the dependent measure, that is, the 2D finger interception error. First, pursuit quality is related to the quality of the manual interception. Next, the temporal evolution of the relationship between tracking and interception accuracy over the time course of a trial is investigated more closely. Then, the role of feedback (memory) is explored. Lastly, the interception accuracy is broken down into a temporal and a spatial component, as shown in figure 3.9.

3.3.1 Manual Interception and Pursuit Quality

Interception accuracy improves with more accurate smooth pursuit eye movements, that is, a smaller tracking error (figure 3.4) and fewer saccades of smaller size (figure 3.5). The averaged 2D tracking error of every player is related to the finger interception error, separated for the three presentation durations (figure 3.4, panels A to C) and speeds. In each panel the three speeds are indicated by color (24.1°/s: blue; 29.3°/s: green; 34.2°/s: red), and a linear regression is fitted for each respective condition.

Figure 3.4: Relationship between tracking and interception error, averaged across every player and condition. Relationships are plotted for the respective presentation durations in panels A-C. Different target speeds are coded in blue (24.1°/s), green (29.3°/s), and red (34.2°/s). The quality of the linear regression fits is summarized in each panel.

The relationship is strongest for the fastest speed and a presentation duration of 200 ms (figure 3.4 B). The linear regressions show a significant relationship between tracking and interception error for both the medium and the fast speed level, for all three presentation durations. However, the relationship between eye tracking error and finger interception error is very poor for the slowest speed (R² < 0.2 for each presentation duration).

Likewise, figure 3.5 relates the cumulative saccades to the finger interception error. Generally, more saccadic eye movements yield a higher interception error.

Figure 3.5: Relationship between cumulative saccades and interception error, averaged across every player and condition. Relationships are plotted for the respective presentation durations in panels A-C. Different target speeds are coded in blue (24.1°/s), green (29.3°/s), and red (34.2°/s). The quality of the linear regression fits is summarized in each panel.

These results are comparable with those for the tracking error: the relationship is strongest for the highest speed and the 200 ms presentation duration (figure 3.5 B).
Again, the linear model shows a significant relationship between eye movements and finger interception accuracy for the medium (29.3°/s) and high (34.2°/s) speed levels, while the relationship is poor for the slowest speed (R² ≤ 0.1 for each presentation duration).

Accordingly, results for the two higher target speeds are consistent: smoother tracking supports a more accurate interception (lower error). For the slowest speed, however, the relationship is not as clear. To investigate this further, the eye velocity gain for the slowest target speed only is plotted in figure 3.6. Here, a higher gain denotes smoother eye movements: the closer the gain is to 1, the more accurately the eye's velocity follows the target speed. The linear regression between velocity gain and interception error is significant for the slowest target speed (F(1,94) = 16.63, p < 0.001). Thus, better pursuit eye movements, that is, closer tracking of the target, yield more accurate manual interceptions.

Figure 3.6: Mean velocity gain values for each player, averaged for the slowest speed and every presentation duration (indicated by symbols). With higher gain, the interception error decreases.

3.3.2 Temporal Evolution of Tracking Towards Interception

The temporal evolution of the relationship between the tracking and interception error for a presentation duration of 200 ms is shown in figure 3.7. Trials are aligned at the point of interception and then segmented into 150 ms intervals going backwards in time. The same results were found for the 100 and 300 ms presentation durations (not shown).
The plot shows that over the time course of each trial (from A to D) the relationship between tracking and interception accuracy increases. Shortly before the time of interception the relationship is strongest (R² ≥ 0.47 for all three speeds) and the variability between players is smallest (compare figure 3.7 D).

Figure 3.7: Temporal evolution of the relationship between tracking error and interception error for a presentation duration of 200 ms. Different target speeds are coded in blue (24.1°/s), green (29.3°/s), and red (34.2°/s). Trials are aligned at the point of interception (D) and then segmented into equal time intervals of 150 ms going backwards in time (D-A).

In the early phase of the trial, the relationship is not clear, especially for the slowest target speed. Here, the tracking error is still comparably low for a majority of players, which does not necessarily relate to how accurate the interception was at the end of the trial (figure 3.7 A-B). For these time intervals the tracking error of the slowest target has no significant effect on the interception error (p > 0.5).

3.3.3 The Role of Feedback or Memory

As discussed in section 3.2, the role of visual feedback at the end of each trial was captured in the form of a feedback, or memory, attribute: the manual interception position of each trial was compared to the average feedback position of the respective target speed shown in all previous trials.
Thus, the smaller the value of the memory attribute, the closer the manual interception was to the previously shown visual feedback. Figure 3.8 shows how this feedback (memory) attribute relates to the interception error. For the medium speed there is a very strong relationship: the closer the player intercepted to the visually shown feedback, the more accurate the interception. The relationship is similar but weaker for the fastest speed.

Figure 3.8: Relationship between memory and interception error, averaged across every player and condition. Relationships are plotted for the respective presentation durations in panels A-C. Different target speeds are coded in blue (24.1°/s), green (29.3°/s), and red (34.2°/s). The quality of the linear regression fits is summarized in each panel.

For the shortest presentation duration of 100 ms (figure 3.8 A) the linear model does not reach significance (p > 0.1), while for the longer presentation durations (figure 3.8 B and C) the memory attribute has a significant effect on the interception error (p < 0.05).

Interestingly, the relationship is negative for the slowest target speed; that is, the interception error decreases the further the interception is from the given feedback position. This is consistent across all three presentation durations and strongest for the 100 ms presentation duration (R² = 0.22). This could indicate that timing the interception was particularly difficult for the slowest target speed, since a low memory value only means that the interception was spatially close to the previously shown feedback.

3.3.4 Timing and Spatial Interception Error

The previous sections related the 2D interception error to different eye movement attributes.
However, the interception might be performed at exactly the right time but spatially off the trajectory, or it might lie on the simulated trajectory path but not be timed correctly (figure 3.9). Thus, the interception error is separated into a spatial and a timing component.

Figure 3.9: The main dependent measure is the 2D interception error (dark blue). The vertical distance to the simulated trajectory is the spatial error (purple). The distance along the trajectory describes the timing error (green).

Figure 3.10 shows the relation between the timing and the spatial error. As expected, players intercept too early, i.e. ahead of the actual target (positive timing error), for the slowest speed, and too late, i.e. behind the target (negative timing error), for the fastest target. Similarly, the spatial error is mainly positive (above the trajectory) for the slowest speed and negative (below the trajectory) for the fastest speed. These spatial errors are related to the different trajectory shapes for the three initial speeds (compare figure 2.3). Generally, the timing error is slightly larger than the spatial error (data points below the identity line), especially for the slowest target speed. As expected, both timing and spatial errors are greatest for the shortest presentation duration (solid circles in figure 3.10).

Figure 3.10: Interception error broken down into a timing and a spatial component for the three presentation durations (100 ms: circles, 200 ms: triangles, 300 ms: rectangles) and target speeds (24.1°/s: blue, 29.3°/s: green, 34.2°/s: red).

How much the target properties affect the timing and spatial components of the interception error becomes even more apparent when looking at the values averaged across all players per condition (figure 3.11). Both errors range widest for the shortest presentation duration. The timing error remains approximately the same for a target speed of 29.3°/s across all three presentation durations, while the spatial error slightly decreases to an underestimation (negative) for the 300 ms presentation duration. Timing and spatial errors for the fastest target are largest for the 100 ms presentation duration; with longer presentations the timing error approaches zero and the spatial error falls slightly below zero (underestimation). The slowest speed is most affected: both spatial and timing errors are highest compared to the other speeds in each condition. Spatially, the error for a target speed of 24.1°/s approaches zero (0.46 ± 0.21°) for a presentation duration of 300 ms; the timing error, however, remains more than a degree (1.53 ± 0.25°) ahead of the actual target.

Figure 3.11: Effect of target properties (presentation duration and speed) on the timing and spatial components of the interception error. Both measures are averaged across all players and trials and are shown for the respective conditions.

Similarly to section 3.2, the target, eye, and finger attributes can be ranked by importance with respect to the timing or the spatial component of the interception error. For both types of error, target speed is the most important attribute (compare figure 3.12).
Interestingly, for the timing error the finger latency is the second most important attribute (3.12 A), while for the spatial error the feedback, or memory, attribute ranks second (3.12 B). The eye attributes, cumulative saccades and tracking error, rank very similarly for both errors. Thus, finger attributes (latency and movement time) seem to be more important for timing the interception, while the visual feedback influences the spatial component of the interception.

Figure 3.12: Boxplots of the most important prediction attributes, sorted by their importance score during random forest regression, for the timing interception error (A) and the spatial interception error (B).

3.4 Statistical Models

In a first step, a linear and a polynomial regression are fitted to the attribute of highest importance (compare section 3.2). Then, the performance of three different statistical models fitted to the whole set of attributes is compared. Models are fitted to the training data set D (N = 7896 observations), collected from the UBC 2013/2014 varsity baseball team and parsed as described in section 2.1. To test the fitted models, a test data set D̃ (N = 2572 observations) was used, consisting of data collected from 10 new players who joined the team after the original data collection.

3.4.1 Single Predictor Regression

As the single predictor attribute, the tracking error was chosen in accordance with the attribute selection above (compare section 3.2). Linear, quadratic, and cubic regressions were fitted to the training data set D and then tested on the test data set D̃. Table 3.4 summarizes the results.
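This train-then-test procedure for polynomial baselines can be sketched with an ordinary least-squares polynomial fit (toy data; numpy's polyfit stands in for the R fitting used in the thesis):

```python
import numpy as np

def fit_and_rmse(x_train, y_train, x_test, y_test, degree):
    """Fit a polynomial of the given degree to the training data and return
    the RMSE (equation 2.23) on the held-out test data. Illustrative only."""
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    return np.sqrt(np.mean((y_test - pred) ** 2))

rng = np.random.default_rng(3)
x = rng.uniform(0, 4, size=300)                       # toy "tracking error"
y = 0.5 + 0.8 * x + rng.normal(scale=0.3, size=300)   # toy "interception error"
x_tr, y_tr = x[:200], y[:200]
x_te, y_te = x[200:], y[200:]
errors = {d: fit_and_rmse(x_tr, y_tr, x_te, y_te, d) for d in (1, 2, 3)}
```

Because the toy relationship is essentially linear, the test RMSE barely changes from degree 1 to 3, which mirrors the pattern reported in table 3.4.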
Here, the root mean square error(RMSE) is a measure of how accurate the model predicted the test data, the Pearsoncoefficient and the R2 value indicate how well the model fits the data set and thep-value indicates if the tracking error has a significant effect on the interceptionerror.Table 3.4: Different regression models for single predictor model.Polynomial RMSE [◦] Pearson c R2 p-Value1 1.510 0.460 0.236 < 0.0012 1.507 0.467 0.244 < 0.0013 1.507 0.467 0.244 < 0.001The accuracy of the prediction does not increase significantly with a higherpolynomial dimension. The coefficients of the cubic regression were fitted toyi = 2.42+67.89 xi +11.89 x2i −0.57 x3i . (3.1)The third coefficient in equation 3.1 β3 = −0.57 was the only coefficient that didnot reach significance (p > 0.5). Thus, further polynomial regressions of higherdegree were neglected and the quadratic regression was chosen to hold as a baselinereference for the following statistical modeling approaches.453.4.2 Multiple Linear Regression ModelAll attributes were fitted to the output variable (interception error) by meansof multiple linear regression. Coefficients estimation and significance levels aresummarized in table 3.5.Table 3.5: Fitted coefficients for multiple linear regression. 
The p-values in-dicate the significance of different attributes.Attribute Estimate Standard error p-Valueβ0 (intercept) 0.600 0.228 < 0.01Tracking error 0.889 0.023 < 0.001Feedback (memory) 0.080 0.010 < 0.001Target speed −0.041 0.004 < 0.001Finger movement time −2.1 ·10−4 2.2 ·10−4 0.35Cumulative saccades 0.010 0.005 0.06Finger latency −0.001 1.9 ·10−4 < 0.001Finger peak velocity 3.624 2.262 0.11Eye peak velocity 0.007 0.002 < 0.001Target presentation duration −2.8 ·10−4 2.6 ·10−4 0.28Initial saccade amplitude −0.043 0.007 < 0.001Eye velocity gain −0.127 0.068 0.06Eye latency −0.002 2.5 ·10−4 < 0.001Open loop peak velocity 0.005 0.002 < 0.01Tracking time 1 ·10−3 2.6 ·10−4 < 0.01Target presentation duration, finger movement time, and finger peak velocitywere the only attributes that did not reach significance (p-value > 0.1) for the mul-tiple linear regression model. The error between model predictions and actual testvalues came to RMSE = 1.488 ◦, the Pearson coefficient was c = 0.488 and themodel fit R2 = 0.268. The model accuracy was consequently improved comparedto the single attribute prediction (compare table 3.8). Removing the non-significant46variables from the data set did not improve the model accuracy (RMSE = 1.493 ◦,c = 0.482, R2 = 0.264).3.4.3 Regression Model TreeRegression model trees were built using the CubistR package. Figure 3.13shows the result for a regression tree built without boosting. The model consistsof 23 rules, that is, 23 different linear regressions have been fitted at all terminalleafs. Predictions are made following the decision rules at each split. This modelimproved the prediction accuracy compared to the multiple linear regression by0.1 ◦, or ≈ 7% (RMSE= 1.390 ◦, c = 0.582).Tracking error Tracking error <= 2.63 > 2.63 Feedback (memory) Target speed <= 2.09 > 2.09 <= 3.69 > 3.69 <= 26.7 > 26.7 Target speed Feedback (memory) Feedback (memory) Cum. 
Figure 3.13: Regression model tree without boosting. Linear regressions (LR) have been fitted at the terminal leaves, resulting in 23 rules. [Figure: decision tree with splits on tracking error, feedback (memory), target speed, cumulative saccades, initial saccade amplitude, finger movement time, finger peak velocity, eye latency, open loop peak velocity, and finger latency, terminating in linear regressions LR 1–LR 23.]

To improve the model performance, the regression tree was boosted, that is, after building the original tree, several subsequent trees were grown, each one learning from the model fits of the previous tree. To improve prediction accuracy, an instance-based correction was added, whereby predictions are adjusted by taking nearby instances in the training set into account. Figure 3.14 shows how the model accuracy improves for different numbers of training instances and committees. The model accuracy improves with an increasing number of boosting iterations (committees). The model prediction performs poorest for 1 instance and best for 9 instances. Accordingly, 100 committees and 9 instances were chosen for the Cubist model tree.
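Cubist's boosting and instance-based correction can be illustrated with a toy numpy sketch (not the thesis code; the data, the linear stand-in for a rule model, and all function names are invented for illustration):

```python
import numpy as np

# Toy data standing in for one predictor (2D tracking error -> interception error).
rng = np.random.default_rng(0)
X = rng.uniform(0, 4, 300)
y = 2.4 + 0.9 * X + rng.normal(0, 0.8, 300)

def fit_member(x, t):
    """One 'committee' member; a least-squares line stands in for a rule model."""
    slope, intercept = np.polyfit(x, t, 1)
    return intercept, slope

# Committees: each member trains on targets shifted against the previous
# member's error, and the committee prediction is the average.
members, target = [], y.copy()
for _ in range(5):
    a, b = fit_member(X, target)
    members.append((a, b))
    target = 2.0 * y - (a + b * X)          # over-correct the current residual

def committee_predict(x):
    return np.mean([a + b * np.asarray(x) for a, b in members], axis=0)

# Instance-based correction: shift each prediction by the mean residual of the
# k nearest training instances (k = 9 in the model chosen above).
def predict_adjusted(x_new, k=9):
    resid = y - committee_predict(X)
    out = []
    for xn in np.atleast_1d(x_new):
        idx = np.argsort(np.abs(X - xn))[:k]
        out.append(committee_predict(xn) + resid[idx].mean())
    return np.array(out)
```

With a single linear member the committee is nearly redundant; in Cubist the members are rule-based model trees, so the averaging and the neighbor correction buy considerably more there.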
With these parameters the model accuracy was increased to RMSE = 1.304° and the correlation to c = 0.639.

Figure 3.14: Evaluation of boosting and prediction adjustment parameters. With increasing number of committees the prediction error decreases. An instance-based correction with 9 instances yields the best model fit. [Figure: cross-validated RMSE (1.2–1.4°) against number of committees (1–100), one curve each for 0, 1, 5, and 9 instances.]

Table 3.6 summarizes the attribute usage of all linear models at the terminal leaves. The sign in the third column indicates how the predicted interception error depends on the different attributes. The ⊕ sign indicates an increasing interception error with increasing attribute values, while ⊖ indicates an increasing interception error with decreasing attribute values. Finger latency and movement time show a mixed effect: for very high finger latencies and movement times the dependency switches, and an increasing movement time yields a higher interception error. The indicated feedback position generally shows a positive relationship to the interception error, that is, an interception further away from the learned feedback position yields a higher interception error. However, for the slowest target speed, the relationship changes. This could be due to a greater timing error for the slowest target speed. For saccadic eye movements, the interception error increases with more saccades of higher amplitude. For trials in which the tracking error is very high (> 3.1°), correctional saccades of higher amplitude decrease the interception error.

Table 3.6: Attribute usage of regression models at terminal leaves of the Cubist tree with 100 committees and a prediction adjustment of 9 instances. The interception error either increases with increasing (⊕) or decreasing (⊖) attribute values.
Four variables show mixed effects.

Attribute                      Usage   Sign
Tracking error                 86%     ⊕
Finger latency                 84%     ⊖ (⊕)
Finger movement time           80%     ⊖ (⊕)
Feedback (memory)              74%     ⊕ (⊖)
Cumulative saccades            72%     ⊕ (⊖)
Target speed                   71%     ⊖
Initial saccade amplitude      62%     ⊖
Eye latency                    56%     ⊕
Tracking time                  51%     ⊕
Target presentation duration   38%     ⊖
Finger peak velocity           37%     ⊖
Eye peak velocity              36%     ⊕
Velocity gain                  15%     ⊖
Open loop peak velocity        8%      ⊕

3.4.4 Neural Network

The brnn R package minimizes the objective function F = βE_D + αE_W, with E_D denoting the error sum of squares of the actual output values compared to the predicted values in the training set, and E_W the sum of squares of the network's weights and biases (see section 2.3.6 for more details). Table 3.7 compares the model performance and parameters for different numbers of hidden layer units (neurons). The neural network with the same number of neurons (14) as attributes listed in table 3.3 has the lowest RMSE (1.271°) and was thus chosen for further analysis.

Table 3.7: Feed-forward neural network using Bayesian regularization. Results for different numbers of hidden units (neurons).

# Neurons   RMSE [°]   Pearson c   α       β
1           1.480      0.499       3.995   34.28
6           1.304      0.655       0.864   54.20
14          1.271      0.674       0.832   61.34
20          1.348      0.636       1.034   64.19

The structure of the neural net is shown in figure 3.15. On the left hand side, 14 input attributes feed into the hidden layer, containing 14 neurons. Additionally, a bias term is added to each hidden unit. While black lines indicate positive (+) weights, grey lines indicate negative (−) weights. The magnitude of the weights is coded by the thickness of the line. Seven attributes have 'thick' connections to the hidden units. Target speed and feedback (memory) have negative weights to neuron 9. Eye velocity gain, cumulative saccades, and finger latency have positive weights to neuron 14. The tracking error has positive weights to neuron 7 (which is also strongly biased) and neuron 11.
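The penalized objective F = βE_D + αE_W can be sketched with a toy one-hidden-layer network trained by plain gradient descent (illustrative only: α and β are held fixed here, whereas Bayesian regularization re-estimates them during training; the penalty below covers weights only, not biases; and all data and layer sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 3))                  # toy input attributes
y = np.tanh(X @ np.array([0.8, -0.5, 0.3]))       # toy output variable

W1, b1 = rng.normal(0, 0.5, (3, 5)), np.zeros(5)  # one hidden layer, 5 neurons
w2, b2 = rng.normal(0, 0.5, 5), 0.0
alpha, beta, lr, n = 0.01, 1.0, 0.05, len(X)

def mse():
    H = np.tanh(X @ W1 + b1)
    return np.mean((H @ w2 + b2 - y) ** 2)

initial_mse = mse()
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    err = H @ w2 + b2 - y
    # Gradients of F = beta * mean(err^2) + alpha * sum(weights^2)
    gw2 = beta * 2 * H.T @ err / n + 2 * alpha * w2
    gb2 = beta * 2 * err.mean()
    delta = np.outer(err, w2) * (1 - H ** 2)      # backprop through tanh
    gW1 = beta * 2 * X.T @ delta / n + 2 * alpha * W1
    gb1 = beta * 2 * delta.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2
```

The weight-decay term αE_W is what shrinks uninformative connections toward zero, which is why line thickness in a trained network of this kind is a rough proxy for attribute relevance.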
The cumulative saccades attribute has a strong negative connection to neuron 13. Finally, finger latency and movement time have positive weights to neuron 13. Neurons 1 and 14 have a strong positive weight to the output, while neurons 7 and 13 have a negative weight to the output unit. This relationship, however, is not linear.

Figure 3.15: Feed-forward neural network using Bayesian regularization for 14 input attributes I1–I14 and 14 hidden units (neurons) H1–H14. The weights are color-coded by sign (black +, grey −) and the magnitude of the connections is coded by thickness. A bias term feeds into each neuron. The output O1 is connected to every neuron via a single weight. Input attributes indicated in bold are the attributes with the connections of highest magnitude.

3.4.5 Model Comparison

Table 3.8 summarizes and compares the different statistical training models applied to the data set D. The neural net performs best in predicting the output variable on a separate test set D̃, followed by the model tree.

Table 3.8: Evaluation of the different statistical models applied.

Statistical model            # Predictors   RMSE [°]   Pearson c
Quadratic regression         1              1.507      0.467
Multiple linear regression   14             1.488      0.488
Cubist model tree            14             1.304      0.639
brnn neural net              14             1.271      0.674

3.5 Interception Strategy

All data were analyzed with respect to individual player performance. Different aspects, such as a player's position, years spent on the team, visual acuity, or contrast sensitivity, were related to the interception performance. Additionally, interception accuracy was compared between blocks and hands. Although this analysis showed a few interesting trends, no clear conclusions have been drawn so far.
Discussing these observational results in detail is beyond the scope of this thesis, and they will thus not be reported further.

In the eye-hand coordination task, a positive relationship between the time of invisible flight (time from stimulus disappearance to point of interception) and the interception error was generally observed (figure 3.16). The linear regression model between these two measures is significant (F(1,94), p < 0.001). Thus, the strategy of intercepting the ball as soon as it enters the designated hit zone may be beneficial for this particular eye-hand coordination task. Based on this observation, the question arises whether players use different strategies for when to intercept.

Figure 3.16: Relationship between the time of invisible flight (from time of disappearance to time of interception) and finger interception error (R² = 0.237, p < 0.001). Data shown are for all presentation durations, while the target speed is coded by color (24.1°/s: blue, 29.3°/s: green, 34.2°/s: red).

As described in section 2.3.2, the favored time of interception was determined for each player by means of a Hazard analysis. Based on these peak values, all players were separated into two groups of early and late interceptors (figure 3.17 A). The division was done by a k-means clustering analysis with two clusters (early and late), with the hazard peak levels as the dependent variable. The cluster centers were at peak interception times of 725 ms (early) and 940 ms (late) after stimulus disappearance. The averaged hazard level for each group is plotted in figure 3.17 B. The earliest interceptions were made approximately 275 ms after disappearance. The ball was invisible for at least 250 ms (fastest speed and longest presentation duration) before it entered the designated hit zone.
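The grouping step amounts to a one-dimensional k-means on the per-player hazard peak times; a minimal sketch with simulated peak times (not the thesis data):

```python
import numpy as np

# Simulated per-player peak interception times [ms after stimulus disappearance];
# the group sizes (17 and 15) match the thesis, the values are illustrative.
rng = np.random.default_rng(1)
peaks = np.concatenate([rng.normal(725, 40, 17), rng.normal(940, 50, 15)])

def kmeans_1d(x, k=2, iters=50):
    """Plain k-means on scalars: assign to the nearest centre, recompute centres."""
    centres = np.linspace(x.min(), x.max(), k)   # spread initial centres
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        centres = np.array([x[labels == j].mean() for j in range(k)])
    return centres, labels

centres, labels = kmeans_1d(peaks)
# centres end up near the two generating means; labels splits players
# into the early and late interceptor groups
```

Initializing the centres at the data minimum and maximum keeps both clusters non-empty at every iteration, which is the main failure mode of naive k-means on small 1D samples.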
The Hazard curve of the early interceptors has a sharp peak at 750 ms, while the broader Hazard curve of the late interceptors peaks approximately 200 ms later.

Figure 3.17: Hazard level analysis. All players are divided into a group of early interceptors (N = 17) and late interceptors (N = 15) based on a k-means clustering analysis (A). Within each group the hazard levels are averaged (B).

The eye and finger attributes of these two groups can now be analyzed using one of the statistical models described in section 3.4. In particular, the Cubist model tree was run on the data sets of both groups separately. Table 3.9 summarizes the results.

Table 3.9: Cubist model tree results compared between early and late interceptors. The interception error either increases with increasing (⊕) or decreasing (⊖) attribute values; some variables show mixed effects (signs are described in the text).

Early interceptors             Late interceptors
Attribute             Usage    Attribute                Usage
Tracking error        88%      Tracking error           87%
Feedback (memory)     85%      Initial sac. amplitude   63%
Finger latency        83%      Target speed             59%
Target speed          80%      Eye latency              57%
Finger move time      78%      Eye peak velocity        53%
Eye latency           67%      Feedback (memory)        52%
Cumulative saccades   60%      Tracking time            49%

The tracking error scales positively with the finger interception error and is still the attribute used most for predicting the interception error in both groups. For early interceptors, the feedback component is the attribute that the model uses second most. It mostly scales positively with the finger interception error (interceptions further away from the feedback position yield a higher interception error), except for the slowest target speed, where the relationship is inverted. Third, the finger latency is important for the model prediction of early interceptors. For the latency, the relationship with the interception error is mixed, that is, for very short latencies the relationship is negative, while for latencies higher than 450 ms the relationship is positive. For both early and late interceptors the relationship of the target speed is mixed, which is a consequence of the speed range effect discussed earlier. Generally, the target speed is mainly used for defining the tree's splitting rules and is only part of some of the linear regression models at the terminal leaves. For late interceptors, a larger initial saccade amplitude leads to a higher finger interception error and is the attribute which the model uses second most. This positive relationship is in contrast to the results of the model on the entire data set (compare table 3.6), where a larger initial saccade predicted a smaller interception error. Interestingly, eye latency scales negatively for late interceptors, that is, a later eye movement onset is beneficial for late interceptors. However, both groups pursued, on average, anticipatorily; thus, a later eye movement onset brings the eye movement closer to the actual target onset. As opposed to the early interceptors, the memory component of the late interceptors scales negatively for the highest target speed (not the slowest) and otherwise positively. Lastly, the tracking time scales positively.
This could indicate that the eye lags behind for longer tracking periods, resulting in larger catch-up saccades.

Another interesting comparison between the two groups is to look at the eye velocity (figure 3.18 A), initial finger displacement (figure 3.18 B), and finger velocity (figure 3.18 C), averaged across early (N = 17) and late interceptors (N = 15) for a single exemplary condition (target speed of 29.3°/s, 100 ms presentation duration), respectively.

Figure 3.18: Early interceptors (N = 17) are plotted in dark blue and late interceptors (N = 15) in light blue. Averaged eye velocity (A) of each group across trials of medium speed (29.3°/s) and longest presentation duration (300 ms). True target velocity is indicated by the dashed grey line. Group comparison of initial finger displacement (B) and mean finger velocity (C).

While the eye velocity of the late interceptors is on average greater, the finger velocity is higher for the early interceptors. The initial finger displacement of the early interceptors follows a more direct path towards the screen, while late interceptors move earlier to the side and arrive at the screen further outside and at a later point in time.

Lastly, the different types of finger interception error were evaluated for both groups. Figure 3.19 plots the mean interception values for each condition and the two respective groups. The relative timing component of the interception error is plotted on the x-axis, while the spatial component is plotted on the y-axis. Overall, timing errors were larger than spatial errors for both groups. Early interceptors (dark blue filling) performed better for the fastest speed (red bordered symbols), but poorer for the slowest target speed (blue bordered symbols).
Late interceptors (cyan filling) had on average a smaller timing error compared to the early interceptors.

Figure 3.19: Average interception errors of early (dark blue filling) and late (cyan filling) interceptors, broken down into relative timing (x) and spatial (y) components. Averages for each presentation duration (symbols: 100, 200, 300 ms) and target speed (colors: 24.1°/s, 29.3°/s, 34.2°/s) as previously coded. Standard errors of the mean are included as error bars but too small to be visible.

In conclusion, both groups intercept best for the medium target speed. Early interceptors perform better for the fastest speed, and their interception error increases consistently for longer presentation durations. Late interceptors outperform the early group for the slow target speed, and their timing remains better for almost all conditions.

4 Discussion

In this section, experimental results will be discussed and future research possibilities will be outlined. Section 4.1 will discuss the effects of eye movements on the accuracy of manual interceptions. Next, section 4.2 will focus on different interception strategies, and the advantages and limitations of the statistical models applied will be discussed in section 4.3. Furthermore, practical implications will be presented in section 4.4 before a final conclusion is drawn in section 4.5.

4.1 Manual Interception Improves With Pursuit Quality

Overall, an improvement in interception accuracy is found for higher quality pursuit eye movements. This is in line with what we expected from the literature, where it has been shown that tracking a moving object with smooth pursuit eye movements enhances the observer's ability to predict the target's trajectory in time (Bennett et al., 2010) and space (Spering & Montagnini, 2011). Furthermore, intercepting a moving object critically relies on motion prediction (e.g. Soechting & Flanders, 2008).
Thus, our findings are consistent with these previous studies and relate the quality of both eye and hand movements in a novel interception task.

To increase the degree of difficulty and to avoid memorization of certain entrance points of the target into the hit zone, the target speed and presentation duration were varied randomly. These target properties significantly affected the interception error. Interestingly, the slowest target speed yielded, on average, the highest interception errors. The relationship between tracking error or cumulative saccades, respectively, and finger interception error was weakest for the slowest target speed. Moreover, relating the memory of the visual feedback positions to the finger interception error showed a negative relationship for this speed, that is, the interception error was greater for trials that were intercepted closer to the feedback position. One possible explanation could be the fact that the memory attribute purely refers to the distance to the feedback position indicated in previous trials and does not take the timing of the interception into account. In line with this, the relationship between memory and the interception error is strongest for the medium speed. However, if this were the only effect, the relationship should be similar for the slowest and the highest speed, respectively, since we are comparing the memory feedback position to the 2D interception error (not the timing error). Figure 2.3 illustrates another possible reason for the discrepancy: the apex of the simulated fly ball at this speed is reached before the target actually enters the hit zone. Since the target disappears before the apex is reached, it becomes very difficult to extrapolate the directional change of the trajectory for the slowest target speed.
Mrotek & Soechting (2007a) showed that the direction of smooth pursuit follows the predicted direction of the target when the trajectory is occluded. In accordance with this, players might have predicted that the target continued rising until it entered the hit zone, as is true for the fastest and medium target speeds.

Manual interception as well as the 2D eye tracking error both show a speed range effect (Poulton, 1975), that is, the mean tracking and interception errors are lowest for the medium of the three speeds. This is to be expected for three target speeds and could be avoided by, for example, changing the initial launch angles instead of the target speed. This way, variability in trajectory shape would still be ensured, while speed effects would be minimized. Furthermore, for the shortest presentation duration of 100 ms, the eye movement and manual interception quality is very poor. When the target is only visible for 100 ms, the smooth pursuit system is still in the open-loop phase, and hence the target has disappeared before visual feedback closes the loop to correct for eye position error. Thus, this presentation duration might be too short to yield an accurate motion prediction, and effects are consequently not as strong.

The separation into timing and spatial error yielded interesting results. The spatial error strongly depends on the memory of visual feedback given in all previous trials. In a study addressing movement planning, Brouwer & Knill (2009) found that their subjects integrated remembered target positions from previous feedback with peripheral visual information. In accordance with this, the spatial position of the interception might be influenced by a movement plan relying on the feedback given in previous trials. Moreover, the timing of the interception depends more on movement initiation (finger latency) and is then guided by eye movement measures and feedback position.
In general, timing the interception seemed to be slightly more challenging than hitting the trajectory path spatially (compare figure 3.10). This could be due to the previously discussed speed range effect, leading to early interceptions for slow targets and late interceptions for fast targets, respectively.

Finally, it should be noted that the separation into spatial and timing error was done by approximating the vertical distance to the simulated trajectory (spatial error) and then measuring the distance from the spatial trajectory-intersection point to the true target feedback point (timing error). Instead, the spatial component could, for example, be chosen as the shortest distance to the simulated trajectory, or the timing error could be calculated based on the time that would have passed until the trajectory had reached the spatial trajectory-intersection point. These measures could be explored and compared in future analyses.

4.2 Interception Strategy

In general, it was shown that early interceptions were on average more accurate (compare figure 3.16). By intercepting the invisible ball as soon as it enters the hit zone, the player spatially minimizes the horizontal error. It then becomes a task of estimating when the ball will reach the hit zone and where along a vertical line it will enter. Early interceptors, who might have followed this strategy, made comparably higher errors for the slowest target speed. Again, comparing figure 2.3, we see that the entrance points of the three trajectories are not evenly spaced; rather, the slowest speed enters at a much lower vertical position. Consequently, even though this strategy overall led to a lower interception error in this task, it might not mean that early interceptors predicted the target motion more accurately over time. That is, even though late interceptions are more difficult in terms of spatial uncertainty, they might be more closely related to a baseball player's performance out on the field.
Here, a batter cannot swing as soon as the ball crosses a certain point but has to time the bat perfectly. Figure 3.18 shows that the late interception group on average reaches a higher eye velocity, indicating that they might track the ball better and take longer to prepare their interception, relying more on their eye movements than on a remembered feedback position.

To explore this further, a future approach could be to change the task slightly: instead of having a large hit zone on one side of the screen, where the ball enters horizontally, a smaller strike zone could be implemented. In particular, the ball would again disappear some time after launch and would then have to be 'caught' (intercepted) once it vertically enters a smaller box. Additionally, catch trials, in which the ball misses the strike zone and the player consequently should not intercept, could be introduced. This way, the task would rely even more on precise motion prediction, and the demands on visuomotor coordination would be closely related to those of swinging an actual baseball bat.

4.3 Statistical Models

Three different statistical models were compared for multiple attribute predictions. The multiple linear regression is the easiest in terms of model computation and interpretation. Statistically, it provides information about attributes that have a significant effect on the dependent measure. Mathematically, the coefficients contain information about both the direction and the strength of the relationship between each attribute and the dependent measure. For example, a lower tracking error yields a lower interception error (positive relationship), while a higher eye velocity gain yields a lower interception error (negative relationship). A disadvantage of the multiple regression is that the fit is done across all samples, e.g. not taking different target properties into account. Accordingly, different target speeds or presentation durations might yield different attribute coefficients and relationships.
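This sign reading can be illustrated with a toy regression on synthetic data (the attribute names and the generating coefficients 0.9 and −0.5 are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
tracking_error = rng.uniform(0, 4, n)        # [deg], synthetic
velocity_gain = rng.uniform(0.5, 1.1, n)     # eye/target velocity, synthetic
interception = (1.0 + 0.9 * tracking_error
                - 0.5 * velocity_gain + rng.normal(0, 0.3, n))

# Least-squares fit of interception ~ intercept + tracking_error + velocity_gain
A = np.column_stack([np.ones(n), tracking_error, velocity_gain])
beta, *_ = np.linalg.lstsq(A, interception, rcond=None)
# beta[1] > 0: interception error grows with tracking error (positive relation)
# beta[2] < 0: it shrinks with higher velocity gain (negative relation)
```

The fitted coefficients recover the generating signs, which is exactly the kind of direction-of-effect reading the paragraph above describes for table 3.5.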
Lastly, compared to the other statistical models applied, the multiple linear regression performs poorest for a prediction of the finger interception error on a test set (see table 3.8 for reference).

Conversely, the feed-forward neural network predicts the interception error most accurately on a new test data set. However, the neural net structure is highly complex and the hidden layer works as a black box. The mapping from the hidden units to the output attribute is nonlinear, and the weights are thus difficult to interpret. For interpreting the functional significance of the different input attributes, this model might thus not be optimal.

Finally, the regression model tree predicts the interception error of a test data set more accurately than the multiple regression analysis and is easier to interpret than the neural network. Here, the output of the model is a set of fixed rules resulting in several different linear regression models. The summary of the model yields an overview of attribute usage for building the splitting rules and the linear regressions, indicating the attribute importance with respect to the dependent measure. An advantage compared to the multiple linear regression is that linear models are fitted to smaller sub-samples of the entire data set, such as the highest target speed only, or trials with very high tracking error. In conclusion, the regression model tree was best suited for exploratory analysis of, e.g., attribute importance for different subject groups.

All of the models presented in this thesis make predictions based on results averaged across each trial. This way, the richness of the data set within one trial might be lost. For example, the tracking error traces of all trials could be resampled to the same length. Next, these samples could be parsed by means of principal component analysis (PCA) or independent component analysis (ICA) to derive new attributes, that is, the principal components.
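The resample-then-PCA idea could be sketched as follows (random-walk traces stand in for per-trial tracking error; all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# Trials of unequal duration, standing in for per-trial tracking error traces.
trials = [rng.normal(0, 1, int(m)).cumsum() for m in rng.integers(80, 121, 50)]

def resample(trace, n=100):
    """Linearly interpolate a trace onto n equally spaced time points."""
    return np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(trace)), trace)

X = np.vstack([resample(tr) for tr in trials])   # trials x time points
Xc = X - X.mean(axis=0)                          # centre each time point

# PCA via SVD: rows of Vt are components over time; scores are new attributes.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)                  # variance explained per component
scores = Xc @ Vt.T                               # per-trial component loadings
```

The leading score columns could then replace per-trial averages as model attributes, and the time course of the leading components points at when traces differ most between trials.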
These components would also indicate at which time points the eye tracking error shows high variability between subjects. Another approach could be to consider a Bayesian framework for modelling the manual interception based on eye position data within each trial. In particular, a Kalman filter could be fitted to each eye position trace. It has already been shown in the literature that a Kalman filter can be used to model visually guided and predictive smooth pursuit eye movements (Orban de Xivry et al., 2013). This model could then be updated by the feedback position given at the end of each trial and be incorporated into a larger statistical model across all trials. Bayesian models have been shown to be successful representations of, e.g., multisensory information integration (Beierholm et al., 2008) and sensorimotor learning (Körding & Wolpert, 2004). This approach will be explored in future work.

4.4 Practical Implications

Vision training in sports is becoming a greater part of many professional programs every day (see Abernethy & Wood, 2001, for a review). Clark et al. (2012) reported that the batting average and slugging percentage of the University of Cincinnati baseball team increased significantly between two seasons after systematically training the players' vision. Similarly, Deveau et al. (2014) report that players of the University of California Riverside baseball team showed significant improvements in visual acuity and visual contrast sensitivity, as well as a lower number of strikeouts and a higher number of runs created, after taking part in a specific perceptual learning program. Many other studies and books report anecdotal evidence of improved athletic performance after vision or perceptual-cognitive training (e.g. Peters, 2012; Faubert & Sidebottom, 2012). However, these studies often lack a systematic scientific approach and do not consider, for example, placebo effects or matched control groups.
Moreover, eye movements are often considered in terms of gaze strategies, that is, fixational eye movements rather than smooth pursuit. This study gives evidence that smooth tracking is beneficial for manual interceptions. Accordingly, these types of eye movements should be considered when designing a comprehensive, research-based vision training.

The results of this study focused on averaged eye movement and interception behavior. They could also be broken down into individual player performance and strategy. This way, the strengths and weaknesses of each player could be identified, and individual consultation could be given to improve each player's performance.

4.5 Conclusion

In the literature, several studies have reported a strong connection between smooth pursuit eye movements and manual interception (e.g. Mrotek, 2013; Mrotek & Soechting, 2007b; Koken & Erkelens, 1992). This study shows that observers not only benefit from smooth pursuit eye movements in a manual interception task, but also that the interception accuracy scales with the quality of the eye movements. Additionally, two different interception strategies were identified. Earlier interceptions were biased towards a remembered visual feedback position and guided by fast hand movements as well as accurate tracking eye movements. Later interceptions relied overall more on eye movement accuracy, that is, low tracking error and initial saccade amplitude, and precise eye latency and eye peak velocity.

Bibliography

ABERNETHY, B. AND WOOD, J. M. (2001). Do generalized visual training programmes for sport really work? An experimental investigation. Journal of Sports Sciences, 19(3), 203–22.

ADAIR, R. K. (2002). The Physics of Baseball. New York: HarperCollins, 3rd edition.

BAHILL, A. T. AND BALDWIN, D. AND VENKATESWARAN, J. (2005). Predicting a Baseball's Path. American Scientist, 93(3), 218.

BAHILL, A. T. AND CLARK, M. R. AND STARK, L. (1975). The main sequence, a tool for studying human eye movements.
Mathematical Biosciences, 24(3-4),191–204. → pages 2BAHILL, A. T. AND LARITZ, T. (1984). Why Can’t Batters Keep Their Eyes onthe Ball? American Scientist, (May - June), 249–253. → pages 9BAHILL, A. T. AND MCDONALD, J. D. (1983). Smooth pursuit eye movementsin response to predictable target motions. Vision research, 23(12), 1573–83. →pages 4BARNES, G. R. (2008). Cognitive processes involved in smooth pursuit eyemovements. Brain and cognition, 68(3), 309–26. → pages 5BARNES, G. R. AND ASSELMAN, P. T. (1991). The mechanism of prediction inhuman smooth pursuit eye movements. The Journal of physiology, 439,439–61. → pages 4BECKER, W. AND FUCHS, A. F. (1969). Further properties of the humansaccadic system: eye movements and correction saccades with and withoutvisual fixation points. Vision research, 9(10), 1247–1258. → pages 2BECKER, W. AND FUCHS, A. F. (1985). Prediction in the oculomotor system:smooth pursuit during transient disappearance of a visual target. Experimentalbrain research, (57), 562–575. → pages 566BEIERHOLM, U. R. AND KORDING, K. P. AND SHAMS, L. AND MA, W. J.(2008). Comparing Bayesian models for multisensory cue combination withoutmandatory integration. Advances in Neural Information Processing Systems 20,20, 1–8. → pages 63BENNETT, S. J. AND BAURES, R. AND HECHT, H. AND BENGUIGUI, N.(2010). Eye movements influence estimation of time-to-contact in predictionmotion. Experimental brain research, 206(4), 399–407. → pages 7, 59BRANCAZIO, P. J. (1985). Looking into Chapmans homer: The physics ofjudging a fly ball. American Journal of Physics, 53(9), 849. → pages 16BRENNER, E. AND SMEETS, J. B. J. (2010). Intercepting moving objects: doeye movements matter? In R. Nijhawan & B. Khurana (Eds.), Space and Timein Perception and Action (pp. 109–120). Cambridge: Cambridge UniversityPress. → pages 11BRENNER, E. AND SMEETS, J. B. J. (2011). Continuous visual control ofinterception. Human movement science, 30(3), 475–94. → pages 10, 11BRENNER, E. AND SMEETS, J. B. J. 
AND DE LUSSANET, M. H. (1998). Hitting moving targets. Continuous control of the acceleration of the hand on the basis of the target's velocity. Experimental Brain Research, 122(4), 467–474. → pages 8
BRIDGEMAN, B. (1995). A review of the role of efference copy in sensory and oculomotor control systems. Annals of Biomedical Engineering, 23(4), 409–422. → pages 2
BROUWER, A.-M. AND BRENNER, E. AND SMEETS, J. B. (2002). Hitting moving objects: is target speed used in guiding the hand? Experimental Brain Research, 143(2), 198–211. → pages 8, 10
BROUWER, A.-M. AND BRENNER, E. AND SMEETS, J. B. J. (2003). When is behavioral data evidence for a control theory? Tau-coupling revisited. Motor Control, 7(2), 103–110. → pages 9
BROUWER, A.-M. AND KNILL, D. C. (2007). The role of memory in visually guided reaching. Journal of Vision, 7, 6.1–12. → pages 10
BROUWER, A.-M. AND KNILL, D. C. (2009). Humans use visual and remembered information about object location to plan pointing movements. Journal of Vision, 9, 24.1–19. → pages 60
BROUWER, A. M. AND SMEETS, J. B. J. AND BRENNER, E. (2005). Hitting moving targets: Effects of target speed and dimensions on movement time. Experimental Brain Research, 165(1), 28–36. → pages 10
CARL, J. R. AND GELLMAN, R. S. (1987). Human smooth pursuit: stimulus-dependent responses. Journal of Neurophysiology, 57(5), 1446–1463. → pages 3
CARPENTER, R. H. S. (1988). Movements of the Eyes. London: Pion. → pages 2, 3
CLARK, J. F. AND ELLIS, J. K. AND BENCH, J. AND KHOURY, J. AND GRAMAN, P. (2012). High-performance vision training improves batting statistics for University of Cincinnati baseball players. PLoS ONE, 7(1), e29109. → pages 64
COLLEWIJN, H. (1969). Optokinetic eye movements in the rabbit: Input-output relations. Vision Research, 9(1), 117–132. → pages 6
CRAPSE, T. B. AND SOMMER, M. A. (2008). Corollary discharge circuits in the primate brain. Current Opinion in Neurobiology, 18(6), 552–557. → pages 4
DE BROUWER, S. AND YUKSEL, D. AND BLOHM, G. AND MISSAL, M.
AND LEFÈVRE, P. (2002). What triggers catch-up saccades during visual tracking? Journal of Neurophysiology, 87(3), 1646–1650. → pages 3
DE LUSSANET, M. H. AND SMEETS, J. B. J. AND BRENNER, E. (2004). The quantitative use of velocity information in fast interception. Experimental Brain Research, 157(2), 181–196. → pages 8
DELLE MONACHE, S. AND LACQUANITI, F. AND BOSCO, G. (2014). Eye movements and manual interception of ballistic trajectories: effects of law of motion perturbations and occlusions. Experimental Brain Research, 233(2), 359–374. → pages 10, 11
DEVEAU, J. AND OZER, D. J. AND SEITZ, A. R. (2014). Improved vision and on-field performance in baseball through perceptual learning. Current Biology, 24(4), R146–7. → pages 64
DODGE, R. (1903). Five types of eye movement in the horizontal meridian plane of the field of regard. American Journal of Physiology, 8, 307–329. → pages 2
DÜRSTELER, M. R. AND WURTZ, R. H. (1988). Pursuit and optokinetic deficits following chemical lesions of cortical areas MT and MST. Journal of Neurophysiology, 60(3), 940–965. → pages 6
EGGERT, T. AND RIVAS, F. AND STRAUBE, A. (2005). Predictive strategies in interception tasks: differences between eye and hand movements. Experimental Brain Research, 160(4), 433–49. → pages 10
FAUBERT, J. AND SIDEBOTTOM, L. (2012). Perceptual-cognitive training of athletes. Journal of Clinical Sport Psychology, 6, 85–102. → pages 64
FITTS, P. M. AND PETERSON, J. R. (1964). Information capacity of discrete motor responses. Journal of Experimental Psychology, 67, 103–112. → pages 9
FORESEE, F. D. AND HAGAN, M. T. (1997). Gauss-Newton approximation to Bayesian learning. Proceedings of International Conference on Neural Networks (ICNN'97), 3. → pages 28
FRANZ, V. H. AND GEGENFURTNER, K. R. AND BÜLTHOFF, H. H. AND FAHLE, M. (2000). Grasping visual illusions: no evidence for a dissociation between perception and action.
Psychological Science, 11(1), 20–25. → pages 7
GAUTHIER, G. M. AND VERCHER, J.-L. AND IVALDI, F. M. AND MARCHETTI, E. (1988). Oculo-manual tracking of visual targets: control learning, coordination control and coordination model. Experimental Brain Research, (73), 127–137. → pages 11
GIBSON, J. J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin. → pages 8
GOODALE, M. A. AND MILNER, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–5. → pages 7
HASTIE, T. AND TIBSHIRANI, R. AND FRIEDMAN, J. (2008). The Elements of Statistical Learning. Stanford, CA: Springer, 2nd edition. → pages x, 27
HEINEN, S. J. AND KELLER, E. L. (2004). Smooth pursuit eye movements: Recent advances. Cambridge, MA: MIT Press, vol. 2 edition. → pages 2
ILG, U. J. (1997). Slow eye movements. Progress in Neurobiology, 53(3), 293–329. → pages 3, 6
ILG, U. J. AND THIER, P. (2008). The neural basis of smooth pursuit eye movements in the rhesus monkey brain. Brain and Cognition, 68(3), 229–240. → pages 6
INTERNATIONAL CIVIL AVIATION ORGANIZATION, MONTREAL, CANADA AND LANGLEY AERONAUTICAL LABORATORY, LANGLEY FIELD, VA., U.S.A. (1954). Manual of the ICAO Standard Atmosphere. Technical report. → pages 17
ISSEN, L. A. AND KNILL, D. C. (2012). Decoupling eye and hand movement control: Visual short-term memory influences reach planning more than saccade planning. 12(2012), 1–13. → pages 10
JAMES, G. AND WITTEN, D. AND HASTIE, T. AND TIBSHIRANI, R. (2013). An Introduction to Statistical Learning. New York: Springer. → pages 23, 26
JOHANSSON, R. S. AND WESTLING, G. AND BÄCKSTRÖM, A. AND FLANAGAN, J. R. (2001). Eye-hand coordination in object manipulation. The Journal of Neuroscience, 21(17), 6917–6932. → pages 11
KOKEN, P. W. AND ERKELENS, C. J. (1992). Influences of hand movements on eye movements in tracking tasks in man.
Experimental Brain Research, (88), 657–664. → pages 11, 64
KÖRDING, K. P. AND WOLPERT, D. M. (2004). Bayesian integration in sensorimotor learning. Nature, 427(January), 244–247. → pages 63
KOWLER, E. (1989). Cognitive expectations, not habits, control anticipatory smooth oculomotor pursuit. Vision Research, 29(9), 1049–1057. → pages 4
KRAUZLIS, R. J. (2004). Recasting the smooth pursuit eye movement system. Journal of Neurophysiology, 91(2), 591–603. → pages ix, 2, 5, 6
KRAUZLIS, R. J. (2005). The control of voluntary eye movements: new perspectives. The Neuroscientist, 11(2), 124–137. → pages 6
KURSA, M. B. AND RUDNICKI, W. R. (2010). Feature Selection with the Boruta Package. Journal of Statistical Software, 36(11), 1–13. → pages 22
LAND, M. F. AND FURNEAUX, S. (1997). The knowledge base of the oculomotor system. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 352(1358), 1231–1239. → pages 10
LAND, M. F. AND MCLEOD, P. (2000). From eye movements to actions: how batsmen hit the ball. Nature Neuroscience, 3(12), 1340–5. → pages 9
LEE, D. N. (1980). Visuo-motor coordination in space-time. In G. E. Stelmach & J. Requin (Eds.), Tutorials in Motor Behavior (pp. 281–295). Amsterdam: North-Holland. → pages 8
LEE, D. N. AND YOUNG, D. S. AND REDDISH, P. E. AND LOUGH, S. AND CLAYTON, T. M. (1983). Visual timing in hitting an accelerating ball. The Quarterly Journal of Experimental Psychology A, 35(A), 333–346. → pages 8, 9
LEIGH, R. J. AND ZEE, D. S. (1999). The Neurology of Eye Movements. New York: Oxford University Press, 3rd edition. → pages 7
LEIGH, R. J. AND KENNARD, C. (2004). Using saccades as a research tool in the clinical neurosciences. Brain, 127(3), 460–477. → pages 2
LIAW, A. AND WIENER, M. (2002). Classification and Regression by randomForest. R News, 2(December), 18–22. → pages 22
LISBERGER, S. G. AND MORRIS, E. J. AND TYCHSEN, L. (1987).
Visual motion processing and sensory-motor integration for smooth pursuit eye movements. Annual Review of Neuroscience, 10, 97–129. → pages 3, 4, 5
LISBERGER, S. G. AND MOVSHON, J. A. (1999). Visual motion analysis for pursuit eye movements in area MT of macaque monkeys. The Journal of Neuroscience, 19(6), 2224–2246. → pages 6
MACKAY, D. J. C. (1992). A Practical Bayesian Framework for Backpropagation Networks. Neural Computation, 4(3), 448–472. → pages 28
MCKINNEY, T. AND CHAJKA, K. AND HAYHOE, M. (2010). Pro-active gaze control in squash. Journal of Vision, 8(6), 111–111. → pages 10
MEYER, C. H. AND LASKER, A. G. AND ROBINSON, D. A. (1985). The upper limit of human smooth pursuit velocity. Vision Research, 25(4), 561–563. → pages 3
MISHKIN, M. AND UNGERLEIDER, L. G. (1982). Contribution of striate inputs to the visuospatial functions of parieto-preoccipital cortex in monkeys. Behavioural Brain Research, 6(1), 57–77. → pages 7
MROTEK, L. A. (2013). Following and intercepting scribbles: interactions between eye and hand control. Experimental Brain Research, 227(2), 161–74. → pages 10, 64
MROTEK, L. A. AND SOECHTING, J. F. (2007a). Predicting curvilinear target motion through an occlusion. Experimental Brain Research, 178(1), 99–114. → pages 60
MROTEK, L. A. AND SOECHTING, J. F. (2007b). Target interception: hand-eye coordination and strategies. The Journal of Neuroscience, 27(27), 7297–309. → pages 10, 11, 64
NASA GLENN RESEARCH CENTER (2012). Aerodynamics of Baseball. → pages 17
NATHAN, A. M. (2008). The effect of spin on the flight of a baseball. American Journal of Physics, 76(119), 23–28. → pages ix, 15
NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY (2008). The International System of Units (SI). Technical report, US Department of Commerce. → pages 17
NEWELL, K. M. AND HOSHIZAKI, L. E. F. AND CARLTON, M. J. AND HALBERT, J. A. (1979). Movement Time and Velocity as Determinants of Movement Timing Accuracy. Journal of Motor Behavior, 11(1), 49–58. → pages 9
NEWSOME, W. T. AND PARE, E. B. (1988).
A selective impairment of motion perception following lesions of the middle temporal visual area (MT). The Journal of Neuroscience, 8(6), 2201–2211. → pages 6
NGUYEN, D. H. AND WIDROW, B. (1990). Neural networks for self-learning control systems. → pages 28
ORBAN DE XIVRY, J.-J. AND COPPE, S. AND BLOHM, G. AND LEFÈVRE, P. (2013). Kalman filtering naturally accounts for visually guided and predictive smooth pursuit dynamics. The Journal of Neuroscience, 33(44), 17301–13. → pages 63
PALMER, S. E. (1999). Vision Science: Photons to Phenomenology. Cambridge: MIT Press. → pages 8
PETERS, M. A. (2012). See to Play: The Eyes of Elite Athletes. Bascom Hill Publishing Group. → pages 64
PORT, N. L. AND LEE, D. AND DASSONVILLE, P. AND GEORGOPOULOS, A. P. (1997). Manual interception of moving targets. I. Performance and movement initiation. Experimental Brain Research, 116, 406–420. → pages 10
POULTON, E. C. (1975). Range Effects in Experiments on People. American Journal of Psychology, 88(1), 3–32. → pages 60
QUINLAN, J. R. (1992). Learning with continuous classes. Machine Learning, 92, 343–348. → pages 25
RASHBASS, C. (1961). The relationship between saccadic and smooth tracking eye movements. The Journal of Physiology, 159, 326–338. → pages 3
RIPOLL, H. AND BARD, C. AND PAILLARD, J. (1986). Stabilization of head and eyes on target as a factor in successful basketball shooting. Human Movement Science, 5, 47–58. → pages 10
ROBINSON, D. A. (1965). The mechanics of human smooth pursuit eye movement. Journal of Physiology, 180, 569–591. → pages 3, 4
SAVELSBERGH, G. J. AND WHITING, H. T. AND BOOTSMA, R. J. (1991). Grasping tau. Journal of Experimental Psychology: Human Perception and Performance, 17(2), 315–322. → pages 9
SCHMIDT, R. A. (1969). Movement time as a determiner of timing accuracy. Journal of Experimental Psychology, 79(1, Pt. 1), 43–47. → pages 9
SCHWARTZ, J. D. AND LISBERGER, S. G. (1994).
Initial tracking conditions modulate the gain of visuo-motor transmission for smooth pursuit eye movements in monkeys. Visual Neuroscience, 11(3), 411–424. → pages 4
SOECHTING, J. F. AND FLANDERS, M. (2008). Extrapolation of visual motion for manual interception. Journal of Neurophysiology, 99(6), 2956–67. → pages 8, 59
SOECHTING, J. F. AND JUVELI, J. Z. AND RAO, H. M. (2009). Models for the extrapolation of target motion for manual interception. Journal of Neurophysiology, 102(3), 1491–1502. → pages 10
SPARKS, D. L. AND MAYS, L. E. (1990). Signal transformations required for the generation of saccadic eye movements. Annual Review of Neuroscience, 13, 309–336. → pages 2
SPERING, M. AND MONTAGNINI, A. (2011). Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review. Vision Research, 51(8), 836–52. → pages 7, 59
SPERING, M. AND SCHÜTZ, A. C. AND BRAUN, D. I. AND GEGENFURTNER, K. R. (2011). Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion. Journal of Neurophysiology, 105, 1756–1767. → pages 7
TITTERINGTON, D. M. (2004). Bayesian Methods for Neural Networks and Related Models. Statistical Science, 19(1), 128–139. → pages 26
TRESILIAN, J. R. (1999). Visually timed action: Time-out for 'tau'? Trends in Cognitive Sciences, 3(8), 301–310. → pages 9
TRESILIAN, R. AND OLIVER, J. AND CARROLL, J. (2003). Temporal precision of interceptive action: differential effects of target size and speed. Experimental Brain Research, 148(4), 425–438. → pages 9
TYCHSEN, L. AND LISBERGER, S. G. (1986). Visual motion processing for the initiation of smooth-pursuit eye movements in humans. Journal of Neurophysiology, 56(4), 953–968. → pages 4
VAN DONKELAAR, P. AND LEE, R. G. AND GELLMAN, R. S. (1992). Control strategies in directing the hand to moving targets. Experimental Brain Research, 91, 151–161. → pages 8
WANG, Y. AND WITTEN, I. H. (1997).
Inducing Model Trees for Continuous Classes. European Conference on Machine Learning (ECML), (pp. 1–10). → pages 25, 26
WATSON, G. S. AND LEADBETTER, M. R. (1964). Hazard analysis. I. Biometrika, 51, 175–184. → pages 20
WATTS, R. G. AND FERRER, R. (1987). The lateral force on a spinning sphere: Aerodynamics of a curveball. → pages 16, 17
WEBER, R. B. AND DAROFF, R. B. (1972). Corrective movements following refixation saccades: type and control system analysis. Vision Research, 12(3), 467–475. → pages 5
ZAGO, M. AND MCINTYRE, J. AND SENOT, P. AND LACQUANITI, F. (2009). Visuo-motor coordination and internal models for object interception. Experimental Brain Research, 192(4), 571–604. → pages 8

