An Investigation of Multi-modal Gaze-supported Zoom and Pan Interactions in Ultrasound Machines

by Yasmin Halwani
B.Sc., Qatar University, 2013

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)
October 2017
© Yasmin Halwani 2017

Abstract

We investigate the potential and the challenges of integrating eye gaze tracking support into the interface of ultrasound machines used for routine diagnostic scans by sonographers. In this thesis, we follow a user-centred approach by first conducting a field study to understand the context of the end user. As a starting point to a gaze-supported interface, we focus on the zoom functions of ultrasound machines. We study gaze-supported approaches for the often-used zoom function in ultrasound machines and present two alternatives, One-step Zoom (OZ) and Multi-step Zoom (MZ). A state-based analysis of the zoom functions in ultrasound machines is presented, followed by a state-based representation of the gaze-supported alternatives. The gaze-supported state representation extends the manual-based interaction by implicitly integrating gaze input into OZ and offering a gaze-supported alternative to moving the zoom box in MZ. Evaluations of the proposed interactions through a series of user studies, with seventeen non-sonographers and ten sonographers, suggest an increased cognitive demand and time on task compared to the conventional manual-based interaction. However, participants also reported an increased focus on main tasks using the gaze-supported alternative, which could benefit novice users. They also reported lowered physical interaction, as the gaze input replaces some functions of the manual input.

Lay Summary

This work describes designing an interface for ultrasound machines with an integrated eye gaze tracker. A reported 91% of sonographers experience work-related musculoskeletal disorders. By delegating some of the tasks performed with the manual inputs of the machine to the eye gaze, the amount of frequent physical repetitiveness needed to perform sonography tasks can be reduced. As a starting point, we target the zoom function in ultrasound machines and investigate approaches to perform zooming with a combination of eye gaze input and manual inputs. We present a field study to understand the context of our target users, followed by an analysis of the zoom functions in existing ultrasound machines. Our results from a user study performed with sonographers show that the reduction in physical demand, using eye gaze as an additional input, increases both the mental demand and the time on task.

Preface

I was the lead investigator of the work described in this thesis under the supervision of Dr. Fels and Dr. Salcudean.
My supervisors presented me with the idea of a gaze-supported interface for ultrasound machines; through field studies and further discussions, the project was narrowed down to investigating gaze-supported interactions for ultrasound machines in routine diagnostic ultrasound exams.

The work presented in this thesis has all been designed and implemented by myself at the Robotics and Control Laboratory (RCL) and the Human Communication Technologies Laboratory at the University of British Columbia. External tools include the Python wrapper for Ulterius [59], the ultrasound machine communication tool, developed by an earlier student at RCL, Samuel Tatasurya, and the EMDAT eye gaze analysis tool developed by the Intelligent User Interfaces group at UBC [37]. The controls box described in Chapter 4 was implemented by Leo Metcalf, a co-op student at RCL.

The work presented in Appendix D has been published in the Late-Breaking Work category at CHI 2017 [25] and was presented at the conference in Denver, Colorado, USA as a poster, together with the latest proposed interaction design described in Chapter 4. The same poster was later presented at the Arab Women in Computing conference 2017 held in Beirut, Lebanon.

The early stages of this work were presented as a poster that won third place in the student poster presentation category at the Qatar Foundation Annual Research Conference in Doha, Qatar in 2016 and was published in the Qatar Foundation Annual Research Conference Proceedings [26]. It also won the best student poster award at the annual HCI@UBC forum in April 2016.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
1 Introduction
  1.1 The Challenges of Sonography
  1.2 Pan and Zoom and Ultrasound Machines
  1.3 Contribution
2 Background and Related Work
  2.1 Eye Gaze Trackers as Input Devices
    2.1.1 An Overview of Eye Gaze Metrics and Eye Gaze Tracking Technology
    2.1.2 Eye Gaze-supported Interfaces
    2.1.3 Eye Gaze-supported Zooming
    2.1.4 Eye Gaze-supported Panning and Scrolling
  2.2 Ultrasound Machines: Applications and Target Users
    2.2.1 Machines Targeted for Routine Ultrasound Exams
    2.2.2 Machines Targeted for Point-of-care
    2.2.3 Machines Targeted to Aid Other Clinical Tasks
  2.3 Ultrasound Machine Interface Design Analysis
    2.3.1 Input Device Interaction
    2.3.2 Image Browser Interaction
    2.3.3 Magnification-related Functions
  2.4 Conclusion
3 Field Study and Observations
  3.1 Methodology
    3.1.1 Observations
    3.1.2 Survey
    3.1.3 Interviews
  3.2 Results
    3.2.1 Diagnostic Ultrasound Scan Routine
    3.2.2 Contexts of Attention
    3.2.3 Machine Functions and Features
    3.2.4 Work-related Injuries
    3.2.5 Ultrasound-guided Procedures
  3.3 Eye Gaze Tracking Integration
  3.4 Conclusion
4 Gaze-supported Interface Design
  4.1 Design Assumptions
  4.2 Input Device Interaction Concepts
  4.3 Image Browser Interaction Concepts
  4.4 Integrating Eye Gaze Input
    4.4.1 One-step Zoom
    4.4.2 Multi-step Zoom
    4.4.3 Gaze-based Panning
  4.5 Proposed Design
  4.6 Earlier Investigated Design Alternatives
    4.6.1 Alternative 1
    4.6.2 Alternative 2
  4.7 Gaze-supported Features: Implementation Details
    4.7.1 Filtering Gaze Data: Moving-average Filter With a Threshold
    4.7.2 Gaze-based Simultaneous Pan and Zoom
    4.7.3 Gaze-based Panning Based on Pan Areas
    4.7.4 Mechanism of Gaze-supported One-step Zoom
    4.7.5 Mechanism of Gaze-supported Multi-step Zoom
  4.8 Custom Hardware Interface Implementation
  4.9 Evaluation of the Presented Designs
  4.10 Conclusion
5 Context-free User Study: Interactive Game
  5.1 Goal
  5.2 Experiment Design
    5.2.1 Game Design
    5.2.2 Setup and Structure
    5.2.3 Apparatus
  5.3 Analysis Tools
    5.3.1 Background on the Qualitative Tests Used
  5.4 OZE Results
    5.4.1 Demographics
    5.4.2 Quantitative Evaluation
    5.4.3 Qualitative Evaluation
    5.4.4 Post-Experiment Discussions
    5.4.5 Discussion of Results
  5.5 MZE Results
    5.5.1 Demographics
    5.5.2 Quantitative Evaluation
    5.5.3 Qualitative Evaluation
    5.5.4 Post-Experiment Discussions
    5.5.5 Discussion of Results
  5.6 Conclusion
6 Context-focused User Study: Clinical Experiment
  6.1 Goal and Hypotheses
  6.2 Background on Study Tasks
  6.3 Experiment
    6.3.1 Setup and Structure
    6.3.2 Apparatus
  6.4 Analysis Tools
  6.5 Results
    6.5.1 Demographics
    6.5.2 Observed Gaze-supported Interaction Challenges
    6.5.3 Qualitative Results
    6.5.4 Quantitative Results
    6.5.5 Results from the Mixed Models Analysis of Variance
    6.5.6 Suggested Improvements for Other Ultrasound Machine Functions
    6.5.7 Discussion of Results
  6.6 Conclusion
7 Conclusions and Recommendations
  7.1 Conclusions
  7.2 Recommendations
  7.3 Contributions
Bibliography
A Pixel-angle Accuracy Conversion for Eye Gaze Tracking Applications
B Sonographers-Radiologists Survey
C General Survey Feedback from Sonographers
D First-iteration Clinical User Study
  D.1 Gaze-supported Interface Design
  D.2 Apparatus
  D.3 Procedure
  D.4 Results
    D.4.1 Time on Task
    D.4.2 Button Hit Rate
    D.4.3 Qualitative Feedback and Discussions
  D.5 Improvements for the Second Iteration
E Game User Study Script
  E.1 Participant Recruitment Email
  E.2 OZ Preparation Settings
  E.3 MZ Preparation Settings
  E.4 Before the Participant's Arrival
  E.5 After the Participant's Arrival
  E.6 Manual-based Interaction Session
  E.7 Break Session
  E.8 Gaze-supported Interaction Session
  E.9 Discussion Session
F Game Participants' Demographics Form
G Game Qualitative Evaluation Form
H CBD-CHD Ultrasound Scan Steps
I Clinical User Study Script
  I.1 Participant Recruitment Email
  I.2 Before the Participant's Arrival
  I.3 Introduction Session
  I.4 Phantom Session
  I.5 Patient Session
  I.6 Discussion Session
J Sonographers' Demographics Form
K Post-processing of the Collected Data from the Clinical User Study

List of Tables

3.1 Survey question 3.1.6 "Provide your rating of the following. Where 1 = Highly Disagree and 7 = Highly Agree."
3.2 The Most Common Efficient Ultrasound Machine Features as Listed by Survey Respondents
3.3 The Most Common Inefficient Ultrasound Machine Features as Listed by Survey Respondents
3.4 Surveyed Benefits of Semi-automated Systems in Ultrasound Machines (e.g. Scan Assistant)
3.5 Surveyed Drawbacks of Semi-automated Systems in Ultrasound Machines (e.g. Scan Assistant)
3.6 The Need for Sonographers to Adjust the Ultrasound Machine Parameters During an Interventional Procedure
3.7 Difficulty Communicating with an Assistant During Ultrasound-guided Procedures
3.8 The Preference for Hands-free Control of Ultrasound Machines in Ultrasound-guided Procedures
4.1 Ultrasound Image Magnification-related Functions and Their Associated Devices
4.2 Image Magnification-related Functions and Their Task Dimensionality
5.1 The Game User Study Procedure
5.2 OZE Participants' Session Order
5.3 MZE Participants' Session Order
5.4 Mean Time Limit for Gaze-supported Recorded Sessions for MZE
5.5 Use of Gaze Feature During Recorded Sessions of MZE
5.6 TLX Scores for Each Input Modality
6.1 Clinical User Study Phantom Targets and Instructed Techniques of Interaction
6.2 The Clinical User Study Procedure
6.3 Encountered Challenges During the Clinical User Study Trials
6.4 Mean Time on Task Based on Input Method and Target
D.1 The Dedicated Buttons' Functionalities Based on the System Used
D.2 The averages of participants' task results for the number of buttons hit, completion time and input rate for each of the systems tested
K.1 Summary of Clinical User Study Data Post-processing

List of Figures

1.1 The Setting of a Diagnostic Sonographer's Environment and the Sonographer's Three Contexts of Attention: the Ultrasound Image, the Patient and the Machine Controls
2.1 A variety of ultrasound machine interface designs are available for a variety of target users and applications
3.1 Survey Statistics and Number of Responses Over Time
3.2 Survey Respondents' Years of Experience in Sonography
3.3 Types of Ultrasound Scans Survey Respondents Perform
3.4 Survey Respondents' Number of Hours of Work per Week
3.5 An Ultrasound Room's Layout: A second screen is placed at a viewing distance from the patient for OB scans
3.6 Levels of Agreement on the Survey Statements Listed in Table 3.1
3.7 Answers to the Survey Question "Which buttons, functions or features do you use most frequently?" (at least once per scan in >90% of all scans)
3.8 Answers to the Survey Question "Which buttons, functions or features do you use most frequently?" (at least once per scan in 40%-90% of all scans)
3.9 Answers to the Survey Question "Which buttons, functions or features do you use most frequently?" (at least once per scan in <40% of all scans)
3.10 A Philips Ultrasound Machine Interface
3.11 Types of Ultrasound Scans that Use Scan Assistant
3.12 Prevalence of Work-related Injuries Among Surveyed Sonographers
3.13 Severity of Work-related Injuries Among Surveyed Sonographers
3.14 Causes of Work-related Injuries Among Surveyed Sonographers
4.1 The Manual Controls Interface of the GE Logic E9 Ultrasound Machine
4.2 Three-state Diagram for One-step (Low-resolution) Zoom in Ultrasound Machines
4.3 Three-state Diagram for High-resolution Zoom in Ultrasound Machines
4.4 Browser Taxonomy Presentation Aspects [47]: A check mark is added next to the design choices for ultrasound machine image browsers
4.5 Browser Taxonomy Operation Aspects [47]: The design choices for the different alternatives of ultrasound machine image browser magnification-related functions are highlighted with different check marks
4.6 The Input Layout Design of the Traditional (Manual-based) Ultrasound Machine Interface with the Mapped Functions per State for Zoom Functions
4.7 A State Diagram of Zoom Functions in Ultrasound Machines: MZ includes all three states; OZ includes only the Full-scale and Zoom states
4.8 An Illustrated Interaction of One-step Zoom, Multi-step Zoom and Panning of the Traditional (Manual-based) Ultrasound Machine
4.9 The Interface Layout of the Proposed Design Alternative: the active input elements are the trackball, the toggle button, the gaze button, and the clickable zoom knob
4.10 The Interaction State Diagram of the Proposed Design Alternative: the same as the state diagram in Figure 4.7, with four added gaze-supported interactions taking the Point of Gaze (POG) as an input. In the Zoom states, eye gaze input is implicitly integrated. In the Pre-zoom states, eye gaze input is explicitly used to move the zoom box
4.11 An Illustrated Interaction of One-step Zoom, Multi-step Zoom and Panning of the Proposed Design
4.12 The Interface Layout of Design Alternative 1: the active input elements are the trackball, push button 1, push button 2, push button 3, and push button 4
4.13 The Interaction State Diagram of Design Alternative 1: the total number of states is reduced by omitting the sub-states of Zoom and Pre-zoom
4.14 An Illustrated Interaction of One-step Zoom, Multi-step Zoom, and Panning of Design Alternative 1
4.15 The Interface Layout of Design Alternative 2: the active input elements are the trackball, the gaze button, and the clickable zoom knob
4.16 The Interaction State Diagram of Design Alternative 2: the combination of inputs is reduced compared to Alternative 1, displayed in Figure 4.13, for some functions
4.17 An Illustrated Interaction of One-step Zoom, Multi-step Zoom, and Panning in Design Alternative 2
4.18 The Gaze Data Filtering Algorithm Used
4.19 Gaze-supported Interaction Pan Areas Located at the Edges of the Image (8 Areas)
4.20 The Custom-made Manual Controls Box Used for Both User Studies Described in Chapters 5 and 6
5.1 The Game Software Interface
5.2 When zoomed into an alien, the context view shows the full-scale view with a box around the location of the zoom
5.3 When the trackball is in reposition mode, the "reposition" icon is shown at the bottom right of the screen
5.4 When the trackball is in resize mode, the "resize" icon is shown at the bottom right of the screen
5.5 OZE Alien Targets (Top) and MZE Vertical and Horizontal Alien Targets (Bottom)
5.6 A graph showing the relationship between the generated targets' size and distance from centre during the One-step Zoom Experiment
5.7 The Setup (Left) and Layout (Right) of the Context-free Game User Study
5.8 Demographics of OZE Participants
5.9 Number of Training Levels per Participant per Session for OZE
5.10 Number of Failed Trials (Timeouts) per Training Session Number for OZE
5.11 Number of Successful Trials per Training Session Number for OZE
5.12 Trials per Training Session Input Modality for OZE
5.13 Participants' Learning Curve During OZE: The number marked at each data point represents the number of participants that experienced this level
5.14 Time Limits in the Recorded Sessions of OZE
5.15 Timeouts in the Recorded Sessions of OZE
5.16 Box Plots of the Qualitative Evaluation Results of OZE
5.17 Sources of Task Load for OZE per Input Modality
5.18 The Reported Sources of Frustration During OZE
5.19 Number of Training Levels per Session for MZE
5.20 Number of Times the Gaze Feature was Used During the Gaze-supported Session of MZE
5.21 Average Training Sessions' Learning Curve per Participant for MZE
5.22 Average Training Sessions' Learning Curve for MZE
5.23 Time Limits in Recorded Sessions for MZE
5.24 Trials Based on Target Size and Distance From Centre for MZE
5.25 Sources of Task Load for MZE
5.26 The Reported Sources of Frustration During MZE
6.1 An ultrasound image showing the location of the CBD and CHD
6.2 The Clinical User Study Room Setup: the setup in the lab closely matches the setup of an ultrasound room in a hospital
6.3 Phantom Training Targets
6.4 Phantom Recorded Targets
6.5 The Clinical User Study Hardware Architecture
6.6 Participant Sonographers' Demographics
6.7 The Observed Issues in Gaze Interaction During the Clinical User Study
6.8 Participants' Behaviour During the Phantom Session, Gaze-supported Multi-step Zoom Trials
6.9 Participants' Choice of Input Method and Technique During the Patient Session
6.10 Time on Task Statistics
6.11 The Interaction Effect Between Input Method x Target on Eye Movement Velocity: with higher zoom levels, using the gaze-supported interaction slows down the eye movement velocity
6.12 Main Effect on Mean Fixation Duration by Input Method
A.1 Pixel-angle Conversion Parameters
D.1 The Ultrasound Machine and System Components Used for the Implementation of Our Systems and User Study (Left). The Control Keys Panel of the Ultrasound Machine Used in Our User Study (Right)
D.2 The Quality Assurance Phantom Used in the User Study and the Corresponding Ultrasound Images of the First, Second and Third Targets

List of Abbreviations

API     Application Program Interface
BMI     Body-Mass Index
CBD     Common Bile Duct
CHD     Common Hepatic Duct
CW      Clockwise
CCW     Counter-clockwise
DICOM   Digital Imaging and Communication in Medicine
HCI     Human-Computer Interaction
MI      Mechanical Index
MZ      Multi-step Zoom
MZE     Multi-step Zoom Experiment
OOR     Out of Range
OZ      One-step Zoom
OZE     One-step Zoom Experiment
PACS    Picture Archiving and Communication System
PB      Push Button
POG     Point of Gaze
ROI     Region of Interest
SUS     System Usability Scale
TCP/IP  Transmission Control Protocol and the Internet Protocol
TGC     Time-Gain Compensation
US      Ultrasound
UX      User Experience
WRMSD   Work-related Musculoskeletal Disorders

Acknowledgements

I would like to thank Dr. Sid Fels and Dr. Tim Salcudean for their supervision and continuous support during my master's degree. It has been an enriching experience to receive feedback from two professors to direct and fine-tune my work.
The time and resources they dedicated, despite their busy schedules, to discussing with me all the challenges encountered in this thesis are very much appreciated.

Also, I would like to thank Vickie Lessoway for her enormous support of this project, including her help in contacting hospitals, contacting the British Columbia Ultrasonographers' Society (BCUS), recruiting sonographers for user studies, testing a multitude of prototypes and advising on the design of two user studies with sonographers. I extend my thanks as well to BC Women's Hospital and Richmond Hospital for facilitating my observations at their medical imaging departments and to BCUS for distributing the surveys and announcements of participant recruitment for the user studies.

I would like to thank the Qatar Research Leadership Program (QRLP), a member of Qatar Foundation, for their sponsorship of my graduate studies and for their financial support of all other academic-related costs, including my travel to conferences and annual QRLP meetings. Special thanks for extending my funding to cover the extra time needed to finish my thesis work.

I would like to thank all the members of both of my labs, the Robotics and Control Laboratory and the Human Communication Technologies Laboratory, for the great support and feedback on my work and for always being available and interested to test new prototypes! Special thanks to Irene Tong for being a great eye gaze tracking buddy. Also, many thanks to Leo Metcalf for his help in designing and implementing the ultrasound controls box and to Dereck Toker from the Laboratory for Computational Intelligence for his help in eye gaze data analysis towards the end of this thesis.

I would like to thank my influential professors from Computer Science, Dr. Joanna McGrenere and Dr. Karon MacLean, for their inspiring courses in Human-Computer Interaction and User Interface Design. Their engaging lectures and coursework involving interesting and creative projects contributed to shaping my interest in the field and to building a strong foundation in Human-Computer Interaction.

Endless thanks go to my community and friends at St. John's College for being the best source of moral support I could ask for and for forming my home away from home.

My deepest thanks go to my loving parents, Aladdin Halwani and Ahlam Karzon, and my supportive brothers, Waseem and Hussam, for their patience and prayers and for their sweet daily calls and messages that kept me going throughout the difficult times of my study abroad.

Chapter 1
Introduction

Among the tasks involved in image editing-related applications, zooming and panning is one of the most basic and important tasks performed by a user. It belongs to a larger set of functions that require localizing a certain point of interest before performing any further image adjustments. Traditionally, this localization is achieved through variations of mouse and keyboard input, requiring the user to move a pointer on the graphical user interface to the area of interest within the image. Recent advances in human-computer interaction research have investigated different input modalities to interact with image editing functions that require localization of an area of interest [56], such as, but not limited to, hand gestures [13], foot pedals [39], or a multi-modal combination of some of these input modalities followed by manual buttons and switches for position confirmation [55]. Eye gaze tracking is one of the recently explored input modalities.
Although there are no reported results showing improved effectiveness of gaze-based interaction in comparison to conventional mouse-based input in terms of fine accuracy, many studies show that multi-modal gaze-based interaction has potential in terms of enhanced speed and user satisfaction [54], provided the interface is designed carefully and the user is sufficiently familiar with gaze-based interfaces. Based on early investigations of eye tracking interfaces, Zhai et al. argue that, assuming eye gaze can be tracked and effectively used, no other input method can act as quickly as the eye gaze [68].

Eye gaze-based interaction has been investigated both as a stand-alone input [51] and as a multi-modal approach [54] to achieve image-related tasks. The application space for such interaction spans areas ranging from graphic design to medical image inspection. Nevertheless, the majority of those studies investigate the interaction from an abstract point of view and base the interaction on general image editing or image inspection tasks, which are assumed to fit all application areas with the same performance level.

In our work, we apply these interactions to the case of ultrasound image acquisition and inspection in diagnostic sonography. As detailed in the next section, differences exist between our work and previous work on multi-modal gaze-supported zooming, such as the types of images used and the added bimanual interaction, which may either amplify or degrade the effectiveness of multi-modal gaze-based interfaces within the context of sonography.

1.1 The Challenges of Sonography

A sonographer is a medical professional who possesses an in-depth understanding of anatomy, pathophysiology and the principles of ultrasound physics to produce medical ultrasound images. A sonographer also communicates with patients while scanning to explain the procedure of an ultrasound exam and the relation between the symptoms and the sonographic image. Prior to an exam, a sonographer has a record of the patient's medical information to be correlated with the resulting ultrasound images and discussed with the physician [24].

Sonographers spend hours of work daily acquiring images and modifying their parameters, to be later sent to a physician for further review and diagnosis. As every minute counts toward the throughput of ultrasound exams per day and the quality of health care, ultrasound machine interfaces are designed with efficiency in mind to make access to ultrasound functions as fast as possible.

Figure 1.1: The Setting of a Diagnostic Sonographer's Environment and the Sonographer's Three Contexts of Attention: the Ultrasound Image, the Patient and the Machine Controls.

In addition to efficiency, Zhai et al. [68] argue that enhancing interfaces with eye trackers has the potential to also reduce repetitive stress injuries for computer users.
Figure 1.1 shows the typical environment of a sonographer using an ultrasound machine, where an intricate bimanual interaction takes place: the dominant hand maneuvers the probe so that the image has the area of interest on the screen while exerting pressure for better image quality, and the other hand manages the details of the acquired image by repetitively reaching for and pressing buttons on the machine controls to change the image parameters and other ultrasound machine-related functions through various knobs, buttons, sliders and, occasionally, soft buttons on a touch screen.

A study on the prevalence of musculoskeletal disorders among British Columbia sonographers [50] found that 91% of sonographers experience occupational injuries and disorders due to awkward postures, forceful actions and repetitiveness. Furthermore, a survey conducted by our research team revealed that nearly half of the sonographer respondents (N = 48) reported repetitive movements due to menu selection and physical key interaction as a major cause of their occupational musculoskeletal injuries.

1.2 Pan and Zoom and Ultrasound Machines

Despite the common drawbacks of gaze-based interaction, such as the Midas-touch problem (the inability of an interface to distinguish between the user's intention of simply looking at an interface element and activating that element's function), noisy data and the consequent inaccuracies within small areas of the screen, the potential advantages of introducing multi-modal gaze-supported interaction are worth exploring. Reducing repetitive keystrokes and achieving more efficient performance of imaging tasks serve as our primary motivation to introduce a multi-modal gaze-supported interface to ultrasound machine users.

We select the zoom and pan function as our starting point to a multi-modal gaze-supported interface as it requires a 2-dimensional input, which maps directly to the type of output an eye tracker device provides: a 2-dimensional point of gaze. Thus, integration is straightforward. In our work, we adopt and modify previously investigated multi-modal gaze-based approaches for panning and zooming to best fit the workflow of a sonographer performing a clinical ultrasound exam. This particular user scenario is unique due to two main factors:

1. Types of images: Previous studies primarily focused on images with a substantial amount of content, such as maps [2][54], chip designs [6], and multimedia retrieval systems [56], where there are multiple targets to be acquired and zoomed into across the overall image. In the case of medical ultrasound images, sonographers assess the acquired image from a holistic perspective and, in most cases, only one object of interest is present at a time, such as a gallbladder surrounded by other organs, a tumour surrounded by tissue, or a fetal heart surrounded by fetal organs. In this application area, the purpose of performing pan and zoom is to obtain a higher-resolution image of the area of interest, and not to locate a particular target in a dense image, as this step is already achieved by moving the probe to the required position over the patient's body.
2. Bimanual interaction: The bimanual interaction taking place while the user interacts with a multi-modal gaze-based interface has the potential to result in a higher cognitive load, which has not been investigated in the simpler scenarios explored in earlier work in this area.

1.3 Contribution

In this work, we explore pan and zoom, particular ultrasound machine functions that are also common in other image editing-related fields. The specific contributions of this work include:

1. Results from a field study identifying the challenges of sonography and the potential advantages and risks of integrating an eye gaze tracker.

2. An analysis of the magnification-related functions in ultrasound machines: types, frequency of use and applications.

3. A state-based analysis of the manual-based and the proposed gaze-supported zoom interactions for ultrasound machines, resulting in a refinement of earlier investigated interactions in the field.

4. Through user studies, we established that: (a) gaze-supported zoom increases task time and adds extra mental effort compared to the manual-based alternatives, and (b) the gaze-supported OZ technique is faster than the gaze-supported MZ technique on the pan/zoom tasks.

Through the experiments, we measured metrics related to time on task and cognitive load. The results show that, depending on the target and the zoom technique used, gaze-supported interaction differs from traditional manual-based interaction in terms of time on task. In addition, one of the gaze-supported zoom alternatives explored, One-step Zoom, performed better in terms of time on task than the other gaze-supported zoom alternative explored, Multi-step Zoom.

By measuring the cognitive load through eye gaze metrics and qualitative evaluation, we find that, although the gaze-supported alternatives introduced are designed to reduce the physical demand of the required tasks, they show signs of an increased mental demand. In the case of this design, this is due to two main reasons: the novelty of the interface, which requires a higher effort to learn, and the nature of the gaze-supported interaction, which requires the user to intentionally gaze at the target area to be magnified while performing the zoom function.

In this thesis, we introduce our work in Chapter 1 and explore earlier work and present the necessary background in Chapter 2. In Chapter 3, we present our field study results, including interviews, observations, and a survey distributed to sonographers. In Chapter 4, we present the proposed gaze-supported zoom interaction design. We test the proposed design in two user studies: the first is a context-free game-based user study presented in Chapter 5 and the second is a context-focused clinical user study presented in Chapter 6. We present our final conclusions and recommendations for future work in Chapter 7. The appendices include supplementary material related to the content presented throughout this thesis.

Chapter 2
Background and Related Work

In this chapter, we introduce the necessary background needed for understanding our work in the following chapters.
We discuss the research approach we follow, eye gaze tracking technology and relevant work in the field, and types of ultrasound machines, and we analyze the input devices and software design of the target ultrasound functions in the target class of ultrasound machines.

2.1 Eye Gaze Trackers as Input Devices

2.1.1 An Overview of Eye Gaze Metrics and Eye Gaze Tracking Technology

Eye gaze tracking is the technology of tracking eye-related measures, including the position of the point of gaze, the size of the pupil, and other characteristics related to eye movement. There is a multitude of eye tracker types that perform the same tasks with different levels of accuracy and obtrusion. Vision-based tracking has grown to be the most common method of eye tracking in recent years due to its unobtrusiveness and ease of integration with existing user interfaces. By projecting infrared light in the direction of the user's eyes and recording the reflection with an infrared sensor, the eye tracker compares the corneal reflection to the pupil position to determine the relative target at which the user's point of gaze is positioned.

In order to make sense of the eye tracker's data, and consequently use it as an input channel in user interfaces, one must first understand the basics of human eye behaviour. Human vision spans about 180 degrees, with the highest visual acuity around the fovea, which spans only about 2 degrees. In the context of eye tracking applications, two eye movements are of most importance: saccades and fixations. In the absence of a moving stimulus, the eyes jump rapidly from one area to another about three to four times per second. This type of sudden ballistic eye movement is called a saccade, which is very jittery in nature. The other type of eye movement is the fixation, which occurs when the eye is focusing on one particular spot. On average, fixations are still jittery and last between one-tenth and one-half of a second [8].

Current technology is able to capture the foveal vision's saccades and fixations with varying accuracies depending on the hardware specifications and software algorithms used for detection. Bojko [8] lists the hardware specifications that control the quality of the output of the eye tracker: sampling rate, accuracy, precision, head box size, monocular/binocular tracking and pupil illumination method. Currently available trackers perform at a sampling rate of 25 Hz to 2000 Hz, depending on the price range. Although basic eye trackers with low sampling rates sufficiently identify areas of fixation, higher sampling rates directly impact the accuracy of measuring the lengths of those fixations. The accuracy of the eye tracker is the deviation between the recorded point of gaze and the actual point of gaze of the user [8]. If calibrated well, eye trackers are able to record the user's point of gaze with an accuracy of 0.5 to 1 degrees of visual angle. The exact number of pixels corresponding to a specific degree of visual angle depends on several factors: the distance between the user and the eye tracker, the resolution of the screen and the dimensions of the monitor, as detailed in Appendix A. Precision measures the tracker's ability to reproduce successive identical points of gaze. Head box size defines the flexible box area (width, depth and height) around the user's head within which the user is able to move freely without leaving the eye tracker's field of view. Currently available eye trackers are mostly binocular, which means they track both eyes of the user.
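As a rough illustration of this pixel-angle relationship (detailed in Appendix A), the short sketch below converts an angular tracking accuracy into an on-screen distance in pixels from the viewing distance and the monitor geometry. The numeric values are illustrative assumptions, not the setup used in this thesis.

    import math

    def pixels_per_degree(viewing_distance_mm, screen_width_mm, screen_width_px, angle_deg=1.0):
        """Approximate on-screen extent (in pixels) of a given visual angle."""
        # Physical size subtended by the angle at the given viewing distance.
        size_mm = 2 * viewing_distance_mm * math.tan(math.radians(angle_deg) / 2)
        # Convert millimetres to pixels using the monitor's horizontal pixel density.
        return size_mm * screen_width_px / screen_width_mm

    # Illustrative values: a user about 650 mm from a 510 mm wide, 1920 px wide monitor.
    error_px = pixels_per_degree(650, 510, 1920, angle_deg=1.0)
    print(f"A 1-degree tracking error spans roughly {error_px:.0f} px on this screen.")

Under these assumed values, an accuracy of 0.5 to 1 degree corresponds to roughly 20 to 40 pixels, which is one reason small on-screen targets are difficult to select reliably with gaze alone.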
Although data from one eye can be sufficient, averaging the data from both eyes yields higher accuracy and precision and can also provide gaze depth. There are two methods by which pupils are detected, both of which serve to create contrast between the eye and the pupil to allow for better detection: the bright-pupil and dark-pupil methods. Each method has its merits and is effective under different circumstances, depending on the environment's brightness and the physical characteristics of the user's eyes. Most eye trackers switch automatically between methods to optimize detection in varying conditions.

In addition to the hardware capabilities of eye trackers, many commercial eye trackers are provided with developer APIs offering methods that perform most of the needed post-processing of eye gaze data, including filtering for saccades and fixations, extraction of eye depth (3D) data and varying calibration options. Such products are ideal for human-computer interaction-centred research, as they allow the researcher to focus time and effort on the usability of eye tracking rather than on the technical computer vision and data processing aspects.

Due to differences between users in the way they look and behave, individual calibration of each user is required prior to using an eye tracker [8]. A calibration procedure typically includes a set of targets that the user has to sequentially fixate his or her eye gaze on in the order required by the system. Internally, the eye tracker maps each of these targets to the appearance of the eye and interpolates over the rest of the visual field to interpret the rest of the eye movements. Therefore, the more calibration targets there are, the more accurate the eye tracking. Most eye trackers allow the developer or researcher to set the number and position of calibration targets. The conditions under which the user calibrated should remain the same for the rest of the use of the eye tracker; this includes the level of ambient lighting in the room and the relative position between the user and the eye tracker. If any of these conditions change, a re-calibration is required. Thus, the amount of re-calibration needed depends on individual user behaviour.

In addition to technical aspects, human aspects also contribute to the quality of eye tracking. As mentioned, accurate calibration is a primary prerequisite to accurate eye tracking. Additionally, the type of eyewear a user has and the user's age influence the tracker's accuracy.

2.1.2 Eye Gaze-supported Interfaces

Eye gaze trackers can be used either as active input devices or as passive monitoring devices. The eye gaze tracker can be used passively to evaluate interfaces by identifying the areas of attention the users' eye gaze fixates on while interacting with a newly developed interface being evaluated by designers. In terms of active input, the most common application area is developing input mechanisms for users with disabilities, especially for eye typing [27]. However, more applications are starting to emerge, especially when eye gaze is paired with other input mechanisms, such as in gaming and entertainment [57] and, recently, in mobile devices [9]. Another area where eye gaze tracking-supported interaction has been investigated is in facilitating software development environments, such as the work presented in [22], which presents an eye gaze-supported system aimed at enhancing source code navigation by enabling the user to activate actions by looking at certain triggers.
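Gaze triggers of this kind are commonly realized with a dwell-time threshold, so that an action fires only after the gaze has stayed inside the trigger's region for a minimum duration, which also limits accidental activations. The sketch below illustrates the general idea only; the region, threshold and sample format are assumptions for illustration, not a description of the system in [22].

    def dwell_trigger(samples, region, dwell_ms=500):
        """Yield the timestamps at which a dwell-based gaze trigger fires.

        `samples` is an iterable of (timestamp_ms, x, y) gaze samples and
        `region` is (x_min, y_min, x_max, y_max) in screen pixels.
        """
        x_min, y_min, x_max, y_max = region
        enter_time = None
        for t, x, y in samples:
            inside = x_min <= x <= x_max and y_min <= y <= y_max
            if not inside:
                enter_time = None          # leaving the region cancels the dwell
            elif enter_time is None:
                enter_time = t             # gaze just entered the region
            elif t - enter_time >= dwell_ms:
                yield t                    # dwell threshold reached: fire once
                enter_time = None          # reset so the trigger does not repeat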
Another area is the combination of eye gaze input with other forms of modality, such as the work presented by Chatterjee et al. [12] investigating multi-modal gaze and gesture recognition.

Eye gaze tracking has also been explored in the area of medical device interfaces, such as the work presented by Tong et al. [61], which investigates gaze-based interaction in the field of robotic surgery. Another application, presented by Tan et al. [58], introduces a dynamic control-to-display gain mouse movement method controlled by eye gaze input to facilitate target prediction. In their work, mouse movement is reduced by up to 15% for medical image analysis tasks.

In addition to position-related measurements, eye-tracking technology has the ability to measure other eye parameters, such as pupil dilation, which reflects the level of cognitive load the user is experiencing. Studies such as [48] investigate several psychophysiological measurements for task load analysis and find pupil dilation to be one of the most responsive measures that can reveal cognitive load in real time. Therefore, eye gaze tracking can also be used to design adaptive user interfaces in which the user's cognitive load is continuously assessed.

2.1.3 Eye Gaze-supported Zooming

Based on early research by Zhai et al. [68], users look at a target before they initiate the action to acquire it. In the case of mouse-based zoom interactions, a user first looks at the target to be zoomed into, moves the cursor to indicate the position of interest, then confirms the zoom action with a mouse click or a button. Integrating eye gaze tracking with zoom interactions eliminates the step of moving a cursor towards the target: the user simply has to look at the position of interest and then confirm with another input modality to zoom.

Using eye gaze input to aid in zooming has been investigated in prior work. As an example, Mollenbach et al. [46] perform a comparison of pan and zoom between two modes of input: mouse-activated and gaze-activated. They also study the type of navigation technique that is best suited for gaze-based interaction. The first technique is a search task where the user has to locate a small target within a large collection of shapes, and the second is a localization task where the user is required to zoom into a predefined sequence of clear targets. In the results, the authors argue that gaze as a navigational input can be very effective if used for the right type of task. Their experiment yields a 16% improvement in task performance on the target selection task, which matches the type of zoom interaction performed on ultrasound images by sonographers. In ultrasound imaging, once the probe has been placed at the correct position, the visual target search task is not as intensive as the one presented in [46], as ultrasound targets take up larger portions of the screen; therefore, the visual task performed is more of a target selection task than a target search task. Similar to other work [2], Mollenbach et al. [46] also use "edge-scrolling", with the speed of the scroll movement proportional to the vector between the centre of the image and the point of gaze.

Stellmach et al. [56] present a focus-plus-context interaction based on gaze-supported zoomable interfaces, investigating the interaction in combination with keyboard input and with a touch-and-tilt-sensitive hand-held device.
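Two of the building blocks described above, zooming about the point of gaze once a manual confirmation arrives and edge-scrolling at a speed proportional to the offset between the gaze and the image centre, can be sketched as follows. This is a generic illustration under assumed coordinate conventions (the viewport and gaze point are both expressed in image coordinates), not the implementation evaluated later in this thesis.

    import math

    def zoom_about_gaze(viewport, gaze, factor):
        """Zoom the viewport (cx, cy, w, h) by `factor`, keeping the gazed point fixed.

        `gaze` is the point of gaze mapped into image coordinates; factor > 1 zooms in.
        """
        cx, cy, w, h = viewport
        gx, gy = gaze
        # Pull the viewport centre towards the gaze point so the gazed content
        # stays at the same on-screen position after the zoom step.
        new_cx = gx + (cx - gx) / factor
        new_cy = gy + (cy - gy) / factor
        return (new_cx, new_cy, w / factor, h / factor)

    def edge_scroll_velocity(gaze_px, screen_px, max_speed=400.0, dead_zone=0.6):
        """Pan velocity (px/s) proportional to the gaze offset from the screen centre."""
        dx = (gaze_px[0] - screen_px[0] / 2) / (screen_px[0] / 2)   # normalized, -1..1
        dy = (gaze_px[1] - screen_px[1] / 2) / (screen_px[1] / 2)
        r = math.hypot(dx, dy)
        if r < dead_zone:
            return (0.0, 0.0)              # gaze near the centre: no panning
        gain = min(1.0, (r - dead_zone) / (1.0 - dead_zone))
        return (max_speed * gain * dx / r, max_speed * gain * dy / r)

The central dead zone keeps panning from firing while the user simply inspects the middle of the image, a simple guard against the Midas-touch problem mentioned in Chapter 1.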
An advantage that multi-modal gaze-based interaction has over gaze-only interaction is the avoidance of both the Midas Touch problem and the delay caused by dwell-time activation. In their formative user study, they additionally find that multi-modal gaze-based interaction supports multi-tasking that is otherwise not achievable with gaze-only interaction, such as simultaneously zooming and panning.

Another study presented by Stellmach et al. [54] is an application of simple pan and zoom through multi-modal gaze-based interaction for Google Maps. Similar to the approach taken in [56], the authors compare different modalities of interaction integrated with eye tracking, including a mouse scroll wheel, the orientation of a hand-held device and touch gestures on a hand-held device. Typically, traditional mouse interaction without the integration of eye gaze yields the best results in terms of time, accuracy, spatial awareness, ease of learning and intuitiveness, due to the user's familiarity with mouse-based systems. However, multi-modal gaze-based pan and zoom integrated with mouse input follows the pure mouse interaction closely in terms of perceived speed and spatial awareness.

The work presented in [46], [56] and [54] differs from our work in that the types of images investigated are different in nature from ultrasound images. Nevertheless, it is promising to see that mouse interaction integrated with eye gaze for simple pan and zoom tasks performs better in terms of speed than the rest of the multi-modal systems tested.

Similar to [54], the work presented in [2] investigates the user's performance of zooming and panning in Google Maps through four modes: gaze and dwell, mouse only, mouse and gaze, and head movement and eye gaze. In our work, we adopt the DTZ (Dual-to-Zoom) approach, which combines eye gaze with mouse clicks. In [2], the authors use the user's gaze input to localize the area of interest and the right and left mouse clicks for zooming in and out. Moreover, they define pan regions, where the zoomed image pans if the user selects a pan region. The results of this paper suggest that using multi-modal gaze-based control with the mouse for zooming in and out is the best alternative after traditional mouse control in all aspects, including time and accuracy. Although stare-to-zoom performs similarly to dual-to-zoom in some respects, staring at images hinders the speed at which the scanning task is carried out and increases eye fatigue with prolonged use. As in some other work on gaze-supported interaction, the pure manual interaction outperforms the gaze-supported interaction in all quantitative measures.

One of the papers influential to our system design is the work presented by Kumar et al. in [42]. At the beginning of the paper, the authors acknowledge that the system they intend to design will not outperform the default manual interaction (in the case of their work, the regular mouse and keyboard), and they stress that it is not intended to "replace or beat the mouse". Their work is aimed at designing an efficient gaze-supported interaction for those users who opt not to use a mouse, depending on their abilities. In their results, users show a higher preference for the gaze-supported alternative in terms of speed, ease of use and overall preference.
However, in terms of accuracy, the mouse alternative is preferred. This work also finds that, with the recruitment of 20 participants, the eye gaze tracker works better for some participants than for others, depending on posture and calibration quality during the experiment. It also depends on individual participant behaviour; for instance, they report one subject who squinted and laughed a lot during the experiment, which hindered the quality of eye gaze tracking.

In earlier work that combined eye gaze with foot pedals and mouse input [39], results show that participants were able to beat the time on task of the traditional manual-based interaction using the multi-modal gaze-supported interaction. However, this difference was not statistically significant. Similar to [42], prior to collecting results, the authors expected the novel interaction technique not to outperform the conventional manual-based interaction, but to be at least comparable, with an improved user experience.

One of the studies showing that eye gaze interaction can actually outperform manual interaction is the work done by Fono and Vertegaal [17]. In their work, they claim that their gaze-supported system achieves an improvement of 72% over typical mouse interaction. However, the task space is quite different from the image optimization and analysis tasks we target for sonography, as they investigate switching and zooming between windows instead. The targets they investigate, windows, are well defined and make up a large portion of the screen, which makes them easier to select even with jittery input such as eye gaze. In our work, the target is only defined upon image acquisition by the sonographer and is not known a priori to the system. In addition, the size of the target, as discussed in later chapters of this thesis, can vary.

2.1.4 Eye Gaze-supported Panning and Scrolling

Another common application of gaze-supported interaction is automatic scrolling for on-screen reading, such as, but not limited to, the work presented in [53]. Although their results show no significant difference between automatic and manual scrolling, the approach is worth investigating further, as our application area is quite different in that we deal with images rather than text. In our work, we adopt similar approaches to pan images by detecting when the user looks at the edges of the image, instead of the edges of the text, and moving the image accordingly. This is similar to the gaze-supported panning approach used by Adams et al. [2] and Mollenbach et al. [46].

2.2 Ultrasound Machines: Applications and Target Users

Ultrasound imaging is used in a large set of clinical applications, and with it comes a diverse set of users in diverse settings. Taking these factors into consideration, manufacturers of ultrasound machines have created designs with a variety of layouts and capabilities to best suit the different types of ultrasound operators and applications. A common application of ultrasound imaging is routine ultrasound exams in ultrasound rooms. These exams are concerned with producing high-quality images of specific targets and performing related measurements. Examples span areas such as abdominal, cardiac, obstetric, gynaecologic, vascular, musculoskeletal and other general ultrasound exams. Another application area is point-of-care, where urgent ultrasound exams are required in emergency cases for diagnosis.
Ultrasound imaging is also important to aid physicians who perform interventional procedures in guiding their primary tasks, such as needle insertion. Similarly, surgeons use ultrasound imaging to help guide them during their operations and show the underlying anatomical structures. The user-machine interaction in each of these application areas differs from the others, which drives the design of a variety of ultrasound machine types including, but not limited to, platform-based machines and portable machines, as shown in Figure 2.1.

Figure 2.1: A variety of ultrasound machine interface designs are available for a variety of target users and applications. (a) GE Logic E9: a platform-based ultrasound machine typically used in routine ultrasound exams (image source: www.kpiultrasound.com); (b) Sonosite X-Porte: an example of an ultrasound machine with a completely touch-based input interface (image source: www.sonosite.com); (c) Clarius: a hand-held ultrasound machine device (image source: www.clarius.me); (d) GE NextGen LOGIQ e: a tablet ultrasound machine suitable for point-of-care applications (image source: www3.gehealthcare.com).

2.2.1 Machines Targeted for Routine Ultrasound Exams

The ultrasound machine operators in this application area are typically sonographers: specialized medical professionals trained with extensive knowledge of ultrasound image acquisition and other related ultrasound functions. Routine ultrasound exams are performed in ultrasound rooms that are prepared with all the necessary equipment and a suitable environment for optimal image acquisition.

During these exams, the sonographer constantly switches between three contexts of attention: the ultrasound image, the machine controls and the patient, as shown in Figure 1.1. However, the sonographer's main focus is on the production of the acquired ultrasound images and the various accurate measurements performed on specific anatomical targets. Additionally, image acquisition in this area requires the sonographer to manage bimanual input to the machine: ideally, the dominant hand is in charge of the main task, namely acquiring the image, and the non-dominant hand manages the properties of, and any operations (such as measurements) performed on, the image.

Given these interaction factors and the background of the intended set of users, machines in this application area support certain layouts, ergonomics, ultrasound functions and imaging capabilities. Typically, they are platform-based machines with a layout providing a large set of manual inputs to give the user's non-dominant hand rapid access to the machine's various functions. Given the variety of exams that can be performed with the machine in this application area, ultrasound machines designed for sonographers possess high processing power, frame rates, image quality and large monitors to aid in producing the best possible image with substantial detail. Machines in this area typically support multiple types of transducers and are equipped with advanced application-specific technologies, such as, but not limited to, special measurement packages and 4D visualization tools. Some machines also support customizable software and automatic image optimization functions to increase productivity.
Platform-based ultrasound machines are also designed to be as ergonomic as possible, since they are the primary tool sonographers use for prolonged hours during their workday.

2.2.2 Machines Targeted for Point-of-care

Unlike the first application area, operators of ultrasound machines in point-of-care are not necessarily sonographers and can have varying clinical backgrounds. Point-of-care ultrasound imaging takes place outside the ultrasound room, where scanning and diagnosis are done, for example, in an ICU or an emergency vehicle. While the image acquisition and optimization tasks may share some similarities with routine exams, the required mobility of the ultrasound machine impacts the interface layout and the machine capabilities offered.

Ultrasound machines aimed at this application area are designed with varying levels of compactness, durability and portability. A large variety of options is available, including portable machines, tablet machines and even newer designs such as hand-held pocket-sized machines. However, most of these machines are limited in terms of imaging options, capabilities and ergonomic layout because providing a portable system takes higher priority. For instance, systems such as the SonoSite X-Porte and the GE Venue 40, as shown in Figure 2.1b, are based entirely on touch-screen input with a few soft keys, which does not provide quick access to ultrasound image optimization and patient data control. Nevertheless, these types of machines are considered excellent options for point-of-care due to their light weight, long battery life and layout design, which provides access to only the most frequently used imaging options, making them suitable for users with a limited background in sonography, such as physicians.

2.2.3 Machines Targeted to Aid Other Clinical Tasks

In contrast to the two aforementioned applications, physicians use ultrasound machines for a different goal. Ultrasound images used in interventional needle insertion or in intraoperative guidance are aimed at providing the physician with a visualization of the underlying anatomical structure. In other words, the physician's main attention is directed elsewhere and the ultrasound image serves as a tool to help them perform their primary task. In such applications, the setup and the interaction are quite different from those of platform-based ultrasound machines used in routine scans. The user's main focus is on the primary task (inserting a needle or performing a surgical procedure) and the ultrasound machine is only used to acquire an image to aid the guidance of that task. For interventional procedures, the dominant hand handles the primary task of inserting the needle and the non-dominant hand holds the probe to provide the visual context for the primary task. For surgeries, the ultrasound probe is sometimes handled by a second user altogether, as the surgeon's hands are occupied with surgical tools while the surgeon maintains parallel attention on the ultrasound image and the patient.

Similar to point-of-care, machines suitable for this area of application are usually portable and do not offer a wide range of options, given the machine operator's limited background and limited need for ultrasound image optimization. Provided that image optimization is not the primary concern of users in such applications, the lack of physical controls and reduced access to image
optimization settings is not considered a limitation but an advantage, as the main focus of the interface is on the image, which is used to aid other types of tasks such as guiding needle insertion or performing a quick investigation in the ICU.

The Target Ultrasound Application Area of This Research

In this research, we target the interface design of the class of ultrasound machines used for routine diagnostic ultrasound exams, such as the type of machine shown in Figure 2.1a. This is because the primary users of these machines, sonographers, interact with the machine's various functions more frequently and for more prolonged periods of time than the users of other types of machines. In this application area, the acquisition of ultrasound images is the main goal of the users, making the use of the machine their primary task. Specifically, we target the machine's image magnification-related functions of High-resolution zoom and Low-resolution zoom.

2.3 Ultrasound Machine Interface Design Analysis

Before introducing the new gaze-supported approach, we must first understand the existing design of magnification-related functions in ultrasound machines in terms of input device interaction and software interaction. This will help create an informed gaze-supported interaction design for the pan and zoom functions in ultrasound machines.

2.3.1 Input Device Interaction

In this section, we discuss the theory behind the design of input methods related to ultrasound machine controls. Specifically, we target classical HCI theories that define the capabilities of input devices and map them to the types of tasks performed by users of these devices and to the target functions we are investigating. We apply these theories to current practices in ultrasound machine control layout design and discuss the advantages and drawbacks of the different approaches taken in the various premium and high-end models of ultrasound machines used in routine diagnostic sonography.

Buxton's work on the three-state model of graphical input [11] maps the demands of interactive transactions to the capabilities of input transducers. He argues that all input transducers exhibit at least one of the three defined states. Therefore, by identifying the tasks to be performed with an interface, a designer is able to select the input transducers that will optimally serve those tasks.

The first state is "state 0", described as "out of range" or OOR. In this state, the system is not receiving information from the input device. An example is a 2D stylus: when it is lifted off a tablet, the system is unable to tell the position of the stylus while it is not in contact with the surface. The second state is "state 1", whose description differs based on the input device; a consistent abstract description of state 1 across all devices is a trackable continuous signal. For example, a mouse or a joystick is always in state 1 unless another action takes place, such as a button press. State 2 occurs when an additional simultaneous action that supplements state 1 takes place. As mentioned earlier, pressing a button while moving a mouse (such as the mouse's left button) transitions the mouse from state 1 (tracking) to state 2 (dragging) for manipulating a desktop graphical user interface.
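To make these transitions concrete, the three-state model can be written down as a small state machine. The sketch below is ours and purely illustrative; Buxton presents the model as a conceptual tool rather than code, and the class, state and method names here are assumptions chosen for exposition.

from enum import Enum

class InputState(Enum):
    OUT_OF_RANGE = 0   # state 0: the device is not being sensed (e.g. stylus lifted)
    TRACKING = 1       # state 1: a trackable continuous signal
    DRAGGING = 2       # state 2: tracking plus a supplementary action (e.g. button held)

class ThreeStateDevice:
    """Toy model of Buxton's three-state input model (illustrative only)."""

    def __init__(self, initial: InputState = InputState.TRACKING):
        self.state = initial

    def button_down(self) -> None:
        # e.g. pressing the mouse button while moving: state 1 -> state 2
        if self.state is InputState.TRACKING:
            self.state = InputState.DRAGGING

    def button_up(self) -> None:
        # releasing the button: state 2 -> state 1
        if self.state is InputState.DRAGGING:
            self.state = InputState.TRACKING

    def enter_range(self) -> None:
        # e.g. a stylus touching the tablet surface: state 0 -> state 1
        if self.state is InputState.OUT_OF_RANGE:
            self.state = InputState.TRACKING

    def leave_range(self) -> None:
        # e.g. lifting the stylus off the tablet: state 1 -> state 0
        if self.state is InputState.TRACKING:
            self.state = InputState.OUT_OF_RANGE

This is the same style of description used for the state-based analysis of the zoom functions in Chapter 4, where it helps show which transitions a gaze input could take over from the manual controls.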
Another example of state 2 is a pressure threshold for a pressure-sensitive stylus that transitions its state from state 1 (tracking) to state 2 (dragging, or inking, if the stylus is used for a drawing application, for instance).

Although simple in theory, the three-state model can support the representation of more complicated systems. First, multiple states of the same type can be present in an input device. Buxton gives the example of a mouse with two buttons, which could serve two different purposes for a particular application; such a mouse therefore has three states, of which two are of the same class (state 2). He also highlights the difference between continuous and binary transactions and how the three-state model represents them. For instance, pointing is a continuous task and clicking is a binary task. In a three-state model diagram, both are represented similarly; the difference is in the implementation, as clicking tasks are treated as state 1-2-1 transactions without motion within state 2. In other words, as long as the signal value sent by the input device to the system stays constant while in state 2 (the device is not moved), the transaction is classified as a clicking task.

Lexical and Pragmatic Characteristics

The model presented earlier helps in mapping tasks to the capabilities of input devices. These capabilities, however, can be shared across a class of different input devices. For instance, looking at the state characteristics table in Buxton's work [10], a trackball, a mouse and a joystick are all capable of supporting states 1 and 2. Similarly, tablets, touch tablets, touch screens and light pens are capable of supporting all three states. Therefore, further interaction theory should be investigated related to the human motor/rotary system and its impact on the choice of input devices made by a layout designer. An earlier work presented by Buxton [11] investigates the lexical and pragmatic considerations of input devices and targets the classification of continuous hand-controlled devices in terms of the property sensed, the number of dimensions and the muscle group involved. Buxton's work was later extended by Kobayashi et al. [40], who created a new classification taxonomy that includes auditory and visual devices in addition to hand-controlled devices.

The most important output of Buxton's work on the lexical and pragmatic characteristics of input devices is the tableau of continuous input devices [10], which we use to further narrow down the mapping of input devices to the types of transactions required by the image magnification-related functions in ultrasound machines. The rows in the tableau classify the input devices based on the property sensed: position, motion or pressure. The columns, on the other hand, classify the devices based on the number of dimensions they control. Furthermore, the rows are divided into mechanical versus touch-based control for each type of property sensed.
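As a rough illustration of how the tableau organizes devices, each entry can be captured as a small record of these three characteristics. The entries and helper below are our own illustrative examples and should be checked against Buxton's tableau [10]; they are not reproduced from it.

from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceClass:
    property_sensed: str  # "position", "motion" or "pressure" (tableau rows)
    dimensions: int       # number of dimensions controlled (tableau columns)
    touch_based: bool     # touch-operated vs. operated via a mechanical intermediary

# Illustrative entries only; placement should be verified against [10].
EXAMPLE_TABLEAU = {
    "tablet":       DeviceClass("position", 2, touch_based=False),
    "touch screen": DeviceClass("position", 2, touch_based=True),
    "mouse":        DeviceClass("motion",   2, touch_based=False),
    "trackball":    DeviceClass("motion",   2, touch_based=False),
}

def equivalents(name: str) -> list:
    """Find devices in the same cell of the tableau, i.e. rough substitutes."""
    target = EXAMPLE_TABLEAU[name]
    return [n for n, c in EXAMPLE_TABLEAU.items() if c == target and n != name]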
In addition to defining the types of input devices suitable for a particular transaction based on the three characteristics discussed, the tableau is also helpful for finding equivalences and relations between devices.

Input Devices for Bimanual Interaction

Provided that ultrasound machines are controlled bimanually, the use of the left hand to interact with the physical layout of the ultrasound machine also requires an investigation of the types of input devices optimized for use by the non-dominant hand. We found the series of studies performed by MacKenzie et al. [43] and Kabbash et al. [35] to be of use for this particular research, as will be detailed in later sections.

2.3.2 Image Browser Interaction

The work presented by Plaisant et al. [47] specifies the presentation and operation aspects of image browser interface design. They classify the tasks of an image browser as follows: image generation, open-ended exploration, diagnostic tasks, navigation, or monitoring. Sometimes a combination of two or more of these tasks may be needed, in which case the image browser must be carefully designed to support the appropriate presentation and operation aspects of all the tasks involved.

Plaisant et al. [47] also describe the presentation aspects of an image browser. The presentation of an image browser has static and dynamic aspects, where the former is concerned with the layout of the views presented to the user and the latter describes the update methods of the presented layout(s). A number of different operations can be performed with an image browser, such as inspecting details, moving between important pre-defined areas within an image, navigation, and more. However, the authors place a higher emphasis on the pan and zoom functions, since they are basic manual operations required by most image browsers. Additionally, they categorize a number of operations that they recommend designers automate in their image browser interfaces, to make it easier for users to concentrate on their main tasks.

2.3.3 Magnification-related Functions

Low-resolution Zoom

There are two basic variations of zoom functions in ultrasound machines. The first is the Low-resolution zoom: it is quickly accessed by the user, magnifies the entire image from the default centre position in discrete steps within the detail view through an input device, and de-magnifies (zooms out) in the same manner through the reverse action of the same input device. The input device is usually a continuous rotary knob, as in all the premium and high-end machines we researched and observed, and sometimes input buttons, as in lower-end and some tablet and portable machines. Low-resolution zoom does not perform any image optimization on the magnified image and is used for quick investigation of a particular area of the image.

High-resolution Zoom

The second zoom function variation is the High-resolution zoom. As shown in Figure 4.5, High-resolution zoom provides the sonographer with a higher level of control over the magnified image. Moreover, the scan frame rate (also known as the ultrasound temporal resolution) and line number are typically optimized automatically as the imaging sector is narrowed by the machine when magnifying the image, which makes it a preferred zoom alternative when a sonographer is likely to zoom into a part of the image that is rapidly moving and requires image acquisition with a higher temporal resolution.
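Returning briefly to the Low-resolution variant, its behaviour can be sketched as a stepped magnification driven by the rotary knob. This is an illustrative sketch only; the step values, the clamping limits and the names are our assumptions rather than values taken from any particular machine.

# Illustrative sketch of Low-resolution ("one-step") zoom: a rotary knob
# steps the magnification up, and the reverse rotation steps it back down.
# The factor is applied as post-processing, centred on the image by default.
ZOOM_FACTORS = [1.0, 1.2, 1.5, 2.0, 2.5, 3.0]  # assumed discrete steps

class OneStepZoom:
    def __init__(self):
        self.step = 0  # default (unzoomed) magnification

    def on_knob(self, detents: int) -> float:
        """detents > 0 zooms in; detents < 0 is the reverse action (zoom out)."""
        self.step = max(0, min(len(ZOOM_FACTORS) - 1, self.step + detents))
        return ZOOM_FACTORS[self.step]

In the gaze-supported One-step Zoom introduced in Chapter 4, gaze input is integrated implicitly, so part of this manual localization is delegated to where the user is already looking.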
Provided the inherent change in image acquisition as theprobe limits its field of view to capture the magnified area only, panningwithin the zoomed image is not technically possible as a post-processingfunction. An ultrasound probe must have the capability of actively movingits lateral field of view to support panning of the magnified area.192.3. Ultrasound Machine Interface Design AnalysisIn High-resolution zoom, once the pre-zoom mode is activated, the bor-der of the field of view to be magnified (or the “zoom box”) is displayedover the image (either in the overview or the detail view, or both) with adefault size and position, which is initially placed at the centre of the im-age. No magnification takes effect until the zoom action is confirmed withthe selected dimensions and position of the field of view. The zoom factorin this case is implicit, as the dimensions of the field of view determine it.Similar to Low-resolution zoom, magnification is restricted by restricting amaximum and a minimum width and height dimensions of the adjustablefield of view set by the sonographer. This particular zoom approach hasno reverse zoom action. Therefore, zooming out simply resembles restoringback the image to its default magnification outside the selected field of view,which is referred by Plaisant et al. [47] as implicit zoom out or “undoing”.Hybrid PanZoom Newer machines support a hybrid of high resolutionand Low-resolution zoom. As discussed earlier, some machines name itthe PanZoom option. This alternative serves as a more controlled Low-resolution zoom function. Similar to High-resolution zoom, the positionand size of field of view to be magnified is controlled manually. However,since it is essentially a post-processing function on the acquired image, asonographer is able to pan the magnified image within the rest of the overallfield of view. This option is used when the temporal resolution of the imageis not a main concern for the sonographer, but control over the dimensionsof the acquired magnified image is still of importance, as shown in Figure4.5.In both low resolution and hybrid PanZoom approaches, panning is per-formed by first changing the image mode to panning mode, then movingthe trackball in the desired pan direction. Since there is no equivalent for itin Plaisant et al.’s [47] taxonomy of pan operation, we added pan by “cur-sor movement” to the taxonomy as the trackball functions as a cursor inultrasound machine interfaces.Transducer-based Panning One notable interaction we observed withsonographers is sometimes their tendency to manually pan the area of inter-est by moving the transducer over the patient instead of using the software-based pan function, even if the target is completely visible within the rangeof the image acquired by the transducer. Although software-controlled pan-ning is much more stable (as the sonographer might lose the target of interestby slightly moving the probe more than needed), we suspect that panning202.4. Conclusionwith the probe is preferred in some cases as it does not require any contextswitch to perform the software-based pan. By context switch, we mean theextra interaction with the machine controls to select the panning mode andmoving the trackball in the desired direction. 
It is an additional action that,sometimes, could be avoided if the transducer is carefully moved over thepatient.Depth Upon observations of several ultrasound exams and informal dis-cussions with sonographers, as we describe in Chapter 3, we find that thezoom and depth functions are used interchangeably to bring a particulararea of interest into the central view of an acquired image. In terms ofoperation, both functions are identical (compared to the operation of theLow-resolution zoom only). However, sonographers prefer to use the depthfunction to the zoom function whenever possible, especially if the target islocated close to the surface. The maximum allowed depth by the transduceris dependent on the frequency it is set at. The higher the frequency, thelower the penetrable depth and the higher the lateral resolution. More im-portantly, similar to High-resolution zoom, decreasing the depth improvesthe temporal resolution as well. In this case, the temporal resolution isimproved as it decreases the pulse repetition period. Therefore, unless thetarget is located at a high depth, sonographers typically prefer to controlthe depth of the image to bring a particular target into central attentionas it acquires images with better lateral and temporal resolutions instead ofusing the available zoom functions.2.4 ConclusionWe present a brief background on eye gaze tracking technology and thecommon applications that use the gaze input for direct control. Previousresearch in the field of gaze-supported zoom control has been already ex-plored with different results based on the types of images zoomed and themodality of the interaction. Two studies which show that gaze-supportedzoom is faster than manual-based zoom is that presented by [46] and [17].However, the types of images and tasks presented in these studies differ fromthe target application of zooming into ultrasound images within the contextof sonography.Later, we classify ultrasound machines based on the application area andthe target set of users. In this research, we target the zoom interfaces of ul-trasound machines used by sonographers for routine diagnostic sonography.212.4. ConclusionFinally, we present classical HCI theory on input devices based on Bux-ton’s [11] three-state model and devices’ lexical and pragmatic character-istics [10] and bimanual interaction [43] [34]. In addition, we provide anoverview of image browser design principles based on Plaisant et al.’s work[47]. We refer to these theories in Chapter 4 to explain the input device andimage browser design of the targeted ultrasound machine’s layouts and helpcreate an informed design of gaze-supported zoom interaction.22Chapter 3Field Study AndObservationsIn this chapter, we present the methodology and results of the conductedfield study. We perform observations, conduct interviews and distributea survey to understand the environment of our target users: sonographers.We use these results to help us build an informed gaze-supported interactionwith the ultrasound machine as we analyze the potential benefits and risksassociated with eye gaze input integration.3.1 MethodologyUser-centred design is “a broad term to describe design processes in whichend-users influence how a design takes shape” [1]. In the field of medicaldevice development, following a user-centred design approach is essentialto help support the end user’s needs better, since the target user groupis small, with unique experiences and requirements. 
User-centred designinvolves iterative cycles of the following: defining user objectives, collectingrequirements, evaluating design alternatives and testing the proposed systemwith end users.Studies on ultrasound machine interface design followed a user-centreddesign approach for over 20 years. The work such as [5] introduces a numberof design alternatives and changes to the ultrasound machine interface basedon a user-centred design approach. In addition, work such as presented byMartin et al. [45], recommends a list of user-centred research methods toensure ergonomically-designed medical devices. In our work, we follow someof the methods recommended, such as contextual inquiry, in conjunctionwith a series of usability tests and heuristics in the early stages with expertsonographers.In this chapter, we present results from interviews with sonographers, ul-trasound scans observations, and a survey distributed to the members of theBritish Columbia Ultrasonographers’ Society. We identify some of the chal-233.1. Methodologylenges that sonographers face in sonography and conclude with discussingthe potential risks and benefits of deploying a gaze-based interaction tech-nique based on observed user behaviour, ultrasound machine capabilitiesand the clinical environment setting.3.1.1 ObservationsThe ultrasound machine, functions, and exam duration all differ based onthe specific ultrasound scan being performed and other factors related to thepatient’s physiology and the sonographer’s experience. To get a practicalview of these factors and the feasibility of integrating an eye tracking systemwith the machines given such diversity, various types ultrasound exams wereobserved, including general, obstetric, breast, renal, vascular, abdominal andechocardiography exams at two different hospitals for two full days summingup to a total of 18 ultrasound scans. In addition, we observed an ultrasound-guided breast biopsy procedure.3.1.2 SurveyAn online survey was conducted to get an in-depth understanding of theobserved challenges with sonography in practice. The aim of the survey is tounderstand the ultrasound machine interaction from a user’s perspective tohelp in directing the design and requirements of the new eye gaze-supportedultrasound machine interface. The questions asked to survey respondentsrelate to an ultrasound machine user’s daily interaction with the machine,musculoskeletal disorders due to work injuries, and emerging ultrasoundmachine technologies. A total of 66 responses were received, of which 48 arecompleted responses. All survey questions can be found in Appendix B.The survey is designed for the distribution to both sonographers andradiologists with experience in ultrasound, including radiologists who per-form ultrasound-guided interventional procedures, to understand the differ-ences in human-machine interaction between the two user groups and theinterface requirements based on the interaction. However, due to difficultiesdistributing the survey among radiologists, only sonographers’ responses areconsidered in this analysis. Responses have been actively incoming since thesurvey was distributed on Feb. 17, 2016 until Apr. 24, 2016, as shown inthe time line in Figure 3.1.The majority (85%) of the complete responses are from female sonogra-phers. The years of experience in sonography of the respondents also variedgreatly, with 20-30 years of experience forming the great majority (33%),243.1. 
MethodologyFigure 3.1: Survey Statistics and Number of Responses Over Timefollowed by greater than 30 years (23%), 11-20 years (13%), 2-5 years (12%),less than 2 years (10%) and 6 - 10 years (13%), as shown in Figure 3.2.In terms of current occupation, the great majority of the responses re-ceived are from expert sonographers, representing 92% of the survey respon-dents. The rest are 2 student sonographers, 1 instructor and 1 applicationsspecialist. Based on the results, 77% of the respondents have over 5 yearsof experience in sonography.In terms of experience in types of ultrasound exams, the respondentswho have an experience in performing general and obstetric/gynaecologicexams form roughly 84% of the respondents, as shown in Figure 3.3. Otherultrasound exams include breast sonography (3 responses), neonatal (2 re-sponses), neuro (1 response), ocular (1 response) and cranial (1 response).Most of the respondents work for 31 to 40 hours per week distributed asan average of 8 hours a day for 5 days a week, as shown in Figure 3.4.3.1.3 InterviewsWhile carrying out the observations and collecting survey responses, struc-tured interviews and informal discussions with two sonographers were beingcontinuously conducted for further clarifications on the observations andsurvey results and later to assist in the user study design by bringing in aprofessional’s perspective. The first sonographer mainly performs obstet-ric/gynaecologic (OB/GYN) ultrasound scans and the second sonographerperforms general ultrasound scans. Both are expert sonographers with over253.1. MethodologyFigure 3.2: Survey Respondents’ Years of Experience in SonographyFigure 3.3: Types of Ultrasound Scans Survey Respondents Perform263.2. ResultsFigure 3.4: Survey Respondents’ Number of Hours of Work per Week30 years of experience in sonography. In addition, we held follow-up dis-cussions with sonographers from the hospitals where the observations tookplace and with the six recruited sonographers from our user study duringthe first design iteration.3.2 Results3.2.1 Diagnostic Ultrasound Scan RoutineWe observed the steps taken by the sonographer during a typical OB/GYNexam through a full-day observation at BC Women’s Hospital. Before theexam, the sonographer loads the patient’s data, checks the patient’s report,and helps the patient lay in the patient’s bed. Typically, the sonographeralso makes sure the patient can see the ultrasound scan in the secondarymonitor.During the exam, the sonographer typically starts by adjusting TGC,depth and focus levels values after locating the target to be scanned by theprobe. The sonographer proceeds to take the required measurements andimages are taken during the exam, either to be reported to the physician viaPACS or to be printed out for the patient (such as an ultrasound image ofthe infant on thermal paper). In either cases, the following steps are takenfor image capture: (1) freeze image, (2) annotate image, (3) print/capture273.2. Resultsimage, (4) unfreeze. We also observe that the sonographer typically restsher left hand on the freeze button, as it is being frequently used beforeperforming image captures or measurements.After the exam, the sonographer updates the ultrasound image and datareport to PACS. In some hospitals, this is automatically performed throughthe DICOM communication protocol. Depending on how critical the caseof the patient is, the sonographer discusses the results and gives feedbackto the patient. 
If the case is too critical, it is best discussed between thepatient and the physician only. The report is then delivered and discussedby the sonographer with the corresponding physician.3.2.2 Contexts of AttentionAs shown in Figure 1.1, a sonographer’s attention is distributed across threemain visual contexts. The ultrasound image forms the region of central at-tention, as the goal of a sonographer is to analyze and produce high qualityimages. Next, the sonographer must concentrate on transducer manipula-tion, as images must be obtained with the best transducer location. Finally,the sonographers must also use ultrasound machine controls which includevarious buttons, knobs, sliders, switches, a trackball, and most of the timesa touch screen for further menu navigation. In addition, effective communi-cation with the patient is also required, which also accounts for some of thesonographer’s attention.We observed that in obstetric ultrasound scans, it is important for theultrasound screen to be placed at a location where both the sonographerand the patient are able to see the acquired images. There is typically asecond screen placed at a viewing distance from the patient bed, as shownin Figure 3.5.We noticed that, even the most experienced sonographers, glance repet-itively back and forth between the monitor and the ultrasound machine’sphysical controls. One of the interviewed sonographers reported that thelarge amount of options sometimes causes unwanted distraction which drawsattention away from the ultrasound image. She mentioned a common sce-nario in obstetrics is repetitively losing the chance to capture the “perfectimage” as the fetus rapidly moves while the sonographer is still trying tolocate some option on the controls panel.Based on these observed and discussed challenges with the sonographer,we design relevant questions in the survey, shown in table 3.1, to under-stand the relevance of these challenges to other sonographers with differentexperiences.283.2. ResultsFigure 3.5: An Ultrasound Room’s Layout: A second screen is placed at aviewing distance from the patient for OB scans.Table 3.1: Survey question 3.1.6 “Provide your rating of the following.Where 1 = Highly Disagree and 7 = Highly Agree.”# Statement1 Scanning anatomical structures that are in constant motion can bea time-sensitive task that requires an efficient and responsive userinterface (such as freeze or print).2 Ultrasound foot switches can be helpful in repetitive tasks (such asfreeze or print).3 Sometimes I have to go through a lot of steps (through interfacemenus) to select a particular setting.4 I switch my attention between the monitor and the ultrasoundinterface buttons very often and it gets distracting sometimes.5 I switch my attention between the monitor and the ultrasoundinterface buttons very often and it makes me lose focus ofimportant image details sometimes.293.2. 
ResultsThe results, as illustrated in Figure 3.6, show that survey respondentsagreed the most with the first statement regarding the need for an efficientinterface to capture ultrasound images of anatomical structures in motion.Some respondents disagreed that foot switches are helpful for repetitivetasks, but a larger number agreed, which suggests that there is some poten-tial for using foot switches with ultrasound machines, but it is still unclearwhether users will welcome this change if it is not necessary.The third, fourth, and fifth statements, relevant to attention and contextswitch, show almost an equal amount of agreements and disagreements.This result suggests that there exists a reason influencing the opinion of therespondents, which could be based on the type of ultrasound exams theyperform, their experience and their general approach in organizing theirwork flow.Figure 3.6: Levels of Agreement on the Survey Statements Listed in Table3.13.2.3 Machine Functions and FeaturesFrequently-used FunctionsWhen asked about their use of ultrasound functions, more than half of therespondents (54%) indicated that they use about 30% to 60% of the ultra-sound machine image settings and machine functions out of all the settingsand functions they are familiar with, followed by 27% of the respondentswho indicated their use of only about 10% to 30%.We further surveyed the image settings and machine functions used in303.2. ResultsFigure 3.7: Answers to the Survey Question “Which buttons, functions orfeatures do you use most frequently? at least once per scan in >90% of allscans”Figure 3.8: Answers to the Survey Question “Which buttons, functions orfeatures do you use most frequently? at least once per scan in 40% - 90% ofall scans”313.2. ResultsFigure 3.9: Answers to the Survey Question “Which buttons, functions orfeatures do you use most frequently? at least once per scan in <40% of allscans”more than 90% of ultrasound scans. The most common responses are shownin Figure 3.7.Similarly, we surveyed the ultrasound image settings and machine func-tions used in some of the ultrasound scans, estimated as between 40% to 90%of the scans performed by sonographers. Figure 3.8 shows the results col-lected. Figure 3.9 shows the ultrasound image settings and machine featuresthat are rarely used during ultrasound scans.Adapting to Machine ChangesWhen sonographers were asked in the survey about the time it takes themto adapt to new changes in an ultrasound machine interface, 48% of therespondents reported that it generally takes them less than one working dayto find their way around the machine, followed by 37.5% of them believingthat it takes around a week or so. The sonographers’ perception of theirability to learn a new interface is a positive indicator to the possibilityof introducing slight changes to the ultrasound machine interaction, if itsbenefits significantly outweigh its drawbacks.About 10% of the respondents reported other reasons that the learn-ability of a new interface depends on. For instance, one of the respondentsmentioned that it depends on how big the changes are: if the changes areconsistent with the general interface of the machine sonographers are usedto, then it will not take them long to learn the new additions. Anotherrespondent pointed out that getting used to a new interface depends howoften the sonographer should use the newly-added feature. Another respon-323.2. 
Resultsdent mentioned that getting used to a new interface depends on the level ofsupport the sonographer gets.Evaluation of Machine FeaturesWhen asked in the survey about the ultrasound machine hardware and soft-ware features that sonographers find helpful, survey respondents mentioned22 different features.Touch Screens Although opinions differed regarding touch screens, themajority found distributing the functions between hard and soft keys anadvantage, as shown in Table 3.2. One of the respondents mentioned thatshe prefers soft keys to hard keys in some cases:“I love touchscreen interfaces. I prefer soft keys to hard keys when theycan assist with work flow (i.e. only when contextually necessary)”On the other hand, another respondent does not prefer this combinationas it increases the confusion.“Machines that have combination of soft keys (touch panel) + hard keys(rotating knobs) + toggles + keys that you press (such as freeze, record, etc.)are the worst. They have too many types of keys, which makes the operationinefficient. They should keep the types of keys to a maximum of two.”Follow-up discussions during the observations at the hospital with sono-graphers reported that performing menu navigation on the touch screen canget distracting as the sonographer has to look at the menu to perform thesettings changes. Physical buttons are always preferred to touch screen but-tons, as the sonographer doesn’t have to look down at the panel and losetrack of where she was looking at the image displayed on the monitor.Co-located Keys Having all the related controls co-located around theleft hand resting area is also reported as very helpful by sonographers, asshown in the ultrasound machine interface in Figure 3.10. We observed thatthe left hand of the sonographer typically rests around the trackball and thecapture and freeze buttons.Sliding Keyboards On the other hand, when sonographers were surveyedabout the ultrasound machine features which they think requires more at-tention to and improvement, a number of sonographers (N = 7) found thesliding keyboard to be inefficient and difficult to use. The rest of the resultscan be found in Table 3.3.333.2. ResultsFigure 3.10: A Phillips Ultrasound Machine InterfaceTable 3.2: The Most Common Efficient Ultrasound Machine Features asListed by Survey RespondentsEfficient Ultrasound Machine Feature # RespondentsTouch screen functions (automatic annotations,programmable layout, etc.)11GE Scan assistant and similar automation protocols 10Frequently-used buttons are co-located near thehand resting position8Keyboard on the same platform as the rest of thebuttons8Sliding keyboards 4Patient data loaded automatically 4Adjustable screen position 3Pre-programmed annotations 2343.2. 
ResultsTable 3.3: The Most Common Inefficient Ultrasound Machine Features asListed by Survey RespondentsInefficient Ultrasound Machine Feature # RespondentsNon-intuitive arrangement of interface panel 13Sliding keyboards 7Unadjustable (fixed) monitors 6Touch panels with multiple layers 4Hard-to-reach keyboards 3A lot of steps for a task 3Sticky trackball / keys 3Heavy probes, cables and inaccessible ports 3Hard-to-move machines 2No leg room under the ultrasound machine 2Non-intuitive and hard to reach touch screens 5Scan Automation Another ultrasound machine feature that is foundvery helpful by sonographers is automation software, such as the GE ScanAssistant, which we dedicate a sub-section to evaluate later in the survey aswe found it being used often during the ultrasound scans we observed.Scan Assistant and Other Scan Automation SoftwareOut of the 48 respondents, 20 of them have experienced using Scan Assistantin GE machines (or similar software in other machines). “Scan Assistant”is a software created for GE Ultrasound machines that provides “customiz-able automation at each step of an ultrasound exam for a fast, comfortable,consistent scanning experience, which could reduce injury-causing repetitiveactions” [29]. Lowering the amount of repetitive interactions with the ultra-sound machine and the amount of time it takes to finish an ultrasound examare equally leading benefits of such a semi-automated system. Other bene-fits follow, as shown in Table 3.4, such as helping the sonographer not forgetthe steps required for a particular exam and focus more on the patient thanon the machine interaction. Respondents also reported other reasons forusing Scan Assistant, such as preventing the sonographer from incorrectlylabelling the ultrasound images.Although found very helpful, some sonographer survey respondents also353.2. Resultsreported facing challenges with it, as detailed in Table 3.5.Table 3.4: Surveyed Benefits of Semi-automated Systems in Ultrasound Ma-chines (e.g. Scan Assistant)Semi-automated Ultrasound Machine SystemBenefit# RespondentsSignificantly shortens the duration of an ultrasoundexam18Contributes to lowering the risk of WRMSDs as itlowers the amount of repetitive movements withphysical ultrasound machine interface buttons18Helps me not forget any steps for a particularultrasound exam and organizes my work flow14Helps me focus more on the patient 11Other 2Table 3.5: Surveyed Drawbacks of Semi-automated Systems in UltrasoundMachines (e.g. Scan Assistant)Semi-automated Ultrasound Machine SystemDrawback# RespondentsMy routine is inconsistent with Scan Assistant 6Too much automation allows the mind of thesonographer to wander1Not optimal for students 1Slower than manual 1Can be counter-efficient with lack of training 1Given this variation in responses, it is important to highlight that suchsystems work better with some types of ultrasound scans compared to others,which could be the main reason behind the inconsistency of the automatedsystem’s routine with the sonographer’s routine in some ultrasound scans.When asked about the types of ultrasound scans which the sonographersuse Scan Assistant with, carotid ultrasound scans were mentioned by 15 outof 20 sonographers. The rest of the ultrasound scans can be found in Figure3.11.363.2. ResultsFigure 3.11: Types of Ultrasound Scans that Use Scan AssistantIn addition, one of the interviewed sonographers mentioned that shesometimes has to change the default presets based on some patients’ physi-ology. 
For instance, scanning obese patients requires a lot of effort to acquirethe same level of detail in the image as scanning normal patients, since thesonographer needs to go through changing some particular settings in thedefault preset to allow the ultrasound signal to penetrate through fat andnot cause too much noise.Evaluation of Voice-enabled FunctionsIn addition to eye gaze input, voice-enabled systems are another candidatethat have the potential to reduce the physical demand of interfaces, if in-tegrated carefully. Old models of Phillips iU22 were the first to integratevoice-enabled functions into the ultrasound machine. We surveyed sonog-raphers to understand their experience with this feature and its advantagesand disadvantages within the context of sonography. Out of 48 sonographers,only 12 experienced using voice-enabled ultrasound machines. Respondentsfound only little advantages associated with this feature, which explains whywe do not see it widely used in recent premium to high-end ultrasound ma-chines in the market. Only three sonographers found it helpful, with theadvantage being able to reduce repetitive strain injuries. The major dis-advantage found (mentioned by 11 sonographers) with this voice-enabledsystem is the confusion and interference with the sonographer-patient com-munication as explained by the respondents in details:“It does not allow you to connect with the patient during the exam. Pa-373.2. Resultstients have less anxiety if you can talk to them and you also need to get moreclinical history often.”“I did not like it, I found it distracting for me and the patient and theradiologist if they were present.”The rest of the respondents found it unhelpful due to poor voice recog-nition or a counter-intuitive interaction:“It is difficult to learn the entire dictionary required to operate properly.”“Extended exam time due to repeating commands.”On the other hand, one of the time-consuming tasks that the sonogra-pher is required to perform before and after the diagnosis is retrieving thepatient’s information from and saving to the work list and the PACS system.Based on one of the interviewed sonographers’, automating these tasks andintroducing improvements to it requiring less interaction time will improveher focus on performing her main job, that is imaging and diagnosis. Shesuggested using voice commands to automate the retrieval and modificationto the patient’s data.3.2.4 Work-related InjuriesThere is a great variation in the prevalence of work-related musculoskeletalinjuries in the population surveyed, as shown in Figure 3.12, which we as-sume is related to the variation in age and work experience. Nevertheless,92% of the respondents reported experiencing stress-injuries they believeis due to their career at least once, which agrees with an earlier study con-ducted in 2002 regarding the prevalence of musculoskeletal symptoms amongsonographers of British Columbia [50].More than half of the respondents reported that the experienced stress-injuries are rated as either painful (37.5%) or very painful (18.75%), as shownin Figure 3.13, which suggests that the ergonomics of the current ultrasoundmachine setting require to be seriously considered in terms of redesign andimprovement. Looking at what respondents believe the common causesof their work-related injuries are, we find that poor posture is the leadingcause, followed by sustained force and pressure, equipment design challenges,repetitive movements, infrequent short breaks and patient obesity. 
Moredetails are found in Figure 3.14.Some respondents pointed out more reasons why they believe are thecause of their work-related injuries. These include the following:• “Failure of machine manufacturers to provide any sort of support fornon-scanning arm/hand”383.2. ResultsFigure 3.12: Prevalence of Work-related Injuries Among Surveyed Sonogra-phersFigure 3.13: Severity of Work-related Injuries Among Surveyed Sonogra-phers393.2. ResultsFigure 3.14: Causes of Work-related Injuries Among Surveyed Sonographers• “Small fine motor movements”• “High case loads”• “Transducer weight and grip”• “Trackball design”One of the respondents explained the difficulty of returning back to workafter experiencing a stress-related injury “Once you are injured at work, evenif it is recognized as WCB (rarely), and you are given time off work, you areusually returning to work with pain/muscle weakness and tightness.”Only one respondent appeared to be confident about his/her inexperi-ence of work-related injuries by reporting: “I have never had injuries, evenwhen I worked 80 hours a week, part time plus full time job.”Repetitive MovementsOne of the functions we observed in action, panning and zooming, requiresthe sonographer to perform a number of steps before the image is zoomed.Once the probe is positioned over the required area, the following is per-formed by the sonographer:1. Enable the “zoom mode”,2. Using the trackball, move the zoom box to the location of object ofinterest on the screen,403.2. Results3. Press a button to toggle the function of the trackball from positioningto resizing the box (or vice versa),4. Using the trackball again, resize the box,5. Repeat 2 - 4 to fine-tune the size and position of the box as necessary,6. Finally, confirm the zoom action.This is only one form of zooming that is called the High-resolution zoom.Low-resolution zoom is quite straightforward, as it only requires a twist ofa knob to zoom in and out and moving the trackball to pan once the imageis zoomed. However, sonographers prefer to use the High-resolution zoomas it provides an image with an improved quality and more control over theborders of the zoomed image.In some scenarios, this repetitiveness in some functions causes some se-rious interruptions to the ultrasound exam work flow. One of the sonogra-phers recruited for the first-iteration user study, described in Appendix D,reported a difficulty in scanning patients with irregular breathing or thosewho cannot hold their breath for some time until the sonographer zoomsinto and acquires an image of the area of interest. Similar to the obstet-rics scenario, the structure is also in constant motion due to the patient’sbreathing. This makes it hard to obtain an image in a certain position whilesimultaneously performing a number of steps to zoom, freeze and capturethe image.Sonographer’s PostureWhile observing examinations at BC Women’s Hospital, we found that thesonographer was sitting down while performing the first 4 obstetric ultra-sound exams, and preferred to stand up for the next 4 gynaecologic ultra-sound exams. Therefore, the sonographer’s posture while using the ultra-sound machine is not always fixed.In cases where the patients have high body mass index (BMI), we ob-served that sonographers force themselves awkward and unnatural posturesto obtain clear images by simultaneously applying high pressure on the probeand struggling to reach the controls on the ultrasound machine. 
Sometimes,the sonographer exercises her wrist a little after the exam as the right handhas been exerting pressure on the probe to scan the patient.413.2. Results3.2.5 Ultrasound-guided ProceduresPhysicians who perform ultrasound-guided procedures, are a separate setof users of ultrasound machines. As explained in earlier chapters, their in-teraction with the ultrasound machine is quite different from sonographers,as they are typically secondary users of the machine and use the ultra-sound image to guide them in performing some interventional procedure,and not to optimize and capture images for later analysis. An ultrasound-guided procedure requires the operator’s hands to be both occupied by theprobe and needle and requires a sterile environment, which prohibits a directcontact with the ultrasound machine’s panel to change the image settings.Commonly, most of the settings are preset and their main focus of the radi-ologist is on the procedure itself (such as needle insertion). However, theremight be exceptions.We included questions regarding a potential hands-free interaction withthe ultrasound machine in our survey to get a preliminary idea of its feasibil-ity. Out of the 48 sonographers surveyed, 24 of them have an experience inassisting in ultrasound-guided interventional procedures including biopsiesand others, such as porting catheters, thoracentesis and pericardiocentesis.When asked about the frequency of changing ultrasound image settingsduring procedures, an equal number of respondents (N = 11) answered “Yes.All types of procedures frequently require it.” and“Yes, but the frequency ofthis need changes based on the type of interventional procedure being per-formed.” Only 6 respondents answered “No, most of the procedures haveimage settings pre-set. There might be exceptions though.” None of therespondents answered “No, not at all.”, as shown in Table 3.6.Table 3.6: The Need for Sonographers to Adjust the Ultrasound MachineParameters During an Interventional ProcedureNeed for Adjusting Parameters # RespondentsYes. All types of procedures frequently require it. 11Yes, but the frequency of this need changes basedon the type of interventional procedure beingperformed.11No, most of the procedures have image settingspre-set. There might be exceptions though.6No, not at all. 0423.2. ResultsThe attending medical staff of the observed breast biopsy included aradiologist and a sonographer. During the observation, we found that thephysician performing the biopsy did not interact with the ultrasound ma-chine, except for holding the probe to acquire images at the needed position.The sonographer assisting the physician performed all the ultrasound-relatedroutine starting by capturing images of the biopsy area before and after theprocedure and optimizing all the ultrasound image settings. During theprocedure, the only ultrasound machine buttons pressed by the sonographerwere “freeze” and “capture”. Three biopsy samples were taken, thereforethree ultrasound images were captured after taking each sample.Provided assistants are the primary users of the ultrasound machineduring interventional procedures, since they perform the required ultrasoundimage adjustments, we also asked in the survey if there are any difficulties inthe communication between the radiologist and the sonographer during anultrasound-guided procedure. Results are found in Table 3.7. Most of therespondents (N = 11) answered that there is difficulty, but it only dependson the background and training of the assistant. 
One of the respondentsfurther explained that there is lack of training of assistants recently, whichmakes this communication quite difficult:“Intervention is now performed in radiology departments where the sup-port is usually a nurse or a radiology technologist and they have no under-standing of the buttons on the unit that need to be adjusted”.One of the respondents explained further that some of the challengingultrasound image settings to communicate include “Changing depth, anglingthe beam, changing a preset”.Table 3.7: Difficulty Communicating with an Assistant During Ultrasound-guided ProceduresNeed for Adjusting Parameters # RespondentsNo, instructions are very straightforward. 5It depends on the experience and background of theassistant.11Yes, but it’s tolerable and does not affect the flowof the procedure.1Yes, it would be much easier if the radiologist couldchange the parameters directly.2433.3. Eye Gaze Tracking IntegrationWhen asked about the potential of integrating a hands-free interactionmode with ultrasound machines for interventional procedures, the majority(N = 11) of the respondents found it helpful, but it does not eliminate theneed for an assistant sonographer, as she knows better how to operate thedifferent functions in an ultrasound machine better than a radiologist as shehas received complete training on the machine. Results are found in Table3.8.Table 3.8: The Preference for Hands-free Control of Ultrasound Machinesin Ultrasound-guided ProceduresNeed for Adjusting Parameters # RespondentsYes, significantly. 7Yes, but the assistant might still know better interms of ultrasound machine settings control.11No, I do not prefer to interact with the ultrasoundmachine at all.23.3 Eye Gaze Tracking IntegrationWhen discussing the idea of integrating eye gaze trackers with sonographers,they found that it is worth exploring: as far as diagnostic sonography is con-cerned, eye gaze input could be very useful, as long as it does not introduceany interaction overhead. Given that the sonographers’ attention duringa diagnosis session is mostly focused on the ultrasound monitor, providingsettings selection with eye gaze tracking (based on where they are looking aton the screen) has the potential to improve their focus on the current taskand reduce attentional draw caused by searching for the knobs and buttonson the ultrasound machine’s panel.We find that, although zoom is listed only by 13 sonographers as afunction that is used in >90% of the ultrasound scans, it is the first functionin the order shown in Figure 3.7 that requires a 2D input, which can beprovided as a point of gaze. Another potential function is colour Doppler,as it requires positioning a box around an area of interest to show bloodflow. Measurements is another potential function, as it requires placingcalipers. However, caliper placement requires high accuracies. Based onearlier research and inherent capabilities of eye gaze tracking, as discussed443.4. Conclusionin Chapter 2, eye gaze input is not a suitable candidate for tasks requiringhigh accuracies.Integrating an eye tracker in such an environment can be challenging.Although the lighting conditions in an ultrasound exam room are optimalfor eye tracking, calibration is always required for high accuracy eye track-ing. 
Given the typical length of a general ultrasound exam of at least 20minutes, the routine changes in sonographer positions between sitting downand standing up, the frequent context switch between the monitor and thepatient, and in some cases the frequent rotation of the display towards thepatient and back towards the sonographer during the exam, there is a riskof calibration deterioration. Another risk for calibration deterioration, asreported in earlier studies [42], is due to a drift effect, which is caused bychanges in the characteristics of the eye over time while exerting mentaleffort due to changes in pupil size or dryness of the eye [44].3.4 ConclusionWe observed 18 ultrasound scans of various types, surveyed 48 sonographers,and conducted structured interviews and informal discussions mainly withtwo sonographers throughout the process. The data collected helped usunderstand in depth the environment of sonography and the sonographers’daily challenges.Each ultrasound exam is a structured process that involves an under-standing of the medical case, continuous communication with the patient,prolonged interaction with the ultrasound machine with the goal of per-forming scan-specific measurements and obtaining images with an optimalamount of details, and follow-up discussions with the patient and physicians.This process generates three main contexts of attention throughout the scan:the monitor, the patient and the machine controls.The types of ultrasound machine and image settings and their frequencyof use depends on the specific ultrasound scan type. We surveyed othertechnology that aims at reducing repetitive work-related injuries for sonog-raphers, such as scan automation and voice-enabled machine controls. Semi-automation protocols are widely-used due to their ability to shorten theexam time and reduce potential work-related injuries. Voice commands, onthe other hand, are not as efficient as they might seem. This is mainlydue to the hindered efficiency of communication between the sonographerand the patient. Nevertheless, we still find a prevalence of work-relatedinjuries (92%) among the surveyed sonographers, which requires attention453.4. Conclusionfrom hospitals and ultrasound machine manufacturers.When we investigated the usage of ultrasound machines in ultrasound-guided interventional procedures, we found that the interaction with themachine is minimal compared to sonography. Moreover, it is difficult toreplace the role of a sonographer due to her knowledge of the machine op-eration and the shared cognitive load with the physician performing theultrasound-guided procedure.Through this extensive field study, we identified potential advantages tointroducing a multi-modal gaze-supported ultrasound machine interfaces,such as a reduced manual interaction with the machine. We also narroweddown the potential ultrasound functions to consider as a starting point foreye gaze tracking integration. We also identified potential risks associatedwith multi-modal gaze-supported interfaces, such as the need for frequentre-calibrations and the potential interruption of work flow.46Chapter 4Gaze-supported InterfaceDesignThis chapter presents an overview of the analysis, design and implementationof a gaze-supported zoom interface for the class of ultrasound machines weare investigating. 
First, we apply the design concepts presented in Chapter2 related to input device interaction and image browser interaction to zoomand pan functions in ultrasound machines, including High-resolution zoomand Low-resolution zoom. Based on the analysis presented, we present astate-based representation of the zoom and pan functions in ultrasound ma-chines to offer a visual understanding of the interaction. Next, we presentan approach for integrating gaze-supported interaction into the presentedmanual-based state space. We also present our previously-investigated gaze-supported interaction approaches and highlight their limitations. Finally, wepresent implementation details of the gaze-based features including filtering,simultaneous panning and zooming algorithms and hardware interface de-sign.4.1 Design AssumptionsAs discussed in Chapter 3, currently available ultrasound machines offerHigh-resolution zoom, Low-resolution zoom and some provide a hybrid func-tion of the two alternatives. However, there is a variation in terms of the im-age quality produced between the two main zoom functions: Low-resolutionzoom post-processes the acquired image to change its magnification, there-fore the frame rate is unaffected with zooming and the image quality is notimproved. On the other hand, performing High-resolution zoom narrowsthe area of target acquisition by the ultrasound transducer, thus enhancingthe frame rate and providing a zoomed image with higher quality and de-tails. Since High-resolution zoom is not a post-processing function, panningthe zoomed image is typically not allowed, as it requires the probe to ac-tively reset the acquired sub-area of the overall visible range. Given these474.2. Input Device Interaction Conceptstechnical differences between the two zoom functions and the diverse set ofapproaches taken by ultrasound machine manufacturers, we set these designassumptions:1. Both Zoom Functions Provide the Same Resolution. Giventhat image resolution is not our main concern, we assume all magnification-related functions acquire the magnified area of interest with no im-proved quality. This means that we are not taking into considerationthe adjustment of the probe’s lateral and temporal resolution of theimage when it is magnified with the High-resolution zoom function.2. Rename the Zoom Functions. Consequent to assumption 1, werefer to the interaction of the Low-resolution zoom in ultrasound ma-chines as “One-step Zoom” (OZ) throughout this thesis and the inter-action of High-resolution zoom as “Multi-step Zoom” (MZ).3. Enable Pan and Resize. The sonographer is always able to panand resize a magnified image, whether it was obtained through OZ orMZ.4.2 Input Device Interaction ConceptsThe layout of ultrasound machines greatly varies based on the target users,available budget and target applications the machine is designed for. Some-times, machine interfaces differ between manufacturers even if they targetthe same application and class of users. First, we eliminate a great portionof variety by targeting to analyze the layout interface of only recent premiumand high-end machines that are designed for routine diagnostic ultrasoundexams performed by sonographers and observed during our field study, de-scribed in Chapter 3. This specific class of machines exhibits common char-acteristics in terms of the input devices on the machine’s controls layout andtheir mapping to the various ultrasound machine functions. 
Furthermore,we analyze only image magnification-related functions: Low-resolution zoom(or what we refer to as OZ), High-resolution zoom (or what we refer to asMZ), depth and focus. Figure 4.1 shows the layout of the GE Logic E9 ul-trasound machine, one of the commonly-used machines in routine diagnosticsonography.Table 4.1 lists the typical input devices used for the selected ultrasoundmachine functions. Note that, in some cases, the type of input device is notthe same across different machines. Later in this section, we discuss and484.2. Input Device Interaction ConceptsTable 4.1: Ultrasound Image Magnification-related Functions and Their As-sociated DevicesUltrasoundFunctionInput Device RoleMZ - Multi-step(High-resolution)ZoomTrackball Reposition/resizezoom boxPush button 1 Toggle betweentrackball functionsPush button 2 Confirm zoom/resetviewOZ - One-step(Low-resolution)ZoomClickable knob (Turn)Increase/decreasezoom ratio(Click) Reset tooriginal ratioTrackball Pan zoomed areaDepth Knob or arrowbuttons*Increase/decreasedepth valueKnob or arrowbuttons*Increase/decreasefocal depth valuePush button, knob orarrow buttons*Increase/decreasenumber of focal zones*Type of input device is not consistent across machines for this function494.2. Input Device Interaction ConceptsFigure 4.1: The Manual Controls Interface of the GE Logic E9 UltrasoundMachinesuggest the best type of input device based on the presented theory relatedto input devices.By referring to Buxton’s work on lexical and pragmatic considerations ofinput structures [10], and our discussions of his work in Chapter 2, we applythe presented theories to the design of magnification-related input devicesin this section.Lexical and Pragmatic CharacteristicsTable 4.2 shows four types of dimensionality involved in the sonographer-machine interaction for image magnification-related tasks: 0 (binary), 1D,2D and 1+1D. The selection of input devices for the binary case can beas simple as a push button. However, the other three types of dimensionsrequire more careful selection by further defining the property sensed andwhether it should be mechanical or touch-sensitive.Property Sensed Given the occupational musculoskeletal disorders al-ready common in sonographers due to their prolonged hours of work withultrasound machines, using input devices that introduce any type of unnec-essary pressure are avoided in the design of ultrasound machine interfaces, if504.2. Input Device Interaction ConceptsTable 4.2: Image Magnification-related Functions and Their Tasks Dimen-sionalityUltrasoundFunctionTask DimensionalityMZ - Multi-step(High-resolution)ZoomPosition box 2DResize box 1D (width) + 1D(height)Toggle resize andposition*0 (binary)Confirm zoomReset viewOZ - One-step(Low-resolution)ZoomChange zoom ratio 1DPan image 1D (width) + 1D(height)Reset image 0 (binary)Focus Position line 1D (verticalpositioning)Set number of lines 1DDepth Set depth value 1D*Type of input device is not consistent across machines for this function514.2. Input Device Interaction Conceptsan alternative can be found. Thus, input devices that sense pressure inputare eliminated. 
In his work, Buxton [10] discusses how to decide betweenmotion-sensing and position-sensing devices by asking the question “wouldmy input device cause a nulling problem?” A nulling problem occurs whena position-sensing input device is used for multiple functions in a system,thus changing the value for one function will position the device at a par-ticular place that will interfere with the value of the other function. Forinstance, if a designer decides to use the same input device for parameterA and parameter B facilitated by a mode switch, the numerical value forboth parameters must be the same at all times since the value is directlymapped to the position of the input device, which is not practical. Anothersource of the nulling problem occurs when the system automatically changesthe value of some parameter for auto-optimization purposes or to load somepre-defined settings. In such case, there will always be an inconsistencybetween the position of the input device that was last set by the user andthe value that the system has set. For this reason, motion-sensing devicesare the suitable option for ultrasound machine interfaces since the values ofimage parameters are often reset by the ultrasound machine system or setautomatically based on image presets.Mechanical vs. Touch-sensitive Devices In terms of the last classi-fication property, mechanical-based devices are accessed faster than touch-sensitive devices as they exhibit a more tangible physical interface. Since asonographer’s work requires a lot of context switch, glancing repetitively atthe controls to locate touch-sensitive devices (such as touch screens) makesthem impractical for image parameters that are used frequently throughoutan ultrasound exam and will take up more time and cognitive load by thesonographer.2-D Tasks For 2-dimensional tasks, we are presented with the choice be-tween a trackball and a mouse. A number of studies prove the superiorityof mice over trackballs in terms of time and accuracy for achieving point-ing and dragging tasks, such as the study conducted by MacKenzie et al.[43]. However, a follow-up study by Kabbash et al. [35] that adopted thesame tasks and input devices for their user study took into considerationthe differences in the performance between the dominant/preferred and thenon-dominant/non-preferred hand. In their study, they tested the hypothe-sis “Preferred and non-preferred hands yield the same speed, accuracy, andbandwidth using the mouse, trackball and stylus in pointing and dragging524.2. Input Device Interaction Conceptstasks”, which they rejected as the use of the trackball by the non-preferredhand yielded higher accuracies than the preferred hand. Additionally, theyargue, “The ease of acquiring a fixed-position device (such as a trackball,touch pad, or joystick) may more than compensate for slower task perfor-mance once acquired.” In the case of the multitude of functions a sonogra-pher performs with ultrasound machines which requires a detailed layout ofphysical input, using a non-stationary input device, such as a mouse, willrequire them to lift off their arms repeatedly to reach it and the rest of thephysical inputs, which consumes extra time, effort and cognitive load thatcould be avoided by using a stationary input device. Given that it has beenempirically proven that a trackball outperforms a mouse in terms of accu-racy when used by the non-dominant hand, trackballs are an integral part ofall current ultrasound machine interfaces for 2-dimensional tasks. 
Anotherinput device which has similar capabilities to a trackball and has not beendiscussed by Buxton [10], is the touch pad with multi-touch features, en-abling seamless zooming and panning. However, we have not observed anyultrasound machine interfaces with an integrated touch pad. Therefore, itis outside the scope of our discussion.1+1D Tasks Using a 1D device with a mode switch is sufficient to performa 1+1D task. For instance, a clickable continuous rotary knob can be usedto set the width and height of the area of interest in an image. Anotherapproach would be to use a 2D device with a threshold. The latter approachis taken when the number of input devices on a layout needs to be minimized,where the same 2D device can be used to perform a multitude of functionsincluding 1D, 1+1D, and 2D tasks. In ultrasound machines, a trackball isoften performing 1+1D involved in magnification-related functions.The Three-state ModelBased on Buxton’s three-state model [11], the types of transactions per-formed by a sonographer for each zoom function fall under the point/selecttasks. We generated the three-state model for both Multi-step (High-resolution)zoom and One-step (Low-resolution) zoom shown in Figures 4.2 and 4.3.Depth and focus are trivial cases with one state only for each input deviceinvolved, which we will briefly discuss.Figure 4.2 shows the three-state diagram for the input devices involvedin performing OZ in ultrasound machines including the clickable knob andthe trackball. Figure 4.3 shows a more complicated case for performingMZ for the input devices involved in performing the function, that is the534.2. Input Device Interaction Conceptstrackball and its supporting push buttons.One-step (Low-resolution) Zoom Figure 4.2 describes the interactionwith the OZ function that is composed of two independent user transactions.Referring to Table 4.2, OZ is composed of three tasks: one is a 1-dimensionaltask that handles the magnification ratio and another is a binary task thatresets the magnification. The third task is independent of the other two tasksas it handles panning the magnified image. A suitable input device for the1-dimensional and binary tasks is a continuous clickable rotary knob. Thesystem is always in state 1 tracking the knob’s position, unless it is pressed,which initiates a state 1-2-1 transaction. The third task involved in the OZfunction is a 1+1D task, which, as explained earlier, can be realized witha thresholded 2D input device such as a trackball. Referring to Buxton’swork [11], if no other input devices assist the transaction performed bythe trackball, it exhibits only one state: the tracking state. Therefore, thethree-state diagram is at its simplest case.Figure 4.2: Three-state Diagram for One-step (Low-resolution) Zoom inUltrasound MachinesMulti-step (High-resolution) Zoom 2D, 1+1D and three binary taskscompose the interaction with the MZ function. Note that two of the binarytasks (confirm zoom and reset zoom) occur in different image modes; there-fore the same binary input device (a push button) can be used for both. Asdiscussed earlier, a trackball is the selected choice for performing 2D taskswith ultrasound machines. The selected input device(s) for the 1+1D task544.2. Input Device Interaction Conceptswould determine the number of binary tasks (and therefore the number ofinput devices) there are to support the rest of the function. The followingare the possible options:1. 
Two separate 1-dimensional input devices (width and height controlof the zoom box), one 2D device (reposition the zoom box) and twobinary input tasks (confirm and reset zoom),2. One 1-dimensional input supported by a mode switch (toggle betweenwidth and height control), one 2D device (reposition) and three binaryinput tasks (confirm zoom, reset zoom and mode switch for the 1+1Dtask), or3. One 2D device with a mode switch (toggle between resize and reposi-tion the zoom box) and three binary input tasks (confirm zoom, resetzoom, toggle between resizing and repositioning).The diagram in Figure 4.3 shows the full interaction of Multi-step (High-resolution) zoom. Note that there are two main modes/states in this inter-action: Pre-zoom and Zoom. The Pre-zoom mode contains the set of tasksa sonographer performs to set the size and position of the area of interest,while in the Zoom mode, the only possible interaction is to reset the imageto its original view and to pan the image with the trackball. Given this modeswitch, a single push button can be used to perform the tasks confirmingthe zoom and resetting the image. In addition, within the Pre-zoom modethere are two sub-modes: reposition and resize. In the diagram, we denotePre-zoom’s reposition with (a), Pre-zoom’s resize with (b) and Zoom with(c). Thus, states 1 and 2 can be in any of the three (sub)modes based onthe user’s interaction.Performing a composite press and move with the trackball lowers its per-formance, as investigated by some studies such as MacKenzie et al.’s [43],which attributed the decrease in performance to the interaction between thesmall muscle groups of the fingers while moving the trackball and simulta-neously holding a button. Therefore, similar to One-step (Low-resolution)zoom, state 2 is always a 1-2-1 transaction.When the trackball is in state 1(a), two possible options can be per-formed with the two different mode switch input push buttons: either totoggle to resize the area of interest or to confirm the zoom. Similarly, if thesonographer decides to toggle to resize, the trackball’s state changes to state1(b) where the there are two possible options with the same mode switchinput push buttons: either to toggle to reposition the area of interest, which554.2. Input Device Interaction ConceptsFigure 4.3: Three-state Diagram for High-resolution Zoom in UltrasoundMachines564.2. Input Device Interaction Conceptstakes the trackball back to state 1(a), or to confirm the zoom. If the sonog-rapher decides to confirm the zoom while the trackball is in either of states(a) (reposition) or (b) (resize), the trackball moves to state 1(c), where it isable to pan the magnified view, similar to its function in OZ.Focus and Depth Depth is another function in ultrasound machines thatmagnifies the ultrasound image by adjusting the transducer’s image acquisi-tion to display a deeper view of the target. Although we do not extensivelyanalyze it, we classify depth as a magnification-related function. Similarly,focus is not particularly a magnification function. However, we include it inour discussion on image magnification-related functions in ultrasound ma-chines, as it is one of the direct image features that are changed along withmagnification. In ultrasound images, the focus is the horizontal line wherethe set of ultrasound beams produced by the probe are all focused at, there-fore producing the highest lateral resolution at that depth. 
The majority ofthe recent machines support multiple focal points, so an ultrasound operatorcan set a number of focal depths where the lateral resolution of the image ishighest. However, increasing the number of focal points comes at the costof decreasing the temporal resolution.Often, when a sonographer changes the image magnification, especiallyin MZ, the focus is reset to some position within the magnified area. Mostmachines place the focus in the middle of the magnified view. By referringto Table 4.2, focus is composed of two tasks: reposition the focal point(s)and change the number of focal points. Both tasks are 1-dimensional, so theoptimal input devices for these tasks would be either a continuous rotaryknob or a continuous treadmill thumb-wheel. In both cases, the three-state representation is a trivial 1-state diagram where the input device isconstantly tracking the position of the input device. The implementation inultrasound machines in terms of the choice of input devices for focus differsfrom one machine to another. However, most implementations dedicate aknob input solely for changing the position of the focal point(s), as it isfrequently accessed during a routine diagnostic ultrasound exam and placethe option for changing the number of lines within the touch screen display.Similarly, depth is a trivial 1-state case diagram where the system isconstantly tracking the position of the rotary knob or thumb-wheel dedicatedfor setting the depth of the image. Depth is one of the most frequently-accessed ultrasound functions; ultrasound machines typically have a separatededicated input device for depth setting, as shown in Figure 4.1574.3. Image Browser Interaction Concepts4.3 Image Browser Interaction ConceptsIn this section, we explain the approaches taken in the design of the magnification-related functions in commercially available ultrasound machines based onPlaisant et al.’s [47] image browser classification theory and compare andanalyze the choices taken based on the tasks and the class of users interactingwith different image browser alternatives.In routine diagnostic sonography, a sonographer’s main task is to locateand acquire an anatomical target and then to optimize the acquired imageto clearly show a particular area of interest. Often, a sonographer is alsorequired to perform secondary post-acquisition tasks such as measurementsand diagnostics and report all the acquired, generated and measured datato a physician. Based on Plaisant et al.’s [47] classification, a sonographermainly performs diagnostic tasks on the image in an image browser. Some-times, a sonographer’s task might also be classified as open-ended explo-ration as the sonographer is still in the localization phase of the anatomicaltarget to be acquired in the image (for example, as the sonographer is lo-cating the liver in an abdominal ultrasound exam).Once a target’s location is acquired with the probe, a sonographer mightchange the depth of the image to bring the target to the central attentionor perform a local zoom operation to be able to accurately measure an areaof interest. In either case, the sonographer’s task is a diagnostic task asshe is investigating an area of interest within a magnified image. For diag-nostic tasks, speed of panning and zooming is an essential requirement forthe image browser’s interface since the sonographer’s short-term memoryis actively being used to compare patterns and investigate parts of the im-age. 
Additionally, the interface should include a mechanism to provide thesonographer with a context within a zoomed image, as complete coverage iscrucial to show the target in relation to the surrounding anatomy.Presentation Aspects Figure 4.4 shows the complete taxonomy for browserpresentation aspects as presented by Plaisant et al. [47], with additionalcheck marks next to the optimal presentation aspects for ultrasound ma-chine image browsers. As mentioned earlier, an effective browser interfacehas to provide the sonographer with contextual awareness. Therefore, ahybrid of single and multiple views is a rational design approach for the pre-sentation of the image browser in ultrasound machines, where the multipleviews are presented as an overview-detail pair where the detail acts as azoom-and-replace view and the overview provides the sonographer with thedimensions and the location of the detailed view. Given that the sonogra-584.3. Image Browser Interaction ConceptsFigure 4.4: Browser Taxonomy Presentation Aspects [47]: A check mark isadded next to the design choices for ultrasound machine image browsers.594.3. Image Browser Interaction Conceptspher performs accurate measurements on the detailed view after zoominginto an acquired image, the aspect ratio between the overview and detailmust be low, with a larger detail view since it is the central attention of thesonographer. Consequently, any changes within the detailed view are mir-rored to the overview, and vice versa, making the views tightly coupled withbidirectional coordination, which reduces the cognitive load of the sonogra-pher by not having to manage the views in addition to managing her primaryultrasound-imaging task. Similarly, continuous/incremental update of theimage (in this context, by “update” we refer to updating the detailed viewas the sonographer zooms in) allows the sonographer to concentrate on hermain ultrasound image acquisition task and not on the browser navigationtool. In the presented taxonomy, the authors [47] classify the nature of theimage update to zoom, explode and distort. Due to the nature of imagesand the transducer capability of ultrasound machines, the only applicableoption for updating an ultrasound image to reveal higher definition detail ismagnification-related functions (zoom and depth). Explode refers to reveal-ing an internal structure of the zoomed part, which is applicable in areassuch as dense maps or network diagrams. Distort refers to a fisheye-likepresentation, such as the work presented in [56], which is not suitable forultrasound images as it will interfere with an accurate presentation of thestructure and with accurate measurements of the target. The zooming factoris the ratio between the presented image in the overview and the detailedviews in terms of the level of magnification. Plaisant et al. [47] suggestempirical testing to set these values as they differ from one application toanother. However, based on their experience, the ratio between the twoviews should not exceed 20:1, otherwise intermediate views should be calledfor. In ultrasound imaging, high zoom levels are not required as in the caseof dense images, as the magnification does not reveal new data, but onlycentres the target of interest in the view and allows for higher accuraciesof operations to follow. For instance, GE’s Voluson E8 does not allow amagnification ratio beyond 3.4:1.Operation Aspects Figure 4.5 shows the operation aspects taxonomybased on Plaisant et al.’s [47] work. 
We modified it by adding extra op-tions to zoom and pan as implemented in the ultrasound machines’ imagebrowser interface. Similar to the presentation taxonomy, we highlighted theoperation approaches followed in ultrasound machines. However, due to thediversity of machines in the market, there is no unified approach to imple-menting the operation aspects of ultrasound image browsers. As mentioned604.3. Image Browser Interaction ConceptsFigure 4.5: Browser Taxonomy Operation Aspects [47]: The designchoices for the different alternatives of ultrasound machine image browsermagnification-related functions are highlighted with different check marks.earlier in this chapter, most machines offer a Low-resolution quick-accesszoom function (OZ) and another High-resolution zoom (MZ) function thatprovides the machine operator with higher control over the location anddimension of the magnified image, which requires the usage of more thanone type of input device to achieve the magnification task. Some newermachines offer a hybrid of both zoom options, which some machines nameit the PanZoom option: a Low-resolution magnification with extra controloptions over the area of interest’s dimensions. Secondly, machines differ es-pecially in the design of the operation of the High-resolution zoom (MZ).Through our market research and field observations, we were able to find thedesign trends in the image browser zoom design for premium and high-endultrasound machines used in routine diagnostic ultrasound exams. Finally,sonographers often interchangeably use depth and zoom as these functionsserve the same task with varying levels of control and image quality to mag-nify an area of interest.In Figure 4.5, we separately identified the operation aspects followed byHigh-resolution zoom (MZ) and Low-resolution zoom (OZ). Additionally,we omitted the automation recommendations of the operational aspects ofimage browsers, since we are concerned with the design of the manual mag-nification function itself, and not in secondary image browser tasks discussed614.3. Image Browser Interaction Conceptsby Plaisant et al. [47] such as saving points, navigation, window manage-ment and search.State-based AnalysisTo understand the mapping between ultrasound machine manual input de-vices and their functionalities for performing the zoom and pan actions, wegenerate a state-based analysis. This analysis also provides a visualizationof the interaction that helps later in identifying the potential areas of eyegaze input integration.We focus on zoom functions that implement three states: Full-scale im-age, Pre-zoom and Zoom.1. Full-scale is the state where the whole image is displayed to the sono-grapher and panning is not possible.2. Pre-zoom is the state where the user is actively changing the dimen-sions and position of the zoom box, or area of interest to be zoomedinto.3. Zoom is the state where only a magnified portion of the whole imageis visible.Figure 4.6 shows the typical interface layout used to operate magnification-related functions. In addition, Figure 4.7 is a state diagram for the zoomfunctions discussed. An illustration of the interaction is shown in Figure4.8.The Zoom and Pre-zoom states each have two sub-states. Based on thesub-state, the functions of some of the input devices shown in Figure 4.6change. 
Sub-state (a), in both the Zoom and Pre-zoom states, is associated with movement:

• Zoom (a): the trackball pans the image.
• Pre-zoom (a): the trackball repositions the zoom box.

In sub-state (b), the function of the trackball is to resize:

• Zoom (b): the trackball resizes the zoomed area.
• Pre-zoom (b): the trackball resizes the zoom box.

Figure 4.6: The Input Layout Design of the Traditional (Manual-based) Ultrasound Machine Interface with the Mapped Functions per State for Zoom Functions.

Figure 4.7: A State Diagram of Zoom Functions in Ultrasound Machines: MZ includes all three states. OZ includes only the Full-scale and Zoom states.

Figure 4.8: An Illustrated Interaction of One-step Zoom, Multi-step Zoom and Panning of the Traditional (Manual-based) Ultrasound Machine.

Switching between the sub-states, (a) to (b) or (b) to (a), is always done with the same toggle button.

In the ultrasound machine layout design, the zoom knob's button is the main input for transitioning between the states in a periodical fashion: in the Full-scale state, pressing the button transfers to the Pre-zoom state; in the Pre-zoom state, pressing the button transfers to the Zoom state; and in the Zoom state, it transfers back to the Full-scale state.

The zoom knob rotation is only enabled in the Full-scale and Zoom states: it zooms in or out with a constant ratio and a uniform image border size.

OZ includes only the Full-scale and Zoom states and their sub-states. MZ includes all three states.
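To make this mapping concrete, the following sketch encodes the manual state machine described above in Python. It is illustrative only: the class and method names are ours rather than part of any ultrasound machine API, the handlers return labels instead of driving an imaging pipeline, and the assumption that each state begins in its movement sub-state is ours as well.

```python
from enum import Enum, auto

class State(Enum):
    FULL_SCALE = auto()   # whole image visible, no panning
    PRE_ZOOM = auto()     # zoom box shown, image not yet magnified (MZ only)
    ZOOM = auto()         # magnified portion of the image visible

class SubState(Enum):
    MOVE = auto()         # (a): pan the image / reposition the zoom box
    RESIZE = auto()       # (b): resize the zoomed area / the zoom box

class ManualZoomInteraction:
    """Illustrative model of the manual MZ zoom interaction (Figures 4.6 and 4.7)."""

    def __init__(self):
        self.state = State.FULL_SCALE
        self.sub_state = SubState.MOVE

    def press_zoom_knob_button(self):
        # The zoom knob's button cycles Full-scale -> Pre-zoom -> Zoom -> Full-scale.
        cycle = {State.FULL_SCALE: State.PRE_ZOOM,
                 State.PRE_ZOOM: State.ZOOM,
                 State.ZOOM: State.FULL_SCALE}
        self.state = cycle[self.state]
        self.sub_state = SubState.MOVE  # assumed: each state starts in sub-state (a)

    def press_toggle_button(self):
        # The same toggle button switches (a) <-> (b) in the Pre-zoom and Zoom states.
        if self.state in (State.PRE_ZOOM, State.ZOOM):
            self.sub_state = (SubState.RESIZE if self.sub_state is SubState.MOVE
                              else SubState.MOVE)

    def rotate_zoom_knob(self, delta):
        # Knob rotation is only enabled in the Full-scale and Zoom states.
        if self.state in (State.FULL_SCALE, State.ZOOM):
            return ("change_zoom_ratio", delta)
        return None

    def move_trackball(self, dx, dy):
        # The trackball's role depends on the current state and sub-state.
        if self.state is State.PRE_ZOOM:
            return ("reposition_zoom_box" if self.sub_state is SubState.MOVE
                    else "resize_zoom_box", dx, dy)
        if self.state is State.ZOOM:
            return ("pan_image" if self.sub_state is SubState.MOVE
                    else "resize_zoomed_area", dx, dy)
        return None  # the trackball has no zoom-related role in Full-scale
```

For OZ, the same model applies with the Pre-zoom state removed: rotating the knob from the Full-scale state magnifies the image (entering the Zoom state), and clicking the knob resets it to Full-scale.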
4.4 Integrating Eye Gaze Input

By mapping eye gaze to Buxton's [10] tableau of input devices, eye gaze serves as a 2-dimensional input that conveys position. Any other actions to be performed by the user related to selection must be conveyed either through another channel (such as manual input, speech, gesture, etc.) or through other eye behaviour, such as dwell time, in case a multi-modal option is not feasible. Consequently, using the three-state model [11], independent eye gaze input supports states 0 and 1, where it can be either OOR (eye gaze is undetectable by the tracker) or actively tracked for position. A second input modality, such as manual buttons, must be integrated for eye trackers to support state 2.

In his work, Jacob [32] classifies the interaction techniques performed with eye tracking-supported interfaces. One or more interaction techniques can be present in an interface based on the task requirements of the system. The first technique discussed is "object selection". As the name suggests, object selection is the action of intentionally looking at a particular object in the user interface and performing selection. Similar to object selection is another interaction technique called "continuous attribute display", which is the retrieval of information of the particular object on the screen upon user selection. "Moving an object" is another interaction technique, which can be achieved in one of two approaches, where in both the eye is used to point at the particular object of interest to be moved. In the first approach, the manual input signal performs both the confirmation of selection and moves the object, while in the second approach, the manual input is used only to confirm the initial and final positions of the object and the eye is used to drag the object (the object is latched to eye movement). The next interaction technique is "Eye-controlled scrolling text", which can be generalized to scrolling of any type of content within graphical borders. The idea behind it is that as the eye reaches the end (edge) of the text (content), the interface naturally scrolls to reveal the rest of it. "Menu Commands" is another interaction technique discussed, which is quite similar to "object selection", except that the objects being selected are graphical user interface menu items. The final interaction technique discussed is "Listener Window", which automatically sets the listener/active window based on the user's location of gaze in window-based graphical user interfaces.

In both our design alternatives, we follow the recommendations made by [68], and followed by numerous gaze-supported interaction work such as [42], to not overload the visual channel with motor control by enabling eye gaze input only when combined with a manual input to initiate a motor control action.

4.4.1 One-step Zoom

One-step Zoom could leverage eye gaze input implicitly by using the point of gaze to define the area of interest the user aims to zoom at. Traditionally, OZ magnifies the centre of the image. If the user were interested in an area located towards the side of the image, manually zooming into the centre would require the user to eventually pan the zoomed area to reach the area of interest located at the side. With eye tracking, the system is already aware of the area of interest the user aims to zoom at, therefore it would require less or no follow-up panning to reach the area of interest. From an abstract perspective, this interaction falls under "object selection" [32], since the sonographer implicitly selects a particular feature in the image to zoom into/out of.

4.4.2 Multi-step Zoom

By referring to Jacob's [32] work, this interaction corresponds to "moving an object", as the eye gaze input is used to set the position of an object on screen. In this context, the object is the zoom box that defines the area of magnification. The work presented in [32] suggests two techniques: the first is to use the eye gaze only to select the object to be moved and then use a manual input device to perform dragging, and the second is to use the eye gaze to select the object and latch the position of the object to the eye gaze as long as another muscle group is engaged in the interaction (a button is depressed, for example). In the case of our application area, there is only one object to be selected, which is the zoom box, so the first implementation approach would not apply. Furthermore, in [67], they observe that users preferred the second approach more, as once they pick up the object, they directly look at the desired destination, and thus latching the position of the box to the eye gaze would make the interaction much faster. They also note that users performed better when the destination formed a clear recognizable feature, and not just white empty space, as it is easier for the eye to look at and fixate on particular features. However, one unavoidable risk that the interaction could run into, due to a moving object on the screen latched to the eye gaze, is the positive feedback loop, as explained later in Jacob's [32] work on eye tracking interfaces, which might occur in case the initial calibration was not accurate enough, causing the user's eyes to be drawn to the object and displacing it further.

Although we cannot employ the same interaction techniques explored by Zhai et al.
[68], as there are no distinct targets in the interface knownto the system as “hotspots” and the whole image is treated as a target,we could still adopt some concepts, especially from his conservative designapproach, which warped the eye gaze to the targets and fine movements arefurther performed by manual input. In our design, the zoom box is alwayswarped to the user’s fixation and fine details, such as the box dimensionsare determined by a manual input.This design follows the first refinement step in the two-step refinementzoom approach described in [42]: the region of interest is only defined withina confidence interval (zoom box) as the user presses a key and looks at aspecific area. Contrary to [42], our design does not capture a zoomed areawithin the zoom box before performing the zoom action, as this will distortthe underlying ultrasound image. The full two-step refinement approachis more suited for fine targets on screen, such as for text reading or webbrowsing.4.4.3 Gaze-based PanningFollowing the eye-gaze interaction approach introduced by Jacob [32] named“eye-controlled scrolling text”, eye gaze input can also be used in our contextto scroll the zoomed image to reveal more content, just as it is presented inJacob’s [32] work (and many later studies, such as [34]) to reveal more textonce the point of gaze approaches the edges of the image.684.5. Proposed Design4.5 Proposed DesignThrough exploring a number of design alternatives, as detailed later in sec-tion 4.6, we arrive at our final interaction design, as shown in the layoutdiagram in Figure 4.9 and in the state diagram in Figure 4.10. An illustra-tion of how this interaction works is further graphically illustrated in Figure4.11.Figure 4.9: The Interface Layout of the Proposed Design Alternative: theactive input elements are the trackball, the toggle button, the gaze button,and the clickable zoom knob.This design has the same base as the interaction design of the typicalultrasound machine zoom functions explained earlier in this chapter. Thedifferences are the following added gaze features.1. In the Zoom state, rotating the zoom knob (zooming in) always takesthe point of gaze as an input and zooms into that area.2. In the Pre-zoom state, holding a button moves the zoom box based oneye gaze input.The first feature targets the One-step Zoom function by using the user’seye gaze implicitly as the user is already looking at the point of interest694.5. Proposed DesignFigure 4.10: The Interaction State Diagram of the Proposed Design Al-ternative: the same as the state diagram in Figure 4.7, with four addedgaze-supported interactions taking the Point of Gaze (POG) as an input.In the Zoom states, eye gaze input is implicitly integrated. In the Pre-zoomstates, eye gaze input is explicitly used to move the zoom box.704.5. Proposed DesignFigure 4.11: An Illustrated Interaction of One-step Zoom, Multi-step Zoomand Panning of the Proposed Design.714.6. Earlier Investigated Design Alternativeswhile zooming. In the manual-based interaction alternative, One-step Zoomrequires iterative panning as zooming in always uses the centre of the visibleimage as the point of interest. In this gaze-supported alternative, the needfor panning is omitted, as the system automatically sets the point of gazewithin the visible image as the point of interest. 
This is designed to reducethe manual interaction and speed up the zooming task.The second feature targets the Multi-step Zoom function by latching themovement of the box to the movement of the user’s eye gaze. Although this isan explicit gaze-supported interaction, it is only activated upon pressing andholding a button, which is designed to eliminate the Midas touch problemdiscussed earlier in this thesis. Similarly, this is designed to reduce themanual interaction by reducing the use of the trackball to move the zoombox and consequently speed up the interaction.Unlike earlier work in the field [25] [2] [46], we did not integrate gaze-supported panning due to negative feedback and observations during pilotstudies. Similar challenges have been reported in earlier work [54] [30], whereparticipants found the gaze-based panning feature interfering with visuallytracking the object.4.6 Earlier Investigated Design AlternativesIn this section, we present the earlier design alternatives we investigated forOne-step Zoom, Multi-step Zoom and gaze-supported panning and explainthe reasons they proved to be inefficient either through tests performed withend users or through an analysis of interaction.4.6.1 Alternative 1This design alternative for OZ, MZ and panning in both zoom techniqueswas designed and tested with end users during the first iteration of thisuser-centred design approach. Detailed results can be found in Appendix D.The layout diagram is shown in Figure 4.12, the interaction state diagramis shown in Figure 4.13, and an illustration of the interaction is shown inFigure 4.14.One-step Zoom We adopt the interaction presented in [2], named DTZ(Dual-to-Zoom), which uses one button to zoom into a point of interest,defined by the location of the point of gaze, and another button to zoom out.We dedicate a third button for panning within the zoomed image, as will724.6. Earlier Investigated Design AlternativesFigure 4.12: The Interface Layout of Design Alternative 1: the active inputelements are the trackball, push button 1, push button 2, push button 3,and push button 4.734.6. Earlier Investigated Design AlternativesFigure 4.13: The Interaction State Diagram of Design Alternative 1: thetotal number of states are reduced by omitting the sub-states of Zoom andPre-zoom.744.6. Earlier Investigated Design AlternativesFigure 4.14: An Illustrated Interaction of One-step Zoom, Multi-step Zoom,and Panning of Design Alternative 1.754.6. Earlier Investigated Design Alternativesbe detailed later. In our design, the first zoom action (120%) is performedbased on the point of gaze. The consecutive “zoom-in” actions zoom into the centre of the visible frame by a factor of 30%. “Zoom-out” alwaysbacktracks with every button click until the original image is restored.Limitation Although this design alternative proved to have potential inthe earlier work investigated [2], adopting it in ultrasound machines will re-quire extra hardware (push buttons) that are dedicated for the OZ function.Given that we aim to integrate our gaze-supported interaction design withthe currently-available premium to high-end ultrasound machine interfacesused in diagnostic sonography, we decide to explore a different design forOne-step Zoom that adopts the same input devices used in the targetedultrasound machines.Multi-step ZoomRepositioning the zoom box Once the user enables the zoom mode, azoom box appears on the image and latches to the user’s eye gaze. 
A simpleaveraging filter is applied, as shown in Figure 4.18, to reduce the jittery effectof rapid eye movements. Furthermore, the opacity of the box intensifies asthe user gazes longer into a particular region and goes transparent again asthe user rapidly looks away.Resizing the zoom box The user can freely adjust the size of the zoombox through the trackball. Once the user starts resizing the box, the boxstops following the eye gaze movement to allow for precision. In other words,the zoom box is not latched to eye gaze as long as the user is activelychanging the size of the box. The user also has the option to press a buttonto manually “hold” the box in place.Confirming the zoom area Finally, once the dimensions of the box areset and the location of the area of interest is held with a button, the usersimultaneously presses another button to confirm the zoom action. Thesame zoom button is pressed again to zoom out and restore the originalimage.Limitation Through testing with end-users, we find that the combinationof button presses required for confirming the zoom function is cumbersome764.6. Earlier Investigated Design Alternativesto learn and perform, let alone to perform repetitively every time the sonog-rapher zooms into a target. Latching the zoom box to eye movement withouta simultaneous button press caused a lot of distractions, as reported by sono-graphers and further detailed in Appendix D on the user study from the firstiteration.One of the alternative interface designs we thought of, which combinesthe visual feedback idea from the zoom box, but eliminates its distractinglarge shape, is replacing the zoom box with a cross-hair or a point that movesalong with the eye. However, in addition to the previously discussed positivefeedback loop issue, when the idea was presented to some of the end users,they pointed out that it might not be effective since some targets might beof an irregular shape and do not have a “centre” where the pointer shouldalign with. This will cause additional cognitive load to the sonographer thatwill lead them to try and find the centre of a certain target and align thepointer with it before zooming into it.Pan Regions For both OZ and MZ, once in the zoom state, pan regionsare defined at the boundaries of a zoomed image, as shown in Figure 4.19.If the point of gaze falls within one of these regions while simultaneouslyholding the pan button, the image scrolls in that direction. This approachhas been deployed in earlier gaze-supported zoom interactions, such as thatpresented in [2].4.6.2 Alternative 2This design alternative was explored in the second iteration of this user-centred design approach. We decide not to proceed with testing the in-teraction design with end users due to the limitations we realize throughanalyzing the interaction.This interaction design alternative, shown in the input layout diagram inFigure 4.15, the state diagram in Figure 4.16, and the interaction diagram inFigure 4.17 greatly simplifies the interaction by reducing the total numberof states the system transitions between as Zoom and Pre-zoom do not havesub-states anymore (compared to the original design in Figure 4.7). Thisis achieved through completely delegating all the position-related 2D inputfunctions to the eye gaze input. Thus, panning the image in the Zoom stateis simply achieved through holding a button and looking at the edges ofthe image. 
Similarly, moving the zoom box in the Pre-zoom state is simply achieved through holding a button and looking at the intended area.

Figure 4.15: The Interface Layout of Design Alternative 2: the active input elements are the trackball, the gaze button, and the clickable zoom knob.

Figure 4.16: The Interaction State Diagram of Design Alternative 2: the combination of inputs is reduced compared to alternative 1, displayed in Figure 4.13, for some functions.

Figure 4.17: An Illustrated Interaction of One-step Zoom, Multi-step Zoom, and Panning in Design Alternative 2.

Limitations  Theoretically, the design holds potential to simplify the interaction and would inevitably reduce the repetitive interaction, especially by eliminating the need to toggle between sub-states. However, through pilot testing of the interaction, we realize that it is not feasible due to the following challenges:

1. Zoom state panning: looking at the edges of the image is counter-intuitive to what a sonographer (or any user performing a task requiring image inspection) does. When the user looks at the edge of the image and presses the pan button, the image moves to bring the object of interest inside the field of view. However, centring the object is not possible, as the user must look back and forth between the edge of the image (to perform panning) and the target (to track its position).

2. Pre-zoom state box movement: this works for long-distance movements; however, as discussed in many previous eye gaze tracking studies and as observed in the results from the first iteration, fine, accurate motion of objects on screen based on eye gaze input is not possible due to the eye's jittery movements.

4.7 Gaze-supported Features: Implementation Details

4.7.1 Filtering Gaze Data: Moving-average Filter With a Threshold

Although the gaze data generated by the eye gaze tracker is filtered, further filtering is recommended for the purposes of our control-based application, as observed during our initial user study found in Appendix D. Thus, we adopt a simple moving-average filter, as used in earlier HCI studies related to gaze-supported interactions [33] [65]. Additionally, we activate the filter only when the gaze is moving over small distances; for larger distances, the filter is disabled. The window size and the rest of the constants used in this filter were determined through trial and error and a number of pilot tests. An overview of this algorithm is summarized in Figure 4.18. The overall interaction with the gaze-supported zoom interface, which uses this filter, is later evaluated in Chapters 5 and 6.

The following variables are used in the filtering algorithm. The selected values were determined through trial and error.

• (x_old, y_old) are the coordinates of the previous unfiltered fixation.
• (x_new, y_new) are the coordinates of the current unfiltered fixation.
• d_list is the list of distances between successive fixations.
• d_win_size is the window size of d_list; we select a value of 10.
• d_threshold is a limit on the average distance between fixations; an average of d_list higher than this threshold indicates a large jump in eye movement.
• fix_list is the list of averaged (filtered) fixations within a distance less than d_threshold.
• fix_win_size is the window size for the successive fixations within d_threshold; we select a value of 100.

The algorithm below explains the filtering mechanism.

1. Block invalid fixations: fixations with an invalid flag (Validity = 0) or fixations outside the area of the screen are filtered out.

2. Calculate the average distance between successive fixations over the past d_win_size fixations:

(a) Evaluate

d = \sqrt{(x_{old} - x_{new})^2 + (y_{old} - y_{new})^2} \qquad (4.1)

(b) Append d to the list of fixation distances d_list.

(c) Evaluate

d\_average = \frac{\sum d\_list}{d\_win\_size} \qquad (4.2)

(d) If d_average > d_threshold, clear fix_list and d_list.

(e) Else, append the fixation (x_filtered, y_filtered) to fix_list with the following entry:

x\_filtered = \frac{\sum_{fix\_list} x}{\mathrm{size}(fix\_list)} \qquad (4.3)

y\_filtered = \frac{\sum_{fix\_list} y}{\mathrm{size}(fix\_list)} \qquad (4.4)

Figure 4.18: The Gaze Data Filtering Algorithm Used.
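A minimal Python sketch of this filter follows, assuming gaze samples arrive as (x, y, validity) tuples in screen pixels. The window sizes mirror the values listed above; the screen dimensions and the distance threshold are placeholder assumptions, since the constants were tuned through pilot testing, and step (e) is read here as appending the raw fixation and reporting the running average.

```python
import math
from collections import deque

class GazeFilter:
    """Sketch of the thresholded moving-average filter of Section 4.7.1."""

    D_WIN_SIZE = 10      # window of successive fixation distances (d_win_size)
    FIX_WIN_SIZE = 100   # window of fixations to average (fix_win_size)
    D_THRESHOLD = 40.0   # d_threshold in pixels -- assumed; tuned by pilot testing

    def __init__(self, screen_w, screen_h):
        self.screen_w, self.screen_h = screen_w, screen_h
        self.d_list = deque(maxlen=self.D_WIN_SIZE)
        self.fix_list = deque(maxlen=self.FIX_WIN_SIZE)
        self.prev = None

    def update(self, x, y, valid):
        # 1. Block invalid fixations and fixations outside the screen area.
        if not valid or not (0 <= x < self.screen_w and 0 <= y < self.screen_h):
            return None

        # 2. Track the average distance between successive fixations.
        if self.prev is not None:
            d = math.hypot(x - self.prev[0], y - self.prev[1])   # Eq. (4.1)
            self.d_list.append(d)
        self.prev = (x, y)
        d_average = sum(self.d_list) / self.D_WIN_SIZE           # Eq. (4.2)

        if d_average > self.D_THRESHOLD:
            # Large jump in eye movement: clear both lists so the output
            # follows the eye immediately instead of lagging behind it.
            self.d_list.clear()
            self.fix_list.clear()

        # Average the recent fixations to suppress small jittery movements.
        self.fix_list.append((x, y))
        x_filtered = sum(p[0] for p in self.fix_list) / len(self.fix_list)  # Eq. (4.3)
        y_filtered = sum(p[1] for p in self.fix_list) / len(self.fix_list)  # Eq. (4.4)
        return (x_filtered, y_filtered)
```

Because both lists are cleared on a large jump, the first output after a saccade is simply the raw fixation, which matches the intent of disabling the filter for long-distance movements.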
4.7.2 Gaze-based Simultaneous Pan And Zoom

Given the jittery nature of eye gaze data, zooming into where the user is looking will not yield accurate results: zooming enlarges the target and shifts its location on screen, causing the eye gaze to keep shifting as the user gradually zooms in. This challenge has been acknowledged in earlier eye gaze tracking interface research, such as the work by Kumar [41], who explains how the eye gaze error increases with increased zoom levels when interacting with maps:

"If the user was looking at point P, chances are that the eye tracker may think that the user is looking at the point P + ε, where ε is the error introduced by the eye tracker. Once the user initiated a zoom action, the map is magnified. Therefore, if the zoom factor is z, then the resulting error gets magnified to zε, which can be considerably larger than the original error."

To reduce this effect, we apply simultaneous zooming and panning, where the image zooms into where the user is looking and then pans to correct for the shifted target position. A similar approach has been used in other gaze-supported control applications, such as the one implemented by [69]. We follow the same gaze-driven camera control algorithm to centre the target after the zoom action, for a limit of 500 milliseconds.

The following variables are used in the simultaneous panning and zooming algorithm:

• POG is the filtered input point of gaze.
• C is the visible image centre.
• r_o is the radius around the centre.
• IM_velocity_max is the maximum velocity at which the image moves.
• D is the maximum distance from the centre within the visible image (half the image diagonal).

The following describes the simultaneous panning and zooming algorithm, based on the work presented in [69].

1. Evaluate:

d = |POG - C| = \sqrt{(POG_x - C_x)^2 + (POG_y - C_y)^2} \qquad (4.5)

\theta = \mathrm{atan2}(|POG_y - C_y|, |POG_x - C_x|) \qquad (4.6)

2. If d < r_o: the image remains in the same position.

3. Else, evaluate:

F_G = \frac{d}{D} \qquad (4.7)

IM\_angle\_current = \theta \qquad (4.8)

IM\_velocity\_current = F_G \cdot IM\_velocity\_max \qquad (4.9)
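A sketch of how this centring step can be realized follows. It is one reading of the algorithm rather than a verbatim transcription of [69]: Equation (4.6) only gives the magnitude of the pan angle, so the signed direction from the image centre to the point of gaze is used here to obtain a velocity vector, and the function and parameter names are ours.

```python
import math

def centring_velocity(pog, centre, r_o, v_max, diag):
    """One reading of the gaze-driven centring step (Equations 4.5-4.9).
    pog    -- filtered point of gaze (x, y) in pixels
    centre -- centre C of the visible image
    r_o    -- dead-zone radius: no panning when the gaze is near the centre
    v_max  -- IM_velocity_max, the maximum pan speed per update
    diag   -- diagonal of the visible image, so D = diag / 2
    Returns the (vx, vy) translation applied to the image this update."""
    dx, dy = pog[0] - centre[0], pog[1] - centre[1]
    d = math.hypot(dx, dy)            # Eq. (4.5)
    if d < r_o:                       # gazed-at target is already near the centre
        return (0.0, 0.0)
    f_g = d / (diag / 2.0)            # gain factor F_G, Eq. (4.7)
    speed = f_g * v_max               # IM_velocity_current, Eq. (4.9)
    # Translate the image so the gazed-at content drifts toward the centre;
    # flip the sign if the viewport, rather than the image, is moved.
    return (-speed * dx / d, -speed * dy / d)
```

In the One-step Zoom mechanism described in Section 4.7.4, this correction is applied for at most 500 milliseconds after each zoom-in step, with IM_velocity_max further scaled down as the zoom ratio grows.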
4.7.3 Gaze-based Panning Based on Pan Areas

Figure 4.19: Gaze-supported Interaction Pan Areas Located at the Edges of the Image (8 Areas).

Another panning algorithm used in our design moves the image based on the area in which the point of gaze falls. As described earlier, this technique has been used in other gaze-supported pan and zoom work, such as [2] and [46]. Figure 4.19 shows the pan areas as the blocks enclosing the arrows. Each arrow points in the direction of image movement when the point of gaze falls in that block. If the point of gaze falls in any of these areas, the image translates with a speed equal to IM_velocity_max (refer to the previous section).

The ratios in Figure 4.19 are as follows: x_1 = 0, x_2 = 0.25W, x_3 = 0.75W, y_1 = 0, y_2 = 0.25H, y_3 = 0.75H, where W and H are the width and height of the visible image.

It is also important to note that, for both panning algorithms, IM_velocity_max is adjusted according to the zoom level based on a third-degree polynomial: speed_ratio = ax^3 + bx^2 + cx + d, where x is the zoom ratio.

4.7.4 Mechanism of Gaze-supported One-step Zoom

For OZ, a combination of both panning algorithms (simultaneous pan and zoom, and panning based on pan areas) is used. After zooming into the location of the point of gaze, the image automatically pans for 500 milliseconds based on the following criterion: if the point of gaze does not fall in any of the pan areas, pan using simultaneous pan and zoom; otherwise, use the pan-areas algorithm.

This applies only to zooming in. Zooming out simply zooms out from the centre, regardless of the location of the point of gaze.

4.7.5 Mechanism of Gaze-supported Multi-step Zoom

For MZ, the only gaze-supported feature is the explicit action of moving the zoom box in the Pre-zoom state based on the location of the point of gaze while the gaze button is pressed and held. The same gaze filtering applied for OZ is also applied for MZ to reduce the zoom box's jitter caused by the jittery movement of the eye gaze.

4.8 Custom Hardware Interface Implementation

Given that we are only investigating one class of functions, zoom and pan, and that we do not have access to the developer API of the machines observed in hospitals and used for routine diagnostic scans, we design and implement a custom-made ultrasound machine interface that closely matches the interface design of the targeted ultrasound machines, and we stream ultrasound data from another class of ultrasound machines that we have access to in our lab.

The custom hardware interface, shown in Figure 4.20, is created to mitigate having the results be specific to the Ultrasonix Touch, the available machine in our lab with developer-level access. Our hardware interface includes only the ultrasound functions relevant to the evaluation presented in Chapter 6. The controls box is operated with an Arduino Mega that is connected to the main computer. The controls box has five rotary encoders: one is for the zoom level and the rest control the most commonly-used ultrasound image settings: gain, depth, frequency and focus. This interface is used as the manual input for both user studies described in Chapters 5 and 6. Ultrasound images are transferred in real time, with no observable delay, through a TCP/IP connection.

Figure 4.20: The Custom-made Manual Controls Box Used for Both User Studies Described in Chapters 5 and 6.

4.9 Evaluation of the Presented Designs

We test the proposed interaction design and eye gaze-supported features through two user studies: the first, presented in Chapter 5, is a game-based user study that tests the system outside the environment of sonography; the second, presented in Chapter 6, is a clinical user study performed by sonographers.

4.10 Conclusion

We present a state analysis of the traditional manual-based One-step Zoom, Multi-step Zoom and panning in ultrasound machines.
For the gaze-supported design, we follow the assumptions that all zoom functions acquire the magnified image with no improved quality and that panning is enabled for both zoom approaches. We present the interaction state diagram of the final proposed design and of the earlier designs explored, along with the interface layout diagrams and illustrated interactions. Finally, we present the implementation details of the gaze-based features: gaze data filtering with a thresholded moving-average filter, gaze-based simultaneous pan and zoom, and gaze-based panning based on pan areas. We also explain how these gaze-based features are used in One-step and Multi-step Zoom.

Chapter 5

Context-free User Study: Interactive Game

We present in this chapter a game-based user study that tests the system outside the environment of sonography. From this study, we collect preliminary results to form an initial understanding of the gaze-supported interaction, to help us better design the context-focused, clinical user study presented in Chapter 6, and to anticipate the potential advantages and risks of the gaze-supported interaction before testing with end users.

This approach has the benefit of allowing basic interaction testing: tasks involved in sonography have several factors external to the basic zoom interaction that could influence the performance of the user, such as managing the bimanual interaction, probe positioning, image analysis and communication with the patient. Running a study that is solely focused on zoomed target acquisition helps us better understand sonographers' interaction with the system, as it isolates effects due to the gaze-supported interface design from effects due to sonography tasks.

This user study is divided into two separate experiments: the One-step Zoom Experiment (OZE) and the Multi-step Zoom Experiment (MZE). We present the game design and structure, the experiment design, tools and results. We conclude by explaining how the results of this user study relate to the next user study.

5.1 Goal

This chapter presents an exploratory study designed to give an overview of the performance of the gaze-supported features by qualitatively evaluating the cognitive load and manual repetitiveness, and quantitatively evaluating time on task and accuracy. It compares the designed gaze-supported system for ultrasound machines to the manual-based system by keeping the same basic interactions and the same software and hardware interfaces of the zoom interaction designed for ultrasound machines, while changing the target application to an area that is simpler and suitable for a more general set of users.

The work presented in [15] describes the fidelity of simulations in terms of two components: physical fidelity, which is composed of equipment fidelity and environment fidelity, and psychological fidelity, which is composed of task fidelity and functional fidelity. The prototype tested in this user study implements the equipment and functional fidelity dimensions. The equipment used in this user study is the same as, or resembles as closely as possible, the equipment of an ultrasound machine interface. The functional fidelity is the same, as we are testing the same gaze-supported zoom functions that we developed for use by sonographers in ultrasound machines.

This game study is designed to resemble as closely as possible the targets and the environment of the clinical user study we present in Chapter 6, but with a higher focus on the system interaction.
Therefore, the tasks in thisuser study are not typical ultrasound imaging tasks, but resemble the sameinteraction techniques in the following aspects:1. Imaging Task. A sonographer’s task is typically to capture an imageof the target of interest by showing the organ as clearly as possible andas large as possible within the image frame. In this study, the task ofthe participant is to locate the target of interest and zoom into it untilits size takes up at least 80% of the image frame.2. Right-handed/Left-handed Setup. In sonography (in most typesof ultrasound exams), all sonographers are trained to hold the probeand scan with their right hand and use the manual controls with theirleft hand, even if they are left-handed. In our user study, we follow thesame approach by restricting the interaction with the image controlsto the left hand, regardless of the participant being right-handed, left-handed or ambidextrous.3. Targets. Our clinical user study, as described in the next chapter,is designed for sonographers to perform a Common Bile Duct (CBD)scan using our proposed interaction designs. The CBD is typicallyonly one target within the visible range of an ultrasound image. Thetasks in this game user study are designed so that one target at a timeshows up for the participant to zoom into and capture.4. Continuous Motion and Disappearance of Targets. One of thechallenges in sonography is finding the target, fixating the probe atthe correct position and holding it to capture the image at the right895.2. Experiment Designtime as the patient is breathing. As the patient inhales and exhales,the image is distorted by fading in and out, as the organ being imagedenters or exits the ultrasound imaging plane, or by having the targetmove around as the internal organs are in constant motion. This addsa certain level of difficulty to the image zoom and capture task, so wedesign the game targets to have similar properties, to some extent, bydisappearing periodically.5.2 Experiment DesignBased on a few pilot tests of the designed interactions, we find that training auser, with no background in ultrasound interfaces, on both zoom techniquesis too overwhelming for a one-hour long study. We observed that users oftenmixed up the two interactions, which affected the results as users did nothave sufficient training. Therefore, we decide to separate this user studyinto two experiments to test the interactions independently: the One-stepZoom Experiment (OZE) and the Multi-step Zoom (MZE). The structure,design and tasks for both experiments are the same. The subtle differenceswill be explained in the following sub-sections of this chapter.This user study is approved by the Behavioural Research Ethics Board atthe University of British Columbia, under UBC CREB number H15-02969.5.2.1 Game DesignThe designed game is similar in concept to the classic arcade game of “SpaceInvaders”, as the main task of the player is to target and shoot the alieninvaders. Similar art and sound effects were adopted in this game [18]. Thissimilarity provides participants with more connection with the game as mostof them are familiar with it. The following are the main differences betweenthis game and the classic arcade game:1. The player is provided with one alien (or group of aliens) and is re-quired to eliminate them one at a time, instead of gradually eliminatingmultiple targets.2. We adopt the “first-person” view instead of the “third-person” viewimplemented in the original game.3. 
The game interface is re-designed to focus on the elements of the in-teraction being evaluated.905.2. Experiment DesignGame Software InterfaceFigure 5.1 shows the software interface of the game. The “Space Battle Area”is where the targets show up to the participant, such as the green alien atthe top. The “Timer” is a progress bar that fills up as more time is elapsedduring the task. The “Remaining Lives” section displays the number of livesremaining before the game is over. Each life is represented as a blue heart.Each time a task times out, one life is lost. When all lives are lost, the gameends and the screen displays “Game Over”. The “Level Progress” bar fillsup as the player destroys more aliens during the game. Once the progressbar fills up completely, the player moves to the next level in the game. The“Current Level” box displays the level of the gamer in the game, as will beexplained later in this section. The “Eye Gaze Enabled/Disabled” icon showsup when the system is actively using the user’s eye gaze for some controlinput. For OZE, this icon is always displayed during the gaze-supportedsession. For MZE, this icon is displayed only when the user presses andholds the gaze button to move the zoom box. The “Context View” box is adisplay of the overall view, which shows the position of the zoom box afterthe user zooms in, as shown in Figure 5.2. The area for “Trackball Function”is used only with MZE. It shows a different icon based on the function ofthe trackball, whether it is for repositioning, as shown in Figure 5.3, or forresizing, as shown in Figure 5.4.Game Hardware InterfaceFigure 4.20 shows the controls box used in this user study. It is the samecontrols box used later in the user study presented in Chapter 6. Out of allthe hardware interface elements, only five are used in this user study: theclickable zoom knob, toggle button, eye gaze button, trackball and capturebutton. The calibration button is also used, but only by the researcher. ForOZE, the zoom button press, toggle button and eye gaze button are disabled.For MZE, the zoom knob rotation is disabled. Like the ultrasound machineinterface, and as explained in Chapter 4, the zoom knob rotation performsOZ, the zoom knob press loads the zoom box and the toggle button switchesbetween resizing and repositioning the zoom box. The eye gaze button, whenheld while the zoom box is loaded, moves the zoom box around based oneye gaze location. The capture button is the equivalent of “shoot” in thegame.915.2. Experiment DesignFigure 5.1: The Game Software InterfaceFigure 5.2: When zoomed into an alien, the context view shows the full-scaleview with a box around the location of the zoom.925.2. Experiment DesignFigure 5.3: When the trackball is in reposition mode, the “reposition” iconis shown at the bottom right of the screen.Figure 5.4: When the trackball is in resize mode, the “resize” icon is shownat the bottom right of the screen.935.2. Experiment DesignGameplay DesignTargets At each level, aliens show up on screen one at a time. A playeris required to shoot the target (for OZE, one alien, for MZE, a batch ofaliens) before the time runs out. The colour of the aliens when they are firstdisplayed is green, and they can be destroyed once they turn purple. Aliensturn purple only when:1. They are fully within the zoomed view. Thus, if part of the alien isoutside the view, it will turn green again.2. They are at a specific zoom level, filling up 80% of the view. 
Theparticipant is informed to change the zoom level until the alien turnspurple. If it’s too zoomed in or zoomed out, the alien turns greenagain.For OZE, only one alien is the target, and for MZE, a batch of aliensare the target, as shown in Figure 5.5. Hitting the shoot button will destroyone or a batch of aliens altogether, if they are purple.Figure 5.5: OZE Alien Targets (Top) and MZE Vertical and HorizontalAliens Targets (Bottom)As discussed earlier, to simulate difficulty found in sonography tasks,targets appear for 3 seconds and disappear for 1 second continuously in aperiodic pattern. Once the alien(s) turn purple, they stop disappearing and945.2. Experiment Designstart blinking to a lighter purple colour every half a second. Once an alien(or a batch of aliens) appears on some position on screen, it stays in thesame position until destroyed.OZE Targets The targets presented to the participant are randomly-generated with a maximum size of 300 x 300 pixels and a minimum of 100x 100 pixels. The target position is also randomly-generated with a targetdistance from centre ranging from 2 pixels to 382 pixels. The targets are alsorandomly generated with an equal chance for the first alien shape and thesecond alien shape. Figure 5.6 shows the relationship between target sizeand distance from centre for all the generated shapes during the One-stepZoom experiment for all participants. As the shapes get larger, they growcloser to the centre, due to the boundaries of the screen.Figure 5.6: A graph showing the relationship between the generated targets’size and distance from centre during the One-step Zoom ExperimentMZE Targets The targets presented to the participant are randomly-generated with a maximum area of 44,289 pixels2 a minimum area of 24,900pixels2, a maximum width/height of 400 pixels and a minimum width/heightof 83 pixels. The target position is also randomly-generated with a target955.2. Experiment Designdistance from centre ranging from 6 pixels to 350 pixels. The shapes arealso randomly generated with an equal chance for a horizontal or a verticalorder.Scoring and Leveling Up Every time the player destroys a target (analien in OZE or a batch of aliens in MZE), the level progress bar fills upby one point. A total of five consecutive destroyed targets will fill up theprogress bar completely and will take the player up to the next level. If theplayer misses a target, all the progress made in the level will be lost and theplayer must restart the level. If all lives are lost, the player loses the game.5.2.2 Setup and StructureFigures 5.7 shows the setup of the context-free user study discussed in thischapter. The user study took place outside our labs, in a separate roomreserved for running user studies at the University of British Columbia.The setup consists of two monitors connected to the same machine: onefor displaying the game for the user, and the other for the researcher toobserve the game in real time and the eye gaze tracker performance throughGazepoint Analysis [20]. The participant’s monitor has the eye gaze trackermounted at the bottom. The participant is provided with the controls box,headphones for the game sound effects and soundtracks and a seat withan adjustable height. The researcher is seated away from the participant,keeping minimal distraction. Participants were recruited through mailinglists. Participants who emailed back expressing their interest in participatingin the user study were asked to provide their responses to the followingeligibility questions:1. 
Do you wear glasses?2. Do you wear bifocal/gradual glasses (the ones for both far-sighted andshort-sighted vision correction)?3. Do you have any abnormal (whether diagnosed or undiagnosed) eyecondition? (e.g. lazy eye after fatigue)4. Do you have any left arm/hand/fingers injury? Do you have any painassociated with the movement of your left arm/hand/fingers for anyreason?5. Do you have any previous experience with operating ultrasound ma-chines? (operating the machine itself, not being the patient)965.2. Experiment DesignIn addition, a copy of the consent form was attached in the email andthey were asked to choose a time within the period of their best perfor-mance during the day, if possible, as they will be performing tasks thatrequire learning some new computer interaction techniques. If any of theiranswers to questions 2, 3, 4 or 5 were “yes”, then they are disqualified fromparticipating in the user study. If they answer question 1 with “yes”, theyare not disqualified, but they are recommended to wear contact lenses, ifpossible, as there had been some difficulties with eye gaze tracking with afew participants who wear glasses in previous pilot studies.Figure 5.7: The Setup (Left) and Layout (Right) of the Context-free GameUser StudyTable 5.1 contains the structure of the user study. As soon as the par-ticipant arrives, the participant signs a consent form, fills a demographicsform (found in Appendix F), and receives a participation gift card reward.In addition, the eye gaze tracker is tested and calibrated with the gazetracker’s default 5-point calibration before the user study sessions to makesure there are no eye gaze detection issues. In addition, the headphones are975.2. Experiment Designalso tested to make sure the volume level is comfortable for the participant.Eye gaze calibration is routinely tested before every sub-session that re-quires eye gaze input to ensure performance quality. A complete user studyscript is found in Appendix E. Demographics collected on the participantsinclude their age range, being right-, left-handed or ambidextrous, their de-fault setting of the touch pad scroll direction in their laptops, eye/visionconditions, their frequency of use of image editing or design software thatrequire frequent zooming, and their situational level of mental tiredness.The only difference between sessions 1 and 2 is the input modality. Thisuser study is counter-balanced, so half of the participants for each experi-ment had the gaze-supported interaction alternative for the first session andthe manual-based interaction for the second session. The other half of theparticipants had the reverse order. Participants are permitted to interactwith the researcher and ask questions only during the introduction, demoand post-experiment discussion sessions. During the training and recordedsub-sessions, the participant is instructed to completely focus on the game.The researcher ignores any remark or question made by the participant dur-ing the training and recorded sub-sessions, and interferes only in the case ofa technical issue.Training Sub-sessionsDuring the training sub-session, the participant keeps leveling up until theylose the game. There is no winning condition. The time limit per targetfor level 1 is always 20 seconds for OZE and 70 seconds for MZE. Thesevalues were determined through pilot tests. The time limit for each levelthat follows is equal to the average time elapsed for the five consecutivesuccessful trials of the previous level. 
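For concreteness, the adaptive time limit could be derived as in the sketch below. This is a hypothetical helper rather than the study code; only the level-1 starting values follow the figures stated above.

```python
def next_time_limit(current_limit, successful_trial_times, trials_per_level=5):
    """Time limit for the next level: the average elapsed time of the five
    consecutive successful trials that cleared the previous level."""
    if len(successful_trial_times) < trials_per_level:
        return current_limit  # level not cleared yet; keep the current limit
    recent = successful_trial_times[-trials_per_level:]
    return sum(recent) / trials_per_level

# Level-1 starting limits, in seconds, as stated above
LEVEL_1_LIMIT = {"OZE": 20.0, "MZE": 70.0}
```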
Therefore, the time limit for level 2for participant 1 is different from participant 2, as the time limit is dependenton the user’s performance. This level design aims to bring the participantto a level of saturated performance, where a participant cannot improvebeyond their limits. This ensures there is are no carry-over effects fromone session to the next, as the participant would have reached a maximumlevel of performance before the recorded sub-session for each input modality.There are 14 lives per level.It is important to note that the mechanism of the training sub-session isnot revealed to the participant before the user study, as it might cause theparticipant to intentionally slow down in the first few levels to keep winning,and thus go beyond the time limit of the user study and cause fatigue beforethe recorded sub-session. This was experienced during one of the test runs985.2. Experiment DesignTable 5.1: The Game User Study ProcedureSession Sub-session Task(s)Introduction The researcher introduces to the participant the userstudy procedure, the game software and hardware inter-face, winning conditions and tasks.Session 1 Demo The researcher runs at least 3 tasks forthe participant and assists, if needed.Training Tasks The participant completes multiplelevels of the game as detailed in theTraining Sub-session.Recorded Tasks The participant completes one level ofthe game, as detailed in the RecordedSub-Session.Break The participant is given a break for a few minutes.Session 2 The same as session 1, using the other input modalityalternative.Discussion The researcher discusses a few topics with the partici-pant, as detailed in the Post-experiment Discussion Ses-sion.995.2. Experiment Designin the lab before setting up the user study.Recorded Sub-sessionsDuring the recorded sub-session, the participant plays only one level of thegame. The finishing condition for the level is to destroy 14 targets. Contraryto the training sub-session, missing a target and running out of time doesnot cause losing all the progress made in the level, as the aim of the recordedsub-session is to measure the participant’s accuracy and time limit and notto train the participant up to a specific level of performance. However, likethe training sub-session, missing a target and running out of time causes theplayer to lose one life. Similarly, a player has 14 lives during the recordedsub-session level.Post-Experiment Discussion SessionAfter the experiment, the researcher discusses with the participant the gameexperience. First, the participant is given space to provide their general feed-back and questions about the game, if any. Second, the researcher requeststhe participant to list, if possible, three advantages and three disadvantagesof each interaction modality. Finally, the researcher discusses the sources offrustration the participant listed down in the qualitative evaluation form, tounderstand them in depth. 
For MZE, the participant is further requested toexplain the developed strategy for using the eye gaze feature, as it is enabledonly upon the user’s explicit button press and hold.5.2.3 ApparatusThe software and hardware apparatus used in this user study is the same asthe apparatus used in the clinical user study described in Chapter 6, minusthe ultrasound machine and the real-time ultrasound image streaming.The eye gaze tracker we use is the Gazepoint GP3 [20] Tracker, whichhas a visual angle accuracy of 0.5 to 1 degrees, a sampling rate of 60 Hz,and head box dimensions of 25 cm (horizontal movement), 11 cm (verticalmovement) and ±15 cm (depth movement). We use the default 5-pointcalibration for all our experiments. We also use the open standard APIby Gazepoint [19] to implement the communication between the eye gazetracker and the developed software.The eye gaze tracker transfers in real time the participant’s eye gazedata. The custom-made controls box is operated with an Arduino Megathat is connected to the main computer as well. Figure 4.20 shows the1005.3. Analysis Toolscontrols box, which has 5 rotary encoders. One of them is for the zoomlevel.In addition, the trackball is surrounded by the toggle button (used inzoom modes) and the eye gaze button (to enable/disable eye gaze features,as explained in Chapter 4). Finally, the interface also includes a capturebutton to capture the alien.The software interface is fully written in Python using PyQt4 and Pyqt-graph. The game design and graphics were all designed by the researcherusing Adobe Photoshop, in addition to the graphics used from the originalSpace Invaders game [18].The monitor dimensions are 57.2 cm (width) × 41.8 cm and the viewablesize is 23.6 inches. The configured screen resolution for all experiments is1920 × 1080 pixels.5.3 Analysis ToolsQuantitatively, we look at the number of timeouts, the times elapsed pertask and the learning curve per session. Qualitatively, we use evaluationtools, such as the NASA TLX [28].5.3.1 Background on the qualitative tests usedNASA TLXTo evaluate the sources of task load, we adopt the NASA TLX evaluationand modify the description of each source of task load to fit the nature ofthe task performed by the participants. The description of each of theseelements as presented to the participants can be found in Appendix G.5.4 OZE Results5.4.1 DemographicsA total of twelve participants were recruited for the study. Results fromtwo participants were discarded due to eye gaze tracking difficulties. P3had a noticeable case of amblyopia (lazy eye), which interfered with theeye detection by the eye gaze tracker. The reason for P7 is still unknown,like P3, the eye gaze tracker often stopped detecting the participant’s eyesduring the experiment. The rest of the statistical analysis is dependentonly on valid results collected from ten participants: five males and five1015.4. OZE Resultsfemales. None of the participants wore glasses during the study, but onlytwo participants wore contact lenses that did not interfere with eye gazetracking. The age range of the participants is 18 to 45, as shown in Figure5.8a. Two of the participants identify as left-handed, one ambidextrous andthe rest identify as right-handed, as shown in Figure 5.8b. Right before theexperiment, participants were asked to report on their situational level oftiredness. 
The majority of the participants reported as energetic, one fullyenergetic, one tired and two in-between, as shown in Figure 5.8c.As the order of sessions is counter-balanced, Table 5.2 includes the orderof sessions per participant. Order A means the participant completed themanual-based interaction session first then the gaze-supported interactionsession second. Order B is the opposite to order A.In terms of vision or eye conditions, P1 reported having a slight formof strabismus when tired, P2 and P12 had PRK vision correction surgeriesand P5 had a Lasik vision correction surgery.(a) Age Ranges of the OZE partici-pants(b) OZE Participants’ Handedness(c) Tiredness Level of OZE Partici-pantsFigure 5.8: Demographics of OZE Participants1025.4. OZE ResultsTable 5.2: OZE Participants’ Session OrderParticipant # Session Order1 A2 A3 (discarded) A4 A5 A6 B7 (discarded) B8 B9 B10 B11 B12 A5.4.2 Quantitative EvaluationTraining Sub-sessionNumber of Needed Levels First, we look at the number of traininglevels participants needed for each input modality, we find that there isno clear pattern, as some participants require more training for the gaze-supported interface and some require more training for the manual-basedinterface, as shown in Figure 5.9a.(a) Based on Session Name (b) Based on Session NumberFigure 5.9: Number of Training Levels per Participant per Session for OZE1035.4. OZE ResultsCarry-over Effects Thus, we test for contamination effects between ses-sions; that the order by which the input modalities were introduced to theparticipant influenced the number of training levels required. In the first ses-sion, participants are introduced to the game and the manual controls forthe first time, regardless of the input modality. This might pose an effect byincreasing the amount of learning needed. However, by plotting the numberof training levels needed per session, we fail to see any pattern again. This isshown in Figure 5.9b. We notice that some participants needed more train-ing levels for the first session and some others needed more training levelsfor the second session. The average number of training levels needed forboth sessions is 3 levels. By running Mann-Whitney U-test on the numberof training levels needed per participant based on session order, we do notfind a significant difference.(a) Line Plot (b) Box PlotFigure 5.10: Number of Failed Trials (Timeouts) per Training Session Num-ber for OZETo examine further, we count the number of fails and successes of trials,i.e. the number of times the task timed-out. We find that in most cases,session 2 had more or equal amount of failed trials than session 1, whichcould be due to the general fatigue of the participant. However, we cannotgeneralize, as it is not the case for participants 9, 11 and 12, as shownin Figures 5.10a and 5.10b. Moreover, the average number of successfultraining trials is not significantly different between sessions based on a Mann-Whitney U-test, as shown in the plots in Figures 5.11a and 5.11b. This showsthat there is a minimal chance of contamination of data between the sessionsand that participants require re-learning the interface when introduced with1045.4. OZE Resultsa new modality of input.(a) Line Plot (b) Box PlotFigure 5.11: Number of Successful Trials per Training Session Number forOZEAccuracy Figures 5.12a and 5.12b show the total number of successesand fails for all training levels classified by input modality. 
We find that thegaze-supported interface learning session has a higher number of successeson average compared to the manual-based interface learning session. Theseresults show potential for the gaze-supported interface in an increased ac-curacy, as the average successes are higher and the average fails are lowerthan the manual-based interaction. However, we also acknowledge the highstandard deviation in both fails and successes.The Learning Curve As intended by the user study design, we observeda progressing learning curve, which appears to follow the power law of learn-ing, for both input modalities across the training levels. Figures 5.13a and5.13b show the average time limit per level for all participants, per inputmodality (Figure 5.13a) and by session order (Figure 5.13b). The numbermarked at each data point represents the number of participants that ex-perienced this level. We observe that all participants passed level 1 and allparticipants started level 2. Only 8 participants passed level 2 and leveledup to level 3 using the gaze-supported interaction and 9 using the manual-based interaction. Only 2 participants passed level 4 and required a 5thlevel using the gaze-supported interaction. By looking at the same curve,but per session order (Figure 5.13b), we find that one participant from eachgroup leveled up to level 5. In general, the time limit for session 1 is longer1055.4. OZE Results(a) Successful Trials (b) Failed (Timed-out) TrialsFigure 5.12: Trials per Training Session Input Modality for OZEthan the time limit for session 2, however this cannot be generalized as thepattern was reversed at level 4.(a) By Session Input Modality (b) By Session NumberFigure 5.13: Participants’ Learning Curve During OZE: The number markedat each data point represents the number of participants that experiencedthis level.Recorded Sub-sessionTime on Task We find that the average time limit for all participants inthe recorded session is higher for the gaze-supported modality, as shown inFigure 5.14b. Figure 5.14a shows the time limits per participant for each1065.4. OZE Resultsmodality. However, we also observe that the time limits are influenced bythe user group, as participants in group A tended to have longer time ontask during the first session (manual-based interaction), and participants ingroup B tended to have longer time on task during the second session (gaze-supported interaction). This shows that the time allocated for the user studyof one hour, even with an extensive training session per interaction, mightnot be sufficient for conclusive results.(a) Line Plot (b) Box PlotFigure 5.14: Time Limits in the Recorded Sessions of OZEAccuracy Similar to the training sub-session, results from the recordedsub-session suggest that, on average, the number of timeouts for the manual-based interaction is higher than for the gaze-supported interaction. Resultsare shown in Figures 5.15a and 5.15b. These results also support the poten-tial for a more accurate interaction using a gaze-supported interface.5.4.3 Qualitative EvaluationThrough discussions, participants reported generally higher cognitive re-quirements for the gaze-supported interface, as will be explained in the Post-Experiment Discussions sub-section. However, by examining the recordedqualitative Task Load Index results, we find out that participants rated bothinput modalities very similarly. Figure 5.16a shows the overall TLX scorefor each interaction modality. 
We find that the mean of the manual-basedgroup is slightly higher than the gaze-supported group. However, there isalso large variation. Earlier work in eye gaze-supported interaction also1075.4. OZE Results(a) Line Plot (b) Box PlotFigure 5.15: Timeouts in the Recorded Sessions of OZEused TLX as a qualitative evaluation for the sources of workload and foundsimilar results. The work investigated by Klamka et al. [39], shows thatTLX ratings did not have significant difference across the tested interac-tion alternatives. Similarly, Mental Demand and Frustration scored higherwith the multi-modal gaze-supported interaction compared to the traditionalmanual-based interaction.(a) Overall TLX Score (b) TLX Frustration ScoreFigure 5.16: Box Plots of the Qualitative Evaluation Results of OZEFigure 5.17 shows a breakdown of the sources of task load. We findthat both modalities are almost equal in all sources of task load, exceptfor physical demand, where the manual-based system scores higher, and1085.4. OZE Resultsfrustration, where the gaze-supported system scores higher.Figure 5.17: Sources of Task Load for OZE per Input ModalityMann-Whitney’s U-tests were performed on all the sources of task load.Results show that there are no statistically significant differences betweenthe two modalities in any of the sources of task load. However, we discussbelow the small differences observed and how they triangulate with the post-experiment discussion with the participants.Sources of FrustrationAs shown in Figure 5.16b, there is difference between the manual-basedsystem and the gaze-supported system in terms of the frustration level.When asked to report the sources of frustration for each system, 30% ofthe participants reported the time limit as a source of frustration, followedby 22% of the participants reporting trackball motion, followed by 19% ofthe participants reporting the physical interface layout. The rest is shownin Figure 5.18a.On the other hand, when asked about the sources of frustration in thegaze-supported system, 27% of the participants reported the eye trackingaccuracy as a source of frustration. This is followed by 27% for the difficultyof detecting the purple aliens, followed by 18% for the game difficulty dueto time limit. Details are shown in Figure 5.18b.1095.4. OZE Results(a) Manual-based Interaction(b) Gaze-supported InteractionFigure 5.18: The Reported Sources of Frustration during OZE1105.4. OZE Results5.4.4 Post-Experiment DiscussionsDuring the discussion session, participants were asked for their general feed-back on both systems, and the advantages and disadvantages of both. Giventhe challenging nature of the game, participants were also asked to elaboratefurther on their sources of frustration to be able to separate the interaction-related challenges and the gameplay-related challenges.Advantages of the Gaze-supported InteractionReduced Physical Demand Participant 9 reported that using the manual-based interaction requires “twice as much the physical effort” as it requiresrepeatedly zooming and moving the trackball to pan. Participants also per-ceived higher speed with the gaze-supported system as they had “less manualcontrols to worry about”, as reported by P2. 
Some participants did not usethe trackball during the gaze-supported tasks and preferred to zoom in anout with eye gaze-supported input repeatedly as an alternative to panning.A Potential for a Higher Focus on Tasks P10 reported that it is moredifficult to use the manual interface elements when focused on the targetand prefers to use the eye gaze-supported system instead as it does notrequire a shift in attention from the screen to the manual controls as often.Some participants also reported that this reduction in physical interactionpotentially introduces more focus on tasks on screen.Disadvantages of the Gaze-supported InteractionUnfamiliarity Participants described the manual-based system as consis-tent, reliable and overall simpler and easier to learn. This can be attributedto many reasons, including the familiarity of the participants with manual-based systems, in comparison to gaze-supported systems.Higher Mental Demand The gaze-supported system was also describedas more mentally-demanding as participants had to be always aware of theirgaze location when the system is taking in their eye gaze input for control.Another disadvantage of the gaze-supported system is the false perceivedreliability of the system. P2 reported she expected during some tasks thatlooking at the target alone will zoom in and capture the target as the inter-action is more automated than the conventional manual-based interaction.This overly-automated system could create some false expectations by theusers.1115.4. OZE ResultsInaccuracies at Higher Zoom Levels Assume a target’s centre covers5 pixels at full scale, at twice the magnification, the target’s centre will cover10 pixels. This might not be an issue when using a manual system to zoomin, as the zoomed area is being constantly corrected by consecutive manualpanning actions. However, when an eye gaze input is used, even if the useris instructed to look at the centre of the target to zoom in, the centre keepsgetting larger and eventually there is no “real centre” anymore for the userto look at, which causes unwanted shifts in the zoomed area that will causethe target to be placed away from the centre of the magnified image. Thisshows that there are no improved accuracies at higher levels of zoom betweenthe gaze-supported and manual-based interactions.Unintended “Midas Touch” Effect The Midas touch effect in eye gazetracking, as discussed by earlier research [32], refers to the situation when auser using eye gaze as a direct means of input accidentally issues a commandwherever she looks, which is counter-intuitive to the typical function of theeyes. Having the manual input elements used during the zoom task notco-located, posed some challenges. In the manual-based system, the user isfree to look away from the screen as she performs zoom actions, since thegaze is not an input to the location of zoom. The gaze-supported system,on the other hand, requires the user’s gaze to lock onto the screen whilezooming. In the case of gaze-supported interaction, the user does not havethe freedom to look away and zoom in to the target, as the system has noinput to where the target is. 
In case the user looks away while zooming usingthe gaze-supported system, one of the two following scenarios is bound tohappen: either the system zooms into the default centre, which might notcontain the target, or accidentally zooms into a lower portion of the screenthe user has momentarily looked at as she moved her gaze from the screento the controls and continues to zoom, which will also result in zooming intoa false position.Interaction-related ChallengesPhysical Layout Design With the manual controls layout designed likea common ultrasound machine layout, many participants found it very chal-lenging to perform gaming tasks, which is expected, given the nature oftasks in the game user study represent an extreme case of zooming intoand capturing of targets. This extreme case is not encountered in routinesonography with the same amounts of time pressure. Participants, especiallythose with smaller hands, found it difficult to zoom by rotating the knob1125.4. OZE Resultsand moving the trackball at the same time. Thus, zooming and panningsimultaneously was not feasible using the fully manual system.Manual Movements at Higher Zoom Levels One issue reported byparticipants is the difficulty in applying manual movements at higher zoomlevels, as the movement speed does not decrease based on zoom level. I.e.the panning speed is the same for all zoom levels, which makes a single panaction at zoom level of 400% move twice as many pixels as the image at azoom level of 200%. Participant 4, for example, reported the usage of thetrackball only if necessary at low zoom levels. To compensate for this issueat high zoom levels, during the gaze-supported session, he tended to use thezoom knob to zoom in and out to perform “panning with eye gaze”.Gameplay-related ChallengesFlickering Targets Like the reported challenges by sonographers of thedifficulty keeping the target in the view due to its frequent movement anddisappearance, participants in this game-based user study reported difficultykeeping the target (alien) in the filed of view due to its frequent disappear-ance. This challenge was meant to be experienced by the users as part ofthe experiment design.Time pressure The perceived time pressure by the participants is dueto several reasons. The adaptable time limit design in the learning sessionprovided a sense of increasing game difficulty. Although participants wereinstructed to evaluate and report their feedback only on the 1-level recordedsession, many of them perceived the second interaction, whether it is manual-based or gaze-supported, as more time-limited as they falsely-perceived, inmost cases, the learning sub-session being shorter due to their familiaritywith the game and the controls. For instance, participants who performedgaze-supported tasks in their second session reported a higher sense of in-creased speed of target acquisition due to eye gaze tracking compared tothose who performed gaze-supported tasks in their first session. This falsesense of speed is since they have already learned the manual interface andgame challenges in the first session.5.4.5 Discussion of ResultsWe analyze the results from the training levels and the recorded level andrelate those findings to observations of user behaviour and post-experiment1135.4. OZE Resultsdiscussions with participants.We find that there is no clear pattern in the amount of training levelsrequired per participant between the gaze-supported or the manual-basedinterfaces. 
We also find no clear pattern when compared the number ofrequired training levels per input modality order. This shows that thereare no potential carry-over effects between the sessions and that the resultsper input modality are independent. However, some participants reportedfalsely-perceiving less required training for the second input modality, whereit is gaze-supported or manual-based. We acknowledge an observed effecton time on task based on session orders during the recorded sub-sessions.Although we find high standard deviation in terms of accuracy betweenthe two input modalities during both training levels and the recorded level,the average number of timeouts for the gaze-supported input modality islower than the number of timeouts for the manual-based input modality.Participants reported higher frustration overall with the gaze-supportedinteraction, which could be attributed to the additional sources of cognitiveprocessing introduced with this multi-modal interaction. They also reportedhigher physical demand associated with the manual-based interaction.Advantages of the eye gaze-supported interaction, as mentioned by par-ticipants, include a higher sense of focus on the tasks at hand and a loweroverall physical demand. Disadvantages include the general unfamiliaritywith gaze-supported systems, higher mental demand and inaccuracies athigher zoom levels, in addition to required changes in the manual interfaceto better suit a multi-modal interaction.The potential areas where a gaze-supported system might contributein improving are a reduced physical interaction and a higher attention tothe main tasks. However, with it comes a risk for other challenges, suchas higher mental workload, and higher frustration due to multiple reasonsincluding lack of familiarity with the multi-modal interaction. We examinethese effects later within a clinical context in the user study presented inChapter 6.Task Load is Different for Each Interaction Participants reportedexperiencing task load in both systems. When asked to elaborate further,it was found that the type of task load is different for each system. For thegaze-supported system, the task load is due to high mental demand. Whilefor the manual-based system, it was due to high demand of physical inputscoordination. Although gaze-supported systems alleviate the need for man-ual inputs coordination, as it requires less manual inputs, they introduce an1145.5. MZE Resultsincreased mental demand requiring complete focus on the target on screen.Eye Gaze Input Integration Requires a Change in the PhysicalLayout As evident by some of the challenges faced, to efficiently integratean eye gaze tracking system into an existing manual-based interface, changesin the physical interface should be made. Physical interface elements usedduring an activity that requires eye gaze input should be designed to be co-located, which minimizes the need for the user to shift her attention from thescreen to the manual inputs and thus minimize “unintended Midas Touch”effects.5.5 MZE Results5.5.1 DemographicsA total of nine participants were recruited for the study. Results from twoparticipants were discarded: P6 due to eye gaze tracking difficulty and P9due to an inconsistency in providing instructions to the participant with therest of the sample, which directly affected the performance. The rest of thestatistical analysis is dependent only on valid results collected from sevenparticipants: four males and three females. 
Four of the participants woreprescription glasses during the study and the rest had no vision correction.The age range of the participants is 18 to 35. All the participants identify asright-handed. Right before the experiment, participants were asked to reporton their situational level of tiredness. Most of the participants reported asenergetic, one fully energetic, and one tired. Since the experiment orderis counter-balanced, Table 5.3 shows the order for each participant, whereorder A is for manual-based session followed by the gaze-supported session,and order B is the opposite.We did not proceed to recruit more participants for this experiment aswe observe a pattern of carry-over effects between the two sessions per par-ticipant, as will be explained throughout this section. Therefore, we use theresults from this experiment to report on the general observed user behaviourwith the gaze-supported features with the carry-over effects limitations ofthis study.1155.5. MZE ResultsTable 5.3: MZE Participants’ Session OrderParticipant # Session Order1 A2 B3 A4 B5 A6 (discarded) B7 B8 A9 (discarded) A5.5.2 Quantitative EvaluationTraining Sub-sessionLike the results from OZE, we find that there is no clear pattern in termsof number of training levels between input modalities, as some participantsrequire more training for the gaze-supported interface and some require moretraining for the manual-based interface, as shown in Figure 5.19a.(a) Line Graph (b) Box PlotFigure 5.19: Number of Training Levels per Session for MZEThe box plot in Figure 5.19b shows the number of training levels neededfor participants per modality. We see that participants in general requiredmore levels for the gaze-supported session compared to the manual-basedsession, although the difference is with a high standard deviation in both1165.5. MZE Resultsmodalities.We observe during the user study that participants’ pattern of using theeye gaze input changed over the levels. Figure 5.20a shows the amount ofusage of the gaze feature during the gaze-supported learning session. We findthat, except for participants 1 and 4, participants generally reduced theiruse of the gaze feature as they level up. Figure 5.20b shows the averagenumber of times the gaze feature was used per level for all participants.The number at each data point represents the number of averaged datapoints (participants) at that level. This shows that the gaze feature is foundpotentially inefficient with increased time pressure.(a) Per Participant (b) Averaged per Level: The number at eachdata point represents the number of averageddata points (the number of times the gazefeature was used) at that level.Figure 5.20: Number of Times the Gaze Feature was Used During the Gaze-supported Session of MZEInterestingly, we find that the learning curve is almost identical for bothinput modalities, as shown in Figure 5.21a, which could be since in manytasks during the gaze-supported learning session, participants did not usethe gaze feature at all. On the other hand, we find that the learning curveis in fact different based on the order of the session. Figure 5.21b showsthe learning curve by session order. We find that participants, on average,had shorter time limits in the second session compared to the first session,regardless of the input modality. This shows a possibility of carry-overlearning effects from one session to the next.Figures 5.22a and 5.22b show the learning curves for each individualparticipant per session.1175.5. 
MZE Results(a) By Input Modality (b) By Session OrderFigure 5.21: Average Training Sessions’ Learning Curve per Participant forMZE(a) Gaze-supported Training Session (b) Manual-based Training SessionFigure 5.22: Average Training Sessions’ Learning Curve for MZE1185.5. MZE ResultsRecorded Sub-session(a) Line Graph (b) Box PlotFigure 5.23: Time Limits in Recorded Sessions for MZE(a) Trials With/Without Gaze Input: Par-ticipants tend to use the gaze feature fortargets at least 100 pixels away from thecentre.(b) Failed/Successful Trials with Gaze In-putFigure 5.24: Trials Based on Target Size and Distance From Centre for MZELike the analysis of One-step Zoom, we look at the time limits for eachinput modality session. Figure 5.23a shows the time limit per participantper session. Apart from participants 1 and 5, we find that participantshad longer time limits for the gaze-supported interface compared to themanual-based interface with much higher deviations. This could be due tothe complexity of the gaze-supported interface due to the additional gaze1195.5. MZE Resultsinput, which, as described in the participants’ post-experiment discussion,takes time to switch to and back to manual. Figure 5.23b shows a box plotfor time limits per input modality.Since participants did not consistently use the gaze feature for all tasksin the gaze-supported recorded sub-session, we base the rest of the analysison tasks which users used the gaze feature and tasks that didn’t. We findthat the percentage of times participants decided to use the gaze featureduring the gaze-supported session is only 15.91%. Out of these trials, whichused gaze, 57.14% of them succeeded and 42.86% of them failed.Table 5.4 shows the mean time limit for tasks that used gaze and tasksthat did not. Both are during the gaze-supported sessions only.Table 5.4: Mean Time Limit for Gaze-supported Recorded Sessions for MZEUsed Gaze Did Not Use GazeMean Time Limit (s) 8.06 7.13Standard Error 0.51 0.23In addition, Table 5.5 shows in detail the percentage of use of the gazefeature per participant, and how many of these tasks timed out.Table 5.5: Use of Gaze Feature During Recorded Sessions of MZEParticipant # Used Gaze (%) Timed Out (%)1 65.22 46.672 13.33 0.003 17.65 33.334 0.00 X5 0.00 X7 0.00 X8 3.85 100.00We find a highly inconsistent behaviour among participants. For in-stance, three of them decided not to use the gaze feature at all. One ofthem used the gaze feature for most of the tasks (65%), another used thegaze feature and had no timeouts and another rarely used the gaze featureand timed out every time.By inspecting the characteristics of targets which users decided to usethe gaze feature for, as shown in Figure 5.24a, we find that participantstended to use the gaze feature for targets that are at least 100 pixels awayfrom the centre. To inspect further, we plot in Figure 5.24b only the tasks1205.5. MZE Resultsin which users used the gaze feature, and highlight the failed trials. 
Wefind that most of the failed attempts are the smaller targets, which couldbe due to the fact that smaller targets need more time for zoom box sizeadjustment.5.5.3 Qualitative EvaluationDue to the small sample size, we report average results in tables and charts.Table 5.6 shows the total TLX scores for both input modalities.Table 5.6: TLX Scores for Each Input ModalityManual-based Gaze-supportedAverage SE Average SEMental Demand 12.50 6.94 18.13 12.92PhysicalDemand17.75 5.62 25.75 5.22TemporalDemand54.25 11.47 58.00 11.50OverallPerformance49.13 9.50 46.88 10.14Effort 40.38 7.67 51.25 5.92FrustrationLevel14.50 10.19 7.63 3.21Overall Score 12.57 0.96 13.84 0.55We find that the overall cognitive load score for the gaze-supported in-terface is higher than the manual-based interface. By looking at the sourcesof cognitive load, also graphically illustrated in Figure 5.25, we find thatthe temporal demand and overall performance contributed most in makingthe task cognitively demanding for both input modalities. We also noticethat the amount of effort participants had to put into the game is higher forthe gaze-supported interface, in comparison to the manual-based interface.Contrary to expectations, the amount of physical demand is rated higher forthe gaze-supported interface compared to the manual-based interface. Thiscould be since an additional step must be taken to enable eye gaze.We also find that the manual-based system was more frustrating, onaverage than the gaze-supported system, with high variability. Consideringthe sources of frustration, as shown in Figure 5.26, we find that game design-related difficulties caused most of the participants’ frustration, such as the1215.5. MZE ResultsFigure 5.25: Sources of Task Load for MZEdetection of the purple aliens and the time limit. The trackball motionand the physical interface layout also contributed to 26% of the sources offrustration. Participants listed other sources of frustration (20%) as “tryingto get the zoom box positioned around the aliens accurately” and “usingonly the left hand”.For the gaze-supported system, we find that the lack of accuracy of theeye gaze input forms the highest source of frustration (26%), followed bygame design-related sources. Participants listed other reasons as well, suchas “toggling between resize and reposition”, “trying to get the zoom boxpositioned around the aliens accurately” and “using only the left hand”.5.5.4 Post-Experiment DiscussionsDuring the discussion session, participants were asked for their general feed-back on both systems, and the advantages and disadvantages of both. Giventhe challenging nature of the game, participants were also asked to elaboratefurther on their sources of frustration to be able to separate the interaction-related challenges from the gameplay-related challenges. In addition, par-ticipants were asked to explain their strategy of using the eye gaze input,since, unlike the One-step Zoom experiment, enabling the eye gaze inputfeature is optional during the gaze-supported interface session.1225.5. MZE Results(a) Manual-based Interaction(b) Gaze-supported InteractionFigure 5.26: The Reported Sources of Frustration During MZE1235.5. MZE ResultsAdvantages of the Eye Gaze-supported InteractionThe only advantages, mentioned by 3 out of 7 participants, is the potentialfor this eye gaze-supported interface to reduce repetitive stress injuries, asthey experienced a lot of repetitiveness with the manual-based interface. 
Infact, P2 mentioned that he decided to enable eye gaze whenever the gamegets frustrating as it provides a little “physical repetitiveness break” from thetypical manual control. P3 mentioned that “scrolling is cumbersome: youwill have to do two or three scrolls to get somewhere, where you can get therewith a button press with eye gaze.” However, this cannot be generalized toall targets, but only to those targets faraway from the centre of the screen.A few other participants mentioned that, had this interaction been appliedto larger screens, it would have potentially saved time.One more advantage of using eye gaze, in this gaming context, is the“fun” factor. Given the added challenge with eye gaze input, participantsfound it interesting to try out.Strategies for Using Eye Gaze InputWhen asked about the strategies participants used to win the game andwhen did they decide to use eye gaze as an input, the following were thecategories of responses:• “I used eye gaze as an input only when the aliens are outside the zoombox.”• “I used eye gaze as an input only when the aliens are at the edge ofthe screen.”• “I tried to change the size of the zoom box manually and move the boxwith my eye gaze simultaneously.”• “I used eye gaze as an input only because it is fun and novel.”• “I used eye gaze as an input only once or twice before I found out itdoes not really help.”Disadvantages of the Eye Gaze-supported InteractionWe observed that in this user study the disadvantages of using eye gaze inputoutweighed the advantages. Since this design requires the user to press anextra button to move the zoom box, it adds an extra step, which causes an1245.5. MZE Resultsextra context switch. The mental effort required to perform this switch canmask the benefits of a faster interaction with eye gaze, as the switch itselftakes time, as noted by P2.Once the user has “ideally” used the eye gaze input to perform coarsemovements, the user must let go and perform an additional context switchback to the trackball to perform fine movements. This doubles the amountof switching discussed earlier, rendering the eye gaze input unsuitable fortime-sensitive tasks.One of the participants also noticed the little delay at the beginning ofenabling the eye gaze input, which is caused due to the initiation of the eyegaze moving average filter, as it requires processing a few fixation pointsbefore generating the first filtered fixation point, as explained in Chapter 4.Like the situation of sonographers, where it is difficult to change theirbehaviour with the interface they have trained on for years by introducinga new input modality, it is roughly the same case for gamers. One of theparticipants, who identified as a regular gamer, brought up an importantpoint regarding motor control: “For gamers, introducing eye gaze for suchcontrol takes you out of your comfort zone. I expect gamers will typicallyresort to switching to the manual controls because that is what they havebeen trained on for years to win games” - P5.5.5.5 Discussion of ResultsAlthough this experiment took longer to finish (MZE took 70 minutes onaverage per participant, while OZE took 50 minutes on average per par-ticipant), results from this Multi-step Zoom user study are not reliable todetermine the effectiveness, in terms of time on task and accuracy, of thetested interfaces. 
A longer, or perhaps a longitudinal, study would be more effective given the complexity of the gaze-supported multi-step zoom interaction.

Despite the carry-over effect between sessions, the only advantage found in this system is the reduced physical repetitiveness reported by some participants. However, this reduction is merely replaced by another form of physical input that takes longer to switch to and demands more cognitive resources for coordination. Additionally, we find that, in general, participants prefer not to use the gaze feature to win the game, given the added time it takes to activate the gaze feature, in addition to their unfamiliarity with the interaction.

As users gradually decreased their use of the gaze feature during the training sessions, and then used the gaze feature only 15% of the time on average during the recorded session, we do not expect this design to show promise as an improved interaction. Given these results, we decide not to put emphasis on the MZ interaction in the upcoming clinical user study design, as it has not been sufficiently tested in a context-free environment.

5.6 Conclusion

We design a context-free user study to test the proposed gaze-supported interactions (OZ and MZ) independent of external sonography-related effects. The goal of this user study is to collect preliminary results to help us understand the strengths and weaknesses of the proposed interaction design in terms of time on task, accuracy and user behaviour.

Based on these results, we can anticipate the user behaviour for the upcoming clinical user study targeted at sonographers: if the results from this user study turn out to be in favour of eye gaze-supported input, then there is potential for gaze-supported interaction within a more cognitively demanding context, such as sonography.

We design a multi-modal gaze-supported game, where participants are required to zoom into and destroy targets at particular zoom levels and under particular criteria. We use the same hardware interface layout as ultrasound machines and set similar restrictions regarding left-handed-only interaction. Each participant performs a similar set of tasks over two counter-balanced sessions: gaze-supported interaction and manual-based interaction. For each session, participants are required to perform a training task to reach an optimal level of performance before performing the main tasks.

Results from both experiments, OZE and MZE, show potential for reduced physical repetitiveness, as some manual functions are replaced with a gaze input. However, this reduction comes at the cost of an increased mental demand for both zoom functions and an increased context switch for MZ. In addition, some participants of OZE reported a higher focus on the main task when using the gaze-supported alternative, as they manage fewer manual controls. Quantitatively, both experiments showed high variations in terms of time on task and accuracy. Therefore, we do not have conclusive evidence regarding these metrics.

Results from this context-free user study also revealed some interesting gaze-supported interaction challenges, such as inaccuracies at high zoom levels, an "unintended Midas touch effect" and, for MZ, a reduction in the usage of the gaze feature over time. Thus, we expect to encounter these challenges during the context-focused user study.
Nevertheless, we expectthat testing the interaction with users who are already familiar with themanual interface will eliminate some of the frustration sources faced by theparticipants of the game user study regarding the unfamiliarity with thebase system and cumbersome trackball usage. We also decide to design thecontext-focused user study with less emphasis on MZ as we were unableto sufficiently evaluate it due to the observed carry-over effects during thetraining sessions.127Chapter 6Context-focused User Study:Clinical Experiment6.1 Goal and HypothesesThe goal of this study is to test the proposed interaction within the intendedcontext of sonography with end users. In this study, we test the time ontask and other eye gaze metrics and relate these quantitative measures topost-experiment discussions regarding the presented system to assess theinteraction. We test the following hypotheses:• H1: There is a difference in terms of time on task between the manual-based and the gaze-supported interactions.• H2: There is a difference in terms of cognitive load between themanual-based and the gaze-supported interactions.6.2 Background on Study TasksTo design this user study, we first design the tasks to be performed bythe participants. The tasks are selected to capture the capabilities of theproposed gaze-supported interaction presented in Chapter 4 within a clinicalcontext. We design this user study with two types of tasks: a realisticultrasound scan of a healthy volunteer and a number of ultrasound scans ofcontrolled targets.Replicating a realistic sonography scenario requires selecting a specificultrasound scan to be performed by the sonographer participants. Afterdiscussions with two expert sonographers, an ultrasound scan was selectedto be part of this user study based on the following criteria:• Does not require ill patients, so a healthy volunteer will be suitablefor the ultrasound scan.1286.2. Background on Study Tasks• The ultrasound scan is common knowledge to all sonographers whoperform general ultrasound scans.• Does not require using a large variety of ultrasound functions, so thatthe main focus will not diverge from the zoom function.• The order and number of steps needed to perform this scan does notgreatly vary from one sonographer to another.• Requires capturing a zoomed target.The selected ultrasound exam is the abdominal routine scans of thecommon bile duct (CBD) and the common hepatic duct (CHD). This typeof exam is performed when a patient requires a general abdominal scan.Figure 6.1 shows the location of the CBD and CHD and the surroundingstructure. The CHD is located before the common hepatic artery and theCBD is located after. In sonography, the goal of this ultrasound exam istypically to measure the size of the CBD and the CHD. The measurementis taken at the largest diameter visible for both ducts. The default size ofthe CBD and CHD varies based on the gender, age and medical conditionof the patient.The zoom function in this type of exam is important since it is requiredto perform accurate measurements. The common bile and hepatic ducts arevery small structures (3.3 ± 1.1 mm to 6.8 ± 1.1 mm); therefore, measure-ments should be done at the largest scale possible of the image.In addition to using the zoom feature, the sonographer frequently changesthroughout the exam the depth of field, the gain and the focal zone to obtainthe best image. Sometimes the frequency is also changed based on the pa-tient’s BMI (Body-Mass Index). 
The number of focal zones typically used in this type of exam is only one, since the scanned structure is small, horizontal and does not stretch vertically across the image. An informal ultrasound exam was observed and video-recorded to identify the exact steps involved in scanning the CBD and CHD, as detailed in Appendix H.

There is no standard or maximum number of images of the CBD and CHD to be taken and sent to the physician. The sonographer continues to take images whenever a better view comes up. However, a minimum of 2 images should be taken to show the best measurements.

In addition to testing the interaction through an ultrasound exam performed with a healthy volunteer, we also provide the participant with a medical multi-purpose ultrasound phantom [14]: a specially-designed object used in medical imaging to test the performance of devices in a more controlled condition instead of using real tissue that is subject to change.

Figure 6.1: An ultrasound image showing the location of the CBD and CHD.

Using a phantom also simplifies the task by eliminating the overhead communication between the sonographer and the patient, including instructions to change position or hold breathing to obtain better images. Additionally, phantoms enable testing a large pre-defined set of shapes to better understand and accurately measure the performance of the proposed interactions.

We evaluate two different types of phantom target shapes in order to better test the two zoom functions and their different capabilities. Figure 6.4a shows examples of regular target shapes that only require zooming and panning, and Figure 6.4b shows examples of irregular target shapes that require zooming, panning and resizing, since a shape's width and height are unequal. In light of the initial results collected from the context-free study, we expect more meaningful results from the OZ interaction; therefore, we provide the participants with more regular target shapes to be acquired with OZ in both training and phantom sessions.

6.3 Experiment

We aim to minimize the differences between the real ultrasound exam setting and the experimental setting, as these differences may cause unwanted learning effects that will mask the real effects of our system. To efficiently test the proposed system, part of the experiment is designed to match a typical diagnostic ultrasound exam setting and to match, as closely as possible, the typical hardware and software interfaces and room setup of an ultrasound exam, as these factors influence the behaviour of the sonographer.

Most of the experiment design decisions were influenced by the implementation and results of the first iteration of this user-centred design. Details on the structure, procedure, results and lessons learned from the first iteration can be found in Appendix D.

Given that the zoom function is the focus of our test, it would be impractical to run the user study with a full clinical ultrasound exam requiring the sonographer to go through the full steps of a common bile duct exam with measurements.
For simplicity, only steps 4 through 10 from AppendixH are applicable in this user study, as requiring the rest of the measure-ment steps will take away from the focus of this experiment of testing thegaze-supported zoom functions proposed.As mentioned earlier in Chapter 4, using a specific brand of ultrasoundmachines in this user study will render the results to be machine-specific, es-pecially that the type of machine available in our labs with an open software1316.3. Experimentinterface for data acquisition is not used in typical diagnostic sonographysettings. Thus, we use the custom hardware interface we created that onlyincludes the relevant ultrasound functions required for the selected portionof the targeted ultrasound exam: zoom, depth, gain, frequency and focus,in addition to multipurpose buttons around the trackball and the imagecapture button.This user study is approved by the Behavioural Research Ethics Board atthe University of British Columbia, under UBC CREB number H15-02969.6.3.1 Setup And StructureFigure 6.2: The Clinical User Study Room Setup: the setup in the labclosely matches the setup of an ultrasound room in a hospital.Figure 6.2 shows the setup of the clinical user study. The user study tookplace at the Robotics and Control Laboratory scan room at the Universityof British Columbia. The setup consists of the ultrasound machine thatstreams ultrasound images to an external computer connected to a monitorshowing the live ultrasound image. An eye gaze tracker (GP3 Eye Tracker,Gazepoint Research Inc. Vancouver, BC) is mounted at the bottom of themonitor and a custom-made controls box is placed at a lower level in frontof the monitor. To the right of the sonographer is the patient bed withthe phantom placed over it at a reachable distance to the sonographer.Table 6.1 contains the list of phantom targets scanned by participants. As1326.3. ExperimentTable 6.1: Clinical User Study Phantom Targets and Instructed Techniquesof InteractionSession Target # Instructed Technique of InteractionTraining 1 to 8 One-step Zoom9 to 12 Multi-step Zoom13 to 16 One-step ZoomRecorded 1 to 6 One-step Zoom7 to 10 Multi-step Zoomrecommended by earlier eye gaze studies [23], eye gaze tracking status isbeing monitored throughout the whole study for each participant, as eyegaze tracking can be lost during the study for many reasons and requirere-setup and re-calibration. The researcher, monitoring the eye gaze of theparticipant in real time on a separate monitor, is located outside the fieldof view of the participant to not cause any distractions.Table 6.2 contains the procedure of the clinical user study. Before run-ning the user study, the room lighting was dimmed to closely resemble theamount of lighting in an ultrasound room. In addition, the settings of theultrasound image are reset to the default values, to avoid carry-over effectsfrom the settings set by the previous sonographer.As soon as the sonographer arrives, the sonographer is introduced to theproject and is requested to sign a consent form, fill out a demographics formand provided with the participation reward. 
In addition, the eye gaze trackeris tested and calibrated with the gaze tracker’s default 5-point calibrationbefore the user study sessions to make sure there are no eye gaze detectionissues, especially in the case of participants wearing highly reflective glasses,as evident in earlier eye gaze tracking studies [23] suggesting the recruitmentof 10%-20% more participants than is needed as “some eye tracking systemsmay not calibrate well to certain eyes or eyeglasses prescriptions”. In Addi-tion, “while most eye trackers claim to work with eye glasses,” the work in[42] reports, “we have observed a noticeable deterioration in tracking abilitywhen lenses are extra thick or reflective”.Previous work on user-centred ultrasound machine interface design [5]required an analysis of user profiles prior to running the studies, which in-cluded collecting information on users, such as the sonographer’s experiencein ultrasound scanning, i.e. what type of ultrasound scans she performs,1336.3. Experimentexperience level and usage patterns of the specific ultrasound function ofinterest, in this case, the zoom function. We follow a similar approach bycollecting this information through a demographics form, found in AppendixJ, and a follow-up discussion with each participant sonographer.Table 6.2: The Clinical User Study ProcedureSession Sub-session Task(s)Intro Project Introduction The researcher introduces theproject to the sonographerPhantom Exploration The researcher provides the sono-grapher with the phantom andrequires locating a few randomtargets.Gaze-based Interac-tions ExplorationThe researcher demos the twomanual-based zoom interactions(if needed), followed by per-forming eye gaze calibration ofthe sonographer, followed bya demo of the gaze-supportedzoom interactions.Training Perform training tasksusing gaze-supportedinteractions onlyThe sonographer locates, zoomsinto and captures the trainingtargets shown in Figure 6.4 andas instructed in Table 6.1.Phantom Gaze-supported The sonographer locates, zoomsinto and captures the recordedtargets shown in Figure 6.4 usingonly the gaze-supported interac-tions, as instructed in Table 6.1.Manual-based The sonographer locates, zoomsinto and captures the recordedtargets shown in Figure 6.4 usingonly the manual-based interac-tions, as instructed in Table 6.1.Patient Locate the CBD The sonographer explores the pa-tient’s abdomen and locates thecommon bile duct.1346.3. ExperimentSession Sub-session Task(s)Gaze-supported The sonographer uses one of thegaze-supported interaction tech-niques of her choice (or a mixof both techniques) to zoom intoand capture 5 consecutive imagesof the common bile duct of thepatient.Manual-based The sonographer uses one ofthe manual-supported interac-tion techniques of her choice (or amix of both techniques) to zoominto and capture 5 consecutiveimages of the common bile ductof the patient.Discussion The researcher discusses the usability of the presentedsystem with the sonographer.Instructions on Image AcquisitionSonographers are instructed to acquire and capture the zoomed targets fillingup most of the image and as centred as possible. 
They are also instructed tominimize their interaction with ultrasound image settings other than zoomduring the recorded phantom and patient sessions.To help the participants learn the unfamiliar gaze-supported interactionand uncover its capabilities, participants were instructed on the optimaltechniques to use the gaze-supported features:• For the Multi-step Zoom technique, it is best to use the eye gazefeature only when moving the zoom box for long distances across theimage.• Eye gaze is very jittery in small areas, therefore, it is best to use thetrackball for fine motions of the zoom box around the target.Patient SessionFor consistency of the patient target across all trials and participants, thelead researcher volunteered to be scanned by all sonographers. During the1356.3. Experiment(a) Set 1 (b) Set 2(c) Set 3 (d) Set 4Figure 6.3: Phantom Training Targets1366.3. Experiment(a) Set 1 (b) Set 2(c) Set 3Figure 6.4: Phantom Recorded Targets1376.3. Experimentpatient session, volunteers from the RCL lab with a background in eye gazetracking interaction monitored the eye gaze tracker and conducted the study.Discussion SessionSonographers were instructed to provide feedback only at the end of theexperiment, during the discussion session, to keep the overall time of theuser study under one-hour long.The following are the list questions discussed with each participant sono-grapher:1. What type of ultrasound machine are you familiar with? Did this hard-ware/software layout and functions closely resemble the ultrasoundmachines you typically use in your ultrasound scans?2. What type of zoom do you typically use in your scans? And when?And how frequently?3. What kind of shapes do you normally need to zoom into?4. Are they regular shapes with defined centres?5. How would you describe your own perceived eye gaze behaviour whenzooming into targets in an ultrasound image? Do you mainly focus onthe target or do you keep peripherally scanning the rest of the image?6. What advantages and disadvantages did you find using the proposedeye gaze-supported zoom system?Appendix I includes the full script used by the researcher for the wholeuser study.6.3.2 ApparatusThe apparatus used in this user study consists of the same basic tools andsetup as the context-free game-based user study presented in Chapter 5.Figure 6.5 shows the hardware setup for this user study. An ultrasoundmachine, Ultrasonix Touch, transfers the ultrasound images in real timethrough a TCP/IP connection and communicates the image parametersto the main computer with no observable delays. An eye gaze tracker,Gazepoint GP3, transfers in real time the participant’s eye gaze data. Thecustom-made controls box is operated with an Arduino Mega that is con-nected to the main computer as well. Figure 4.20 shows the controls box,1386.4. Analysis ToolsFigure 6.5: The Clinical User Study Hardware Architecturewhich has 5 rotary encoders capable of controlling the ultrasound image’sdepth level, ultrasound beam frequency, b-mode gain level, the horizontalposition of one line of focus, and the zoom level. In addition, the trackballis surrounded by the toggle button (used in zoom modes) and the eye gazebutton (to enable/disable eye gaze features, as explained in Chapter 4). Fi-nally, the interface also includes a capture button to capture and save thestreamed ultrasound image. One missing feature, which was omitted due totechnical difficulties during the user study, is the freeze button to freeze theultrasound image before capturing it. 
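Because the GP3 streams point-of-gaze estimates that are inherently jittery, the prototype smooths them with the eye gaze moving average filter described in Chapter 4 before they are used as a zoom or pan input. The exact implementation is not reproduced here; the following is a minimal Python sketch of such a filter, in which the window size and the normalized-coordinate sample format are illustrative assumptions. It also shows why a freshly enabled gaze input exhibits a short start-up delay: no filtered point is produced until the window has filled.

    from collections import deque

    class GazeMovingAverageFilter:
        """Simple moving-average smoother for streamed point-of-gaze samples."""

        def __init__(self, window_size=5):   # window size chosen for illustration
            self.window = deque(maxlen=window_size)

        def update(self, x, y):
            """Add one (x, y) gaze sample in normalized screen coordinates.

            Returns the smoothed point, or None until the window has filled,
            which is the source of the brief delay users notice right after
            enabling the gaze input.
            """
            self.window.append((x, y))
            if len(self.window) < self.window.maxlen:
                return None
            xs, ys = zip(*self.window)
            return sum(xs) / len(xs), sum(ys) / len(ys)

    # Example: the first window_size - 1 samples yield None, then smoothing begins.
    if __name__ == "__main__":
        filt = GazeMovingAverageFilter(window_size=5)
        for sample in [(0.50, 0.48), (0.52, 0.50), (0.49, 0.51),
                       (0.51, 0.49), (0.50, 0.50), (0.53, 0.52)]:
            print(filt.update(*sample))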
The software interface is fully written in Python using PyQt4 and Pyqtgraph. The communication between the computer and the ultrasound machine is facilitated through a Python wrapper [59], developed at the Robotics and Control Laboratory at the University of British Columbia, for Ulterius, a software tool for controlling Ultrasonix ultrasound systems remotely. Eye gaze data is communicated from the eye gaze tracker through Gazepoint's Open Gaze API.

6.4 Analysis Tools

A total of 30 trials per participant were collected and analyzed: 20 phantom trials (10 gaze + 10 manual) + 10 patient trials (5 gaze + 5 manual). The trials were recorded using the Gazepoint Analysis software and manually segmented and transcribed per task. Eye gaze fixation data was produced by Gazepoint Analysis and analyzed with the Eye Movement Data Analysis Toolkit (EMDAT) developed at the University of British Columbia [23].

For each trial, the following dependent variables were recorded and analyzed: time on task and other eye gaze metrics generated by EMDAT, namely eye movement velocity and fixation rate. In addition, the mean, standard deviation and sum of the following eye gaze metrics were collected: fixation duration and path distance. Descriptions of each of these eye gaze metrics can be found in [21] and [60]. Each task starts from the moment the sonographer has located the target and fixated on it and ends the moment the sonographer captures the target.

As suggested and followed by previous studies in the field of eye gaze tracking that collected and analyzed similar eye gaze metrics [60], we use Mixed Models for the analysis of variance. Similarly, we apply a Bonferroni correction with m = 4, according to the number of families of dependent variables. This is done to correct for family-wise and multiple-comparisons errors, since there are many dependent variables.

6.5 Results

6.5.1 Demographics

Figure 6.6: Participant Sonographers' Demographics: (a) Age Ranges; (b) Tiredness Levels

A total of ten participants were recruited for the clinical user study. Five of them are professional sonographers and five are student sonographers in the first or second year of their degrees, with experience operating ultrasound machines and an introductory background in sonography.

Out of the five professional sonographers, four are female, of varying levels of experience and age. P1 and P5 have 2-5 years of experience, P4 has 6-10 years of experience, P3 has 20-30 years of experience and P5 has over 30 years of experience. Their ages ranged from 26 to 65 years. All recruited professional sonographers routinely perform both general and obstetric/gynaecologic ultrasound exams. All participants, except P5, additionally perform vascular ultrasound exams. P2 and P3 additionally perform MSK, P2 additionally performs cardiac scans, and P4 performs a wider range of ultrasound scans including breast and thyroid imaging.

All student sonographers recruited are female and in the 18 to 35 age group. All student sonographers have experience performing both general and cardiac ultrasound exams.

All recruited participants are right-handed with no known abnormal eye or vision conditions. On a tiredness scale of 1 to 5, where 1 is exhausted and 5 is completely rested, nearly all participants selected their tiredness level at 4.
None of the participants have used an eye tracker before, except for P3, aprofessional sonographer, who participated in the previous study conductedin the first iteration of this project. Half of the participants performedthe user study wearing glasses, one of the participants was wearing contactlenses, and four had no vision correction. Figure 6.6 visually illustrates therecruited sample.Consistency of The Presented Prototype With TheManual-based Design of Ultrasound MachinesAll recruited participants reported their familiarity with the GE ultrasoundmachines. In addition, GE machines are used in sonography schools fortraining students. Therefore, we can eliminate learning effects due to theunfamiliarity of the participants with the basic interaction of pan and zoomand attribute learning to the added gaze feature.However, subtle exceptions can be made for participants 1 and 5. P1noted that the default vertical resize function of the trackball is reverse tohow it is implemented in a GE machine and it required her some adapta-tion during the training tasks of the user study to map the direction of thetrackball to the reverse mental model of the typical resize function in GEmachine. This issue was immediately fixed and the vertical resize directionwas consistent with the design of GE machines for the rest of the recruitedparticipants in the study. However, it is important to note that the resize di-rection mapping can be customized for some models of ultrasound machines,like how the scroll direction in touch-pads can be customized.1416.5. ResultsOn the other hand, P5 required learning the two base zoom functions,as she does not use the zoom feature in her ultrasound scans. Instead, sheprefers adjusting the sector size and depth parameters to provide a zoomedand clear image with high definition, which still counts as a form of imagemagnification.6.5.2 Observed Gaze-supported Interaction ChallengesWith 10 participant sonographers, a total of 10 x 30 = 300 trial sampleswere collected in total for analysis.To put the analyzed results in perspective, we identified two classesof challenges related to the completed tasks through the processes of re-observing the recorded trials through Gazepoint Analysis and transcribingthem, as summarized in Table 6.3.Table 6.3: Encountered Challenges During The Clinical User Study TrialsChallenge Classification Challenge1. System / Interaction Design 1.1. Calibration1.2. ROI Lost Before Capture2. Participant Behaviour 2.1. Multiple Trials2.2. Unnatural Forced Gaze Input(Related to MZ)2.3. Forced Gaze Input OverSmall Areas (Related to MZ)The System / Interaction Design class describes issues which the userhas no control over and are inherent to the way the system is designed andimplemented.• 1.1. Calibration: refers to a high offset at any area of the screenbetween the point of gaze detected by the eye gaze tracker and theactual point of gaze of the user.• 1.2. ROI Lost Before Capture: we observed that in some cases,the Midas Touch problem [32] is not completely avoided, even afterusing a manual input to activate the eye gaze input. Out of habit,sonographers tend to use some manual inputs and simultaneously lookelsewhere outside the screen, either to glance at the manual controls,the probe or the patient. This occasionally happened as the sonogra-pher rotated the zoom knob or held down the gaze input button and1426.5. 
Resultslooked away, which caused the system to take a false gaze input andperform zoom into an unintended area, which resulted in losing theregion of interest (ROI) and required re-zooming. This resembles thechallenge encountered during the context-free user study “unintendedMidas touch effect”, presented in Chapter 5.Participant Behaviour challenges are those related to the participant notfollowing the instructions or tips provided during the demo session by theresearcher for zooming into and capturing targets using the gaze-supportedalternatives. These issues could be mitigated by providing the user withenough training to allow a better understanding of the interaction tech-niques.• 2.1. Multiple trials: in some rare cases, the participant confusedone target with another for a particular task and had to zoom out andre-zoom to the correct target.• 2.2. Unnatural Forced Gaze Input: this occurred in some of thetrials of the gaze-supported Multi-step Zoom technique. As partici-pants were provided the option of using the gaze-supported feature tomove the zoom box, they sometimes enabled it even when it wasn’tneeded. For instance, sometimes the participant moved the zoom boxmanually with the trackball away from the target then used the gazefeature to return it back over the target.• 2.3. Forced Gaze Input Over Small Areas: similar to 2.2., thisalso occurred with the gaze-supported Multi-step Zoom technique. Inthis case, participants had the zoom box roughly over the correct area,but used the eye gaze input to perform fine movements, which shouldoptimally be adjusted with a trackball instead, as instructed during thedemo session of the user study. This attempt to perform slight move-ments with eye gaze typically offsets the zoom box to an unwantedarea, which causes frustration and requires the user to re-perform theplacement of the zoom box.In our quantitative analysis, we only take into consideration the lastcorrect trial and eliminate all metrics collected from incorrect targets.The researcher always informed the participant of zooming into anincorrect target and requested re-zooming to the correct one.We also observed that, especially during the patient session, sonogra-phers always place the ultrasound transducer so that the target struc-ture is in the middle of the acquired image, which in turn does not1436.5. Resultsrequire much panning of the image after zooming in. This behaviouralone undermines the potential for using gaze-supported zooming, asone of the major advantages, especially for OZ, is the minimization ofthe required panning as it zooms and pans simultaneously. As a re-sult, sonographers reported not finding a difference in the interactionbetween gaze-supported and manual-based zooming for patient tasksand some phantom tasks.6.5.3 Qualitative ResultsGeneral Feedback on The Presented SystemPotential for Beginner Sonographers One area where the One-stepZoom eye gaze system is found beneficial is in reducing the amount of inter-action with the manual controls. Thus, the sonographer can focus more onassessing the acquired image. Especially for novice users, learning the man-ual controls can take up much of the learner’s cognitive attention. P6 andP9, student sonographers, found the potential in using gaze-supported func-tions in keeping the student sonographer’s main attention on the acquiredimage. 
As students, one of the frequently occurring scenarios is overly fo-cusing on learning the manual controls, which causes the learner to lose herattention of the probe, causing it to subtly move and lose the area of interestin the acquired ultrasound image.Added Cognitive Load Associated with MZ With the current capa-bilities of the gaze-supported MZ, P1 and P2, professional sonographers, didnot find the added eye gaze feature an improvement. P1 and P9 reportedthat the added step to use eye gaze in the Multi-step Zoom function to movethe box to the approximate region is a burden rather than a simplification asthe sonographer must eventually use the trackball to refine the positioningof the box.Similarly, student sonographers witnessed some disadvantages with theeye gaze system that could halt its advantages. P7 found it an extra burdenthat she must remember to switch on the eye gaze feature when needed.P8 noticed that she occasionally moved the zoomed area of interest to anunintentional location as she subconsciously held the gaze-activation buttonwhile looking elsewhere either to examine some other feature of the imageor to look for the location of a manual input on the manual controls panel.P2 faced some calibration deterioration issues during the use of the sys-tem and stated that the gaze-based system isn’t up to an expected level.1446.5. Results“A new interface has to be 10 times better than what we are doing inorder to change the habit (of using the manual-based system). It has to besuper accurate and very quick.” - P2Negative Reliability One important potential drawback of using eyegaze for fast zooming into images was brought up into attention by P4: itmight speed up the interaction to an unwanted level that the sonographerno longer pays attention to the small details in the image. In other words,using a faster zoom function might not necessarily improve the performanceor the image content quality.“When we scan we are basically looking at the whole organ to find minuteabnormalities. So, if let’s say with the eye gaze it immediately focuses onone thing, you might miss other things.” - P4She also expressed concern for using this function in the long-run, cre-ating an unwanted reliability on the system by the sonographer.“I think it’s helpful but at the same time you can become complacent,because I think that if it is doing the work for me, then I don’t have to reallyfocus too much or really search for abnormalities.” - P4Insufficient Amount of Practice with Gaze-supported Input Giventhe short amount of time sonographers were provided to practice using thegaze-based system, P1, P3 and P5 reported that they found potential inthe system, but only if they were provided a sufficient amount of time topractice using it. P4 found it helpful, but very different from the currentinteraction:“The eye gaze tracking system is hard to learn but also quite efficient.When you zoom in, the target is just there, so you won’t have to keep movingyour eyes around.” - P4Given that two functions were improved with eye gaze (One-step Zoomand Multi-step Zoom), participants often mixed up the steps required to usethe zoom feature of the two functions, which is an expected behaviour in ashort one hour-long user study session.Phantom Targets are Unrealistic P3 stated that she found difficultyin positioning the zoom box over the phantom task that required framingmultiple targets as she had to look in between the two middle targets toposition the box. 
She clarified that in typical ultrasound imaging tasks,there is only one target with a known centre and having to frame multiple1456.5. Resultstargets in one image is unrealistic. After the user study, P5 followed up withan email expressing similar observations:“It would be unusual for us to zoom in some of the ways outlined in theactivity. We would rarely use a long narrow sample box to zoom multipletarget areas in a line. Most of the areas we zoom on are singular and usea square box. It is also unusual for us to use a very small sample box andzoom in as close up as we did in the activity. The sample box is rarely lessthan a quarter the size of the entire sector. We zoom in a bit but alwaysleave information surrounding what we are focused on to provide relationalinformation.” - P5Sonographers’ Routine in Ultrasound ScansPreferred Types of Zoom for Ultrasound Scans Although the zoomfunction is one of the most frequently-used functions in sonography, basedon our field study results presented in Chapter 3, when asked about theirultrasound scans routine outside this user study, the recruited sample ofsonographers turns away, in general, from using the zoom function providedwith ultrasound machines as the One-step Zoom (known in ultrasound ma-chines as Low-resolution Zoom) degrades the acquired image quality andMulti-step Zoom (known in ultrasound machines as High-resolution Zoom)does not provide an enhanced image quality that competes with other mag-nification techniques, such as decreasing the depth parameter of the acquiredimage. P5 clarified further that she prefers adjusting the sector width andthe depth parameters to zoom into particular targets located near the sur-face instead of using the built-in zoom functions of ultrasound machines.Participants were asked what type of zoom they typically use in theirultrasound scans. Surprisingly, a lot of variations in responses were observedeven with the small sample size interviewed, which could be attributed tothe different types of exams the sonographers typically perform, their ex-perience with ultrasound machines and their age. Out of ten participants,seven (including all participant student sonographers) typically use Multi-step Zoom instead of One-step Zoom, despite the fact that it takes longersteps to get to, as they are most concerned about the quality of the image.Only one participant prefers the One-step Zoom to the Multi-step Zoom andtwo participants do not use zoom at all during their ultrasound scans andprefer to use depth adjustments and sector width instead.“When I was getting my training, we were discouraged from using thezoom function. When you use the zoom function, you cut off the anatomyand you do not see the landmarks.” - P21466.5. ResultsP1 reported the usage of a mixture of both types of zoom, with a ten-dency to use the Multi-step Zoom in her ultrasound scans as it providesclearer images than One-step Zoom. P5 stated that she used the zoomfunction only when she was receiving her overall training on the ultrasoundmachine as a student, but has never used it afterwards. Similarly, P7, a stu-dent sonographer, reported that she typically uses a mixture of both zoomfunctions, especially when zooming into very small features, such as targetsin early pregnancies. 
For instance, when the Multi-step Zoom cannot zoomany further, she tends to use the One-step Zoom to further magnify theimage and accurately place the calipers for doing measurements.P3’s technique is a little different from the other sonographers inter-viewed: she tends to capture the overall acquired image, then zoom andcapture another image of the area of interest. Therefore, the overall con-textual image is provided in a separate image for the radiologist to analyzethe target within its environment. Supporting the importance of providinga context of the zoomed target, P1 stated “You want a little bit of context,but you do not want too much surroundings” as she zooms in to eliminatebackground noise while leaving some context for the radiologist to locatethe target with its environment.P4 had a completely different preference as she prefers using the One-step Zoom function in her exams instead of modifying the zoomed areaparameters as offered by the Multi-step Zoom function. This is due to thefact that she does not substantially zoom into areas in images, but zoomsonly to differentiate one structure from another.General Preference of Multi-step Zoom Over One-step Zoom Inaddition to maintaining the image quality, another reason for using theMulti-step Zoom is that it provides a full visualization of the image beforeconfirming the zoom action. On the contrary, One-step Zoom cuts off therest of the image as the user gradually zooms in.“Once you zoom in, slight movements would take you out of the area ofinterest by moving the probe. And it’s very hard to track the area once youhave zoomed in already.” - P6In the case of One-step Zoom, sonographers reported that they mainlyuse it when they generally would like to quickly assess an area withouttaking further actions. Also, One-step Zoom is used when the image qualityand detail is not a priority in the produced image, which is a rare case insonography. Lastly, it is used to further magnify when Multi-step Zoomreaches its limits, as in the case reported by P7.1476.5. ResultsFrequency of Use of The Zoom Function The frequency of use ishighly patient-dependent as well as target-dependent. All student sonogra-phers reported that zoom is always used in cardiac ultrasound as it is part oftheir “routine”. Other cases also apply for other types of ultrasound exams.For instance, P1 reported using the zoom function when she would like totake an image of the long shape of the endometrium without the distractingsurroundings. Other examples are often found in obstetrics and gynecology,such as imaging the fetal heart or particular fetal organs and the ovaries.Sonographers also reported frequently zooming into cists or lesions found inthe kidneys, the liver or the breasts. The common bile duct is another ex-ample that requires a small level of zoom to enlarge and centre the commonbile duct in the image. The zoom function can be used as an intermedi-ate step to another function in the ultrasound machine. For instance, P1uses the zoom function concurrently with the colour Doppler function whenshe would like to observe the blood movement in small areas and in smallamounts of blood flow. P2 reported that when he used to use the zoomfunction, it was only to record a zoomed video of the fetal heart to be sentto the radiologist to show that the heart of the fetus is beating.The zoomed targets can be both regular and irregular. For example, cistsand stones are typically circular with a clear centre. 
P1 stated that some lesions in the liver are highly similar to the phantom targets presented during the study. She also stated that linear shapes are not often found unless they are foreign objects, such as needles or IUDs, or in some cases long blood clots in veins. P4 stated that even shapes that are typically regular can be irregular in some cases. For instance, the liver hemangioma is typically round, but it could be irregular with speckles inside it. Also, the shape of the target depends on the posture, position, bowel gas and breathing of the patient.

Participants' Perceived Eye Behaviour

In terms of eye behaviour, all interviewed sonographers reported that they tend to scan the whole area and then zoom into a particular target of interest. P2 stated that although the sonographer is paying attention to the centre of the image, the sonographer's eyes are actively looking for abnormalities in the peripheral vision. It is also anatomy-dependent: scanning large structures, such as a liver, is different from scanning small structures, such as a thyroid or a lymph node. When scanning large structures, a sonographer's vision is looking everywhere for abnormalities. On the other hand, scanning small structures does not require as much visual attention. In addition, three of the student sonographers reported that they frequently use the context view to help them localize their position in the overall acquired image.

6.5.4 Quantitative Results

Figure 6.7: The Observed Issues in Gaze Interaction During the Clinical User Study

Out of all gaze-supported tasks, including phantom and patient tasks, 29.6% of the trials exhibited one of the aforementioned observed challenges. Figure 6.7 illustrates a summary.

Gaze-supported Features Usage

We observed an inconsistency in the usage of the Multi-step Zoom technique when the participants were instructed to use the gaze-supported methods. Figure 6.8 summarizes these behaviours: in 45% of the trials during the phantom session and gaze-supported sub-session, participants did not prefer to use the eye gaze feature at all and performed the task with a fully manual-based Multi-step Zoom technique. In addition, in 40% of the trials during this sub-session, participants used the eye gaze input incorrectly or encountered interaction issues. Only 15% of the trials were performed as the gaze-supported Multi-step Zoom interface is intended to be used.

Figure 6.8: Participants' Behaviour During the Phantom Session, Gaze-supported Multi-step Zoom Trials

Techniques Followed for the Patient Tasks

As participant sonographers were given the choice to select either zoom technique during the patient tasks, the majority preferred to use the Multi-step Zoom technique when instructed to use the manual-based interface and the One-step Zoom when instructed to use the gaze-supported interface. Figure 6.9 summarizes the participants' choices. Note that, similar to the phantom session, when participants were instructed to use the gaze-supported interface, some preferred to avoid gaze input altogether and replaced it with a manual-based input.

Figure 6.9: Participants' Choice of Input Method and Technique During the Patient Session: (a) Gaze-supported sub-session; (b) Manual-based sub-session
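The statistical analysis of these trials follows the mixed-models approach with a Bonferroni correction described in Section 6.4. The analysis scripts and the statistical package used are not reproduced in this thesis; the following is a minimal Python sketch, using pandas and statsmodels, of how such a repeated-measures comparison could be set up. The file name, the column names and the random-intercept-only model specification are illustrative assumptions rather than the exact model fitted in this work.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format trial table: one row per trial, with columns
    # participant, input_method, technique, target and time_on_task.
    df = pd.read_csv("trials.csv")

    # Mixed-effects model: fixed effects for the experimental factors and a
    # random intercept per participant to account for repeated measures.
    model = smf.mixedlm("time_on_task ~ input_method * target + technique",
                        data=df, groups=df["participant"])
    result = model.fit()
    print(result.summary())

    # Bonferroni correction with m = 4 families of dependent variables, as
    # described in Section 6.4: an effect is reported only if its p-value
    # clears the adjusted threshold.
    ALPHA, M_FAMILIES = 0.05, 4
    alpha_adjusted = ALPHA / M_FAMILIES   # 0.0125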
6.5.5 Results from the Mixed Models Analysis of Variance

A 2 (Input Method: Gaze-supported, Manual-based) x 3 (Zoom Technique: OZ, MZ, Mixed) x 3 (Target Type: Regular Phantom, Irregular Phantom, Patient) within-subject Mixed Models ANOVA was performed on the collected data, with two trials out of 300 discarded due to missing eye gaze data.

By regular phantom targets, we refer to targets that have a uniform shape in the phantom, such as the training phantom targets 1 to 8 and 13 to 16. By irregular phantom targets, we refer to multiple uniform shapes in a row (or a column), such as the training phantom targets 9 to 12.

Table K.1, found in Appendix K, includes a summary of the post-processing applied to the collected data, including transformations and the number of trimmed outliers to correct violated assumptions, and reports on the violated assumptions that persisted even after data post-processing. Outliers are trimmed if they lie more than 2.5 standard deviations above or below the mean.

Effects of Input Method

Time on Task. There was a main effect of Input Method on Time on Task, F(1, 278) = 22.91, p < 0.001. Pairwise comparisons showed that users spent significantly more time on task using the gaze-supported input method (Gaze-supported M = 21.43 sec, SD = 0.22; Manual-based M = 17.38 sec, SD = 0.20), as shown in Figure 6.10a. This result aligns with the fact that some participants did not find the provided user study time of one hour in one day enough to train on the gaze-supported tasks. Therefore, we hypothesize that much of the time difference between the two input methods is attributable to the learning effect.

There was an interaction effect between Input Method and Target on Time on Task, F(2, 275) = 9.91, p < 0.001. Bonferroni post-hoc tests examining the interaction effects found that there is no significant difference in terms of performance between the two input methods when the target is the phantom. The source of variation is largely due to the patient CBD target, with a difference of ∆ = 8.54 seconds, p < 0.001. This means that when sonographers are faced with a patient task, their performance using the gaze-supported interaction drops by 54.5%. This drop in performance could be attributed to a number of factors, including the context switch required to communicate with the patient: in the case of gaze-supported input, the sonographer has to be aware of her gaze at all times, which makes the context switch between the ultrasound image and the patient more demanding. It could also be attributed to the fact that 30% of the techniques selected during the patient gaze-supported sub-session were the gaze-supported Multi-step Zoom, which takes significantly longer to use compared to the gaze-supported One-step Zoom, as will be discussed in later results.

Figure 6.10: Time on Task Statistics: (a) Main Effect by Input Method; (b) Interaction Effect Between Input Method and Target

By referring to Table 6.4, we further observe that gaze-supported interaction performs best with regular phantom targets, which are often captured with One-step Zoom.
A significant difference, p <0.001, betweenscanning regular phantom targets and patient targets when using the gaze-supported input, and an identical mean of time on task performance forirregular phantom targets and patient targets support this finding.On the other hand, Table 6.4 also shows that the manual-based interac-tions perform the best with patient targets with a significant difference, p =0.008, from the irregular phantom targets, which could be due to the famil-iarity of the participant with both, the input method and the target. Thesecond-best target scanned with a manual-based input is the regular phan-tom targets, with a significant difference from irregular phantom targets, p= 0.045.This low performance with irregular phantom targets could be due tothe fact that the shape of the irregular phantom targets is unusual for sonog-raphy tasks, as reported earlier by P3 and P5.Table 6.4: Mean Time on Task Based on Input Method and TargetInput Method Gaze-supported Manual-basedPhantom Regular 16.75 s 16.44 sPhantom Irregular 24.21 s 22.01 sTargetPatient 24.21 s 15.67 sEye Movement Velocity. No main effect was found of input method oneye movement velocity. However, there was an interaction effect betweenInput Method and Target on Eye Movement Velocity. F (2, 275) = 5.01, p= 0.028. Bonferroni post hoc tests examining the interaction effects foundthat users’ eye velocities are significantly different across input methodsonly when users were scanning regular phantom targets (p = 0.001). Theinteraction is illustrated in Figure 6.11.Fixation Rate. No main effect was found of input method on fixationrate.1536.5. ResultsFigure 6.11: The Interaction Effect Between Input Method X Target onEye Movement Velocity: with higher zoom levels, using the gaze-supportedinteraction slows down the eye movement velocity.Mean Fixation Duration. There was a main effect of Input Method onMean Fixation Duration. F (2, 274) = 17.28, p <0.001. Pairwise compar-isons showed users’ mean fixation durations are significantly higher usinggaze input methods (M = 483.62 ms, SD = 1.26) compared to manualmethods (M = 427.27 ms, SD = 1.28), as shown in Figure 6.12. This resultaligns with the fact that some participants found the gaze-supported zoommethods more cognitively demanding, as some research in eye tracking sug-gests that longer fixation durations is an indication to higher cognitive loaddue to the allocation of the cognitive capacity to information processing [49][63].Mean Path Distance. No main effect was found of input method onmean path distance.Effects of TechniqueTime on Task. There was a main effect of Technique on Time on Task.F (2, 281) = 5.24, p = 0.024. Pairwise comparisons showed that users spentsignificantly less time using the OZ technique compared to the MZ technique1546.5. ResultsFigure 6.12: Main Effect on Mean Fixation Duration By Input Method(OZ M = 17.18 sec., SD = 0.20, MZ M = 21.09 sec., SD = 0.19). This ismainly due to the fact that MZ requires more steps than OZ to achieve therequired magnified image, which consequently takes longer to achieve.Eye Movement Velocity. There was a main effect of Technique on EyeMovement Velocity. F (2, 282) = 7.13, p = 0.004. Pairwise comparisonsshowed users’ eye velocities are significantly faster using MZ (M = 914px/s, SD = 346.73) compared to OZ (M = 724 px/s, SD = 373.42). Thisis likely to be since the sonographer rapidly moves her eye gaze around theedges of the zoom box to ensure the correct placement around the area ofinterest.Fixation Rate. 
No main effect was found of technique on fixation rate.Mean Fixation Duration. No main effect was found of technique onmean fixation duration.Mean Path Distance. There was a main effect of Technique on MeanPath Distance. F (2, 270.68) = 10.62, p <0.001. Pairwise comparisonsshowed users’ fixations mean path distances are significantly longer usingMZ (M = 556.67 px, SD = 174.95) compared to OZ (M = 356.60 px, SD =183.66), p <0.001. This is due to the fact that participants are instructedto use MZ with irregular targets, which cover up more space within theultrasound image compared to regular targets that are zoomed with OZ.1556.5. ResultsEffects of TargetTime on Task. There was a main effect by Target on Time on Task.F (2,278)=5.70, p = 0.016. Pairwise comparisons showed that users spentsignificantly less time to capture the regular phantom targets compared tothe irregular phantom targets (Phantom Regular M = 6.56 sec., SD = 0.20,Phantom Irregular M = 23.12 sec., SD = 0.18). Similarly, this is due to thefact that most of the regular targets were zoomed in using OZ, which takessignificantly less time to use than MZ.In addition, differences were found when using the manual input methoddepending on the type of target scanned. Users took significantly longerscanning irregular phantom targets in comparison to regular phantom tar-gets (p = 0.012) and took significantly longer scanning irregular phantomtargets compared to patient targets (p = 0.002). Figure 6.10b illustratesthese interaction effects.Eye Movement Velocity. There was main effect on Eye Movement Ve-locity was by Target. F (2, 277) = 15.54, p <0.001. Pairwise comparisonsshowed that users’ eye velocities are significantly slower when scanning pa-tient targets (M = 650 px/s, SD = 320.02) compared to regular phantomtargets (M = 863 px/s, SD = 401.87). Similarly, velocities are lower scan-ning patient targets compared to irregular phantom targets (M = 1022 px/s,SD = 315.92). This decrease in speed in eye movement could be due to thesonographers’ analysis and close examination of the CBD. On the otherhand, phantom targets do not require much examination as they are notrealistic patient targets that need analysis.Fixation Rate. There was a main effect of Target on Fixation Rate. F (2,421767) = 6.22, p = 0.008. Pairwise comparisons showed users’ fixation ratesare significantly higher scanning irregular phantom targets (M = 1.90 fix/s,SD = 0.32) compared to scanning patient targets (M = 1.60 fix/s, SD =0.37).Mean Fixation Duration. No main effect was found of target on meanfixation duration.Mean Path Distance. There was a main effect of Target on Mean PathDistance. F (2, 278)= 7.15, p = 0.004. Pairwise comparisons showed users’fixations mean path distances are significantly longer scanning phantom reg-ular (M = 424.58 px, SD = 216.83) and phantom irregular (M = 579.54 px,1566.5. ResultsSD = 155.04) compared to patient targets (M = 395.18 px, SD = 183.21),with p values of <0.001 and equal to 0.002 respectively. Again, this is possi-bly due to the familiarity of the sonographer and the nature of repetitivenessof the patient tasks compared to the phantom tasks.6.5.6 Suggested Improvements for Other UltrasoundMachine FunctionsDespite the varied results received from participants, sonographers foundpotential in the technology to improve other functions in the ultrasoundmachine after testing the capabilities of the eye tracking technology inte-grated with the machine. 
P2 suggested a use for the eye gaze feature tohelp identify the anatomy the sonographer is looking at and suggest anno-tations with the help of some image recognition feature. Another suggestionprovided by P2 is to use the eye gaze- supported features to help inspectlarger ultrasound images, such as panoramic images. P5 suggested usingeye gaze with ultrasound machines as a teaching aid for teaching studentsonographers what areas they should be inspecting and what tissues shouldbe assessed. Similarly, this eye gaze can be recorded in practice and pro-vided to radiologists to highlight the areas the sonographers were inspectingduring the scan. Additionally, the same approach for setting the positionof the zoom window of the Multi-step Zoom function can be adopted formoving the Doppler window around the screen automatically based on eyegaze.6.5.7 Discussion of ResultsBased on the quantitative results presented, we reject the null hypothesisregarding time on task (H1). Similarly, provided the reported qualitative re-sults, along with the longer fixation times, we reject the second null hypoth-esis regarding cognitive load (H2). Through our user study, we found thatgaze-supported interaction takes significantly (23.3%) longer than manual-based interaction. Similarly, participants reported a higher cognitive loadassociated with the gaze-supported solutions, especially for the Multi-stepZoom function. Mean fixation durations are higher by 13% using the gaze-supported input compared to the manual-based input. However, given thenovelty of the interaction, we acknowledge that these results are not con-clusive, as the interaction has been tested for one day, under one hour perparticipant.Using a gaze-supported zooming approach in diagnostic sonography has1576.6. Conclusionits potential only in niche areas. As noted by student sonographers, thiscould be helpful to alleviate the distractions caused by learning the man-ual ultrasound controls for beginners and allow them to focus more on thepresented ultrasound image. For expert sonographers, this benefit makesno difference as they are very skilled at operating the ultrasound machine’smanual controls.In fact, our results regarding longer time on task performance and highermental demands, in addition to the qualitative feedback collected from theparticipant sonographers, suggest that complications arising from using thegaze-supported interface in the long run will hinder the benefits intended bythe design of the gaze-supported solution in terms of lowering the repetitivestrain injuries for ultrasound machine users by dividing the input betweenthe motor control channel and the visual channel.LimitationEvaluating ultrasound machine interfaces is of particular challenge, as foundin earlier studies [4], as there are many factors influencing the interaction,including, but not limited to: the user’s level of experience, the user’s atti-tude to the product, the clinical application, the clinical work flow and thetype of ultrasound system tested. The challenge is even manifested as wehave limited access to users. Therefore, we acknowledge that there could besome external factors influencing our results. 
The most important factor isthe tendency of the recruited participant sonographers to use MZ over OZ intheir daily routine scans (high-resolution zoom is preferred to low-resolutionzoom) or other mechanisms to magnify ultrasound images.6.6 ConclusionWe present a context-focused clinical user study designed for sonographers toassess the proposed gaze-supported zooming interaction quantitatively andqualitatively in terms of time on task and added cognitive load. We base theuser study design on the routine CBD scan performed by sonographers pro-vided its simplicity and familiarity by all expert and student sonographers.We include phantom targets in the structure of the user study as well toexamine the behaviour of sonographers and the performance of the systemwith a variety of controlled shapes.The user study takes place at our lab with an environment setting closelymatching to the ultrasound room at a hospital. The user study is struc-tured so that each participant receives an equal amount of training using1586.6. Conclusionthe gaze-supported zoom functions before performing the user study tasks.The user study tasks involve an equal number of tasks to be performed withthe manual-based and gaze-supported zoom interaction. They also involvetasks to be performed by One-step Zoom and Multi-step Zoom. The userstudy tasks are followed by a discussion with the recruited participant sono-grapher to qualitatively evaluate the presented gaze-supported system andits potential compared to the traditional manual-based system.A total of five expert sonographers and five student sonographers par-ticipated in the user study with varying levels of experience. Through ourresults analysis, we observe five frequent gaze-supported interactions relatedchallenges, which occurred during 29.6% of the gaze-supported tasks.Sonographers found potential in the tested gaze-supported interactionfor training student sonographers, as it alleviates the need to focus on thephysical input layout and allows for higher focus on the ultrasound image.However, they also report an increased cognitive demand, especially whenusing Multi-step Zoom, due to the novelty of the interaction and the needfor the users to “be aware of where they are looking” at all times when thegaze is being actively used as an input.In alignment to the results obtained from the context-free game-baseduser study, we find that gaze-supported OZ is significantly faster than gaze-supported MZ, used more often by participant sonographers during the gaze-supported patient session and shows no significant difference in terms ofmean fixation duration. Therefore, out of the two techniques, we find thatgaze-supported interaction performs better when it is implicitly integratedas a control input.We observe that there could be external factors influencing our results.The most important factor is the tendency of the recruited participant sono-graphers to use the Multi-step Zoom in their daily routine scans or to avoidusing the zoom function altogether and use alternative mechanisms to mag-nify ultrasound targets. Another factor is the insufficient exposure to thegaze-supported system and the lack of training.As for manual-based interaction outside the presented user study, sono-graphers tend to use Multi-step Zoom over One-step Zoom as it does notdegrade the resolution of the zoomed image. 
Had ultrasound machines pro-vided high resolution images with the One-step Zoom approach, this issuewould not be a concern anymore for the sonographer’s preference of theMulti-step Zoom interaction over the One-step Zoom. However, some sono-graphers might still prefer MZ over OZ, as it provides higher visualization,as discussed earlier.159Chapter 7Conclusions AndRecommendations7.1 ConclusionsIn this thesis, we follow a user-centred design approach to investigate, designand evaluate two multi-modal gaze-supported zoom interactions, Multi-stepZoom and One-step Zoom, for zooming into the acquired images in ultra-sound machines. We define the zoom functions in ultrasound machines,the High-resolution zoom and Low-resolution zoom, as a subset of a largergroup of functions concerned with image magnification and analyze the userinteraction to create an informed gaze-supported interface design.We present a complete state-based analysis of zoom functions, OZ andMZ, in ultrasound machines and integrate gaze tracking capabilities. Wetest our presented gaze-supported zoom interactions through a series ofuser studies. Results from the context-free game-based user study helpedus identify the potential improvements and challenges of using the investi-gated gaze-supported interaction techniques. We observed that both gaze-supported techniques required lower physical demand at the cost of intro-ducing a higher mental demand compared to the manual-based techniques.In addition, other challenges were observed, such as inaccuracies at high lev-els of zoom and an “unintended Midas touch effect” in both gaze-supportedtechniques and a higher context switch in the gaze-supported MZ.A total of five expert sonographers and five student sonographers partici-pated in the context-focused user study. We observe five frequent interactions-related difficulties, which occurred during 29.6% of the gaze-supported trials.In our results, we find that gaze-supported interaction requires significantlyhigher time on task and longer fixation duration compared to manual-basedinteraction. This indicates that using the presented gaze-supported alterna-tives is slow and cognitively demanding for the selected sonography tasks.Sonographers report an increased cognitive demand due to the novelty of theinteraction and the need for them to “be aware of where they are looking”1607.2. Recommendationsat all times when the gaze is being actively used as an input.However, participants in both user studies reported that they experi-enced a higher focus on the main tasks when using the gaze-supported OZtechnique compared to the manual-based alternative. This is because theirattention was not occupied with managing the manual controls. In addition,sonographers found potential in the presented gaze-supported interaction fortraining student sonographers, as it alleviates the need to focus on the phys-ical input layout and allows for higher focus on performing their main taskof analyzing the ultrasound image.To compare the two zoom-supported interaction, One-step Zoom showshigher potential than Multi-step Zoom, as it is proven to perform faster,depending on the targets. On the other hand, gaze-supported Multi-stepZoom added to the complexity of the task that is attributed to the repetitivecontext switch required to activate and deactivate the eye gaze input to movethe zoom box. 
In other words, we find that Multi-step Zoom violates the rule of using implicit gaze input, as detailed in Chapter 4, because the user ends up actively controlling the position of the zoom box on the screen. Conversely, One-step Zoom does not require the user to actively control an object on screen; it implicitly uses the location of the user's eye gaze to zoom into an area of interest.

Since both user studies were performed for only one hour per participant, the higher times on task could be attributed to the fact that the participants did not have enough exposure to, and experience with, the presented gaze-supported zooming interface. Further evaluation is recommended to test the interaction independently of this learning effect. We recommend a longitudinal study with novice sonographers extending over multiple days.

Given these results, we find no substantial evidence that gaze-supported zooming is beneficial in ultrasound machines in terms of improving speed and mental workload. However, the observed potential of the gaze-supported OZ technique in terms of higher focus on tasks and reduced physical strain could be further investigated in follow-up studies.

7.2 Recommendations

To take this work further, we recommend another iteration with a longitudinal study performed with student sonographers in their first year of training to test the effect of the interaction over a longer period, as participants in both iterations were exposed to the gaze-supported system for less than an hour.

For the next iterations, we recommend extending the set of eye gaze features we examined to bring more insight into users' cognitive load. For instance, we recommend examining pupil dilation, since it is another known gaze feature that changes with mental workload [7]. In addition, we recommend investigating Area of Interest (AOI) features, which would allow checking for differences in how users process specific regions of the interface, e.g., target set shapes.

We also recommend a formal evaluation of the ergonomics of the new interaction, as ergonomics is a main concern in the field of sonography, as discussed earlier. Ultrasound machine interface ergonomics evaluation techniques are discussed in [3] and [64], including motion analysis, superficial electromyography, digital human modelling and observational studies with camera recording.

In terms of system design, one possible modification to the fast zoom function would be to reduce the gaze sensitivity gradually (or stop it altogether) once the image reaches a certain zoom level, as the error grows larger at higher zoom levels.

In addition, other areas of the ultrasound machine can be explored for the integration of eye gaze tracking. Another object-dragging task that would benefit from eye gaze support, although it is one-dimensional, is automatically setting the focus level to the depth the sonographer is interested in (i.e., where the point of gaze is located). However, automatically changing the focus levels based on eye gaze alone would lead to the Midas touch problem: the sonographer might simply be inspecting a particular depth of the image without intending to set the focus levels to it. Therefore, similar to the object-dragging task discussed in Chapter 4, a muscle group has to be engaged while setting the focus levels. This could be implemented simply by holding down a button while the focus level is set; a minimal sketch of this clutch-style gating follows.
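As an illustration of this clutch-style gating, the following is a minimal sketch, not part of the implemented system, in which the focus depth follows the vertical gaze position only while a dedicated button is held. The class, the set_focus_depth callback, the smoothing window and the pixel-to-depth mapping are all illustrative assumptions.

```python
from collections import deque


class GazeFocusClutch:
    """Button-gated ("clutch") gaze input for setting the ultrasound focus depth.

    The focus level follows the vertical eye gaze position only while the
    clutch button is held down, which avoids the Midas touch problem of a
    purely gaze-driven control.
    """

    def __init__(self, set_focus_depth, image_top_px, image_height_px, smooth_n=10):
        self.set_focus_depth = set_focus_depth   # callback into the machine (assumed)
        self.image_top_px = image_top_px         # top edge of the B-mode image on screen
        self.image_height_px = image_height_px   # height of the B-mode image in pixels
        self.recent_y = deque(maxlen=smooth_n)   # short moving average to damp gaze jitter
        self.clutch_down = False

    def on_button(self, pressed: bool):
        # Engaging a muscle group (holding the button) is what expresses intent.
        self.clutch_down = pressed

    def on_gaze_sample(self, gaze_y_px: float, valid: bool):
        if not valid:
            return                               # ignore blinks / lost tracking
        self.recent_y.append(gaze_y_px)
        if not self.clutch_down:
            return                               # gaze alone never changes the image
        y = sum(self.recent_y) / len(self.recent_y)
        # Map the smoothed vertical gaze position to a relative image depth in [0, 1].
        depth = (y - self.image_top_px) / self.image_height_px
        self.set_focus_depth(min(max(depth, 0.0), 1.0))
```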
The task is still 1-dimensional,but the manual input device required is no longer a 1-dimensional input,but a binary input supported with the implicit input of the eye gaze.In Zhai’s MAGIC work [68], hotspots were identified in the user inter-face. If the user’s eye gaze is in the vicinity of a hotspot, it is automati-cally drawn into it, which makes recognition easier for areas where the usermight be looking at. For future work, the same approach can be appliedby performing image processing on the ultrasound images to recognize the“hotspots” or areas that the sonographer is likely to perform ultrasoundfunctions on (magnify, measure, etc.), which could improve the performanceof eye tracking-supported ultrasound functions.Other areas where eye gaze interaction can be of help is in the devel-opment of gaze-based menu selection for ultrasound image parameters for1627.3. Contributionshands-free cases, such as for ultrasound-guided interventional procedures,in case of absence of an ultrasound operator to assist the radiologist inperforming the procedure. Earlier work in eye gaze tracking explored gaze-based menu selection interaction, such as that presented in [36]. A popularapproach for gaze-based selection techniques is gaze gestures, as presentedin [16], which shows how it can be integrated into standard desktop ap-plications. Work such as presented in [52], explores automatic cropping ofimages based on eye gaze fixations. This work is completely gaze-based anddoes not use a secondary input modality to support the interaction. Thiswork can be beneficial to develop automatic zoom methods for hands-freeultrasound machine interaction, such as when used in ultrasound-guidedprocedures and surgeries. Given the frequent context switch that sonogra-phers undergo during an ultrasound scan, it might also be worth exploringeye gaze context switch facilitation techniques, such as that presented in[38], to aid the sonographer in switching from the ultrasound image andback to the same area that was being examined.7.3 ContributionsOur contributions in the work presented include the following:1. We presented results from a field study that includes observations ofroutine diagnostic ultrasound exams at two hospitals, interviews withsonographers, and a survey to members of the British Columbia Ul-trasonographers’ Society. Through the field study, we understand thecontext of sonography, the ultrasound machine functions used duringultrasound exams and the challenges faced by sonographers, includingthe encountered WRMSDs. Thus, we identified the potential advan-tages and the starting point to using a gaze-supported multi-modalultrasound machine interface and the potential risks associated withit.2. We presented an analysis of the magnification-related functions andthe tasks associated with them during ultrasound exams through acombination of results obtained through the field study and the userstudies. We find that magnification is always needed in ultrasound ex-ams. However, the approach to achieving magnification can vary. Ourfield studies show that zoom is one of the most frequently-used func-tions in ultrasound exams. Our interviews with sonographers during1637.3. Contributionsthe context-focused user study show that sonographers also use dif-ferent approaches to magnify targets, including controlling the imagedepth and sector size. In addition, the type of zoom function used,High-resolution or Low-resolution Zoom, highly depends on the appli-cation, the target, and the preferences of the sonographer.3. 
We presented a state representation of the manual-based zoom func-tions available in the class of ultrasound machines used in routinediagnostic sonography to better understand and visualize the interac-tion. Our state representation shows that the zoom functions analyzedimplement all or a combination of three states: Full-scale, Pre-zoomand Zoom.4. We presented a modified state representation of the zoom functions,which integrates the gaze input, implicitly and explicitly, to create amulti-modal gaze-supported zoom interaction for both High-resolutionand Low-resolution Zoom (or what we refer to as Multi-step Zoomand One-step Zoom). This state representation was achieved throughseveral iterations. We also present the implementation of the proposedgaze-supported interaction.5. We analyzed the interfaces of the recent premium to high-end ultra-sound machines used in routine diagnostic ultrasound and observedduring the field study and built a custom hardware controls interfacethat resembles the same design patterns analyzed and contains onlythe main functions needed for evaluating the gaze-supported zoominteractions.6. We presented results of a context-free game-based user study designedto evaluate the interaction with the proposed gaze-supported interfacein isolation to external effects related to sonography. Results from thisuser study showed a potential for a reduced physical interaction andan improved focus on the main tasks. The results also anticipatedrisks related to an unimproved time on task, accuracy and cognitiveload and identified other observed behaviours and challenges relatedto gaze-supported interaction.7. We presented results from a context-focused clinical-based user studyperformed with sonographers, which showed an increased time on taskand cognitive load and identified other areas of improvements wherethis type of interaction has potential, such as to aide student sonogra-phers in learning the machine controls and focusing on the main tasks,1647.3. Contributionsgiven the reduced amount of manual inputs with the gaze-supportedinterface.165Bibliography[1] Chadia Abras, Diane Maloney-Krichmar, and Jenny Preece. User-centered design. Bainbridge, W. Encyclopedia of Human-ComputerInteraction. Thousand Oaks: Sage Publications, 37(4):445–456, 2004.[2] Nicholas Adams, Mark Witkowski, and Robert Spence. The inspectionof very large images by eye-gaze control. In Proceedings of the WorkingConference on Advanced Visual Interfaces, AVI ’08, pages 111–118,New York, NY, USA, 2008. ACM.[3] G. Andreoni, M. Mazzola, S. Matteoli, S. DOnofrio, and L. Forzoni.Ultrasound system typologies, user interfaces and probes design: Areview. Procedia Manufacturing, 3:112 – 119, 2015. 6th InternationalConference on Applied Human Factors and Ergonomics (AHFE 2015)and the Affiliated Conferences, AHFE 2015.[4] G Andreoni, M Mazzola, S Matteoli, S DOnofrio, and L Forzoni. Ultra-sound system typologies, user interfaces and probes design: A review.Procedia Manufacturing, 3:112–119, 2015.[5] Arlene F Aucella, Thomas Kirkham, Susan Barnhart, Lawrence Mur-phy, and Kris LaConte. Improving ultrasound systems by user-centereddesign. In Proceedings of the Human Factors and Ergonomics SocietyAnnual Meeting, volume 38, pages 705–709. SAGE Publications SageCA: Los Angeles, CA, 1994.[6] Patrick Baudisch, Nathaniel Good, Victoria Bellotti, and PamelaSchraedley. Keeping things in context: A comparative evaluation offocus plus context screens, overviews, and zooming. 
In Proceedings ofthe SIGCHI Conference on Human Factors in Computing Systems, CHI’02, pages 259–266, New York, NY, USA, 2002. ACM.[7] Jackson Beatty and Brennis Lucero-Wagoner. The pupillary system.Handbook of psychophysiology, 2:142–162, 2000.166Bibliography[8] Aga Bojko. Eye tracking the user experience: A practical guide toresearch. Rosenfeld Media, 2013.[9] Andreas Bulling and Hans Gellersen. Toward mobile eye-based human-computer interaction. IEEE Pervasive Computing, 9(4):8–12, 2010.[10] William Buxton. Lexical and pragmatic considerations of input struc-tures. ACM SIGGRAPH Computer Graphics, 17(1):31–37, 1983.[11] William Buxton. A three-state model of graphical input. In Human-computer interaction-INTERACT, volume 90, pages 449–456, 1990.[12] Ishan Chatterjee, Robert Xiao, and Chris Harrison. Gaze+ gesture:Expressive, precise and targeted free-space interactions. In Proceedingsof the 2015 ACM on International Conference on Multimodal Interac-tion, pages 131–138. ACM, 2015.[13] Ishan Chatterjee, Robert Xiao, and Chris Harrison. Gaze+gesture: Ex-pressive, precise and targeted free-space interactions. In Proceedings ofthe 2015 ACM on International Conference on Multimodal Interaction,ICMI ’15, pages 131–138, New York, NY, USA, 2015. ACM.[14] CIRS. Multi-purpose multi-tissue ultrasound phan-tom. http://www.cirsinc.com/products/modality/67/multi-purpose-multi-tissue-ultrasound-phantom/. [Online;accessed Oct 12, 2017].[15] Yngve Dahl, Ole A Alsos, and Dag Svanæs. Fidelity considerations forsimulation-based usability assessments of mobile ict for hospitals. Intl.Journal of Human–Computer Interaction, 26(5):445–476, 2010.[16] Heiko Drewes and Albrecht Schmidt. Interacting with the computerusing gaze gestures. Human-Computer Interaction–INTERACT 2007,pages 475–488, 2007.[17] David Fono and Roel Vertegaal. Eyewindows: evaluation of eye-controlled zooming windows for focus selection. In Proceedings of theSIGCHI conference on Human factors in computing systems, pages151–160. ACM, 2005.[18] Classic Gaming. Space invaders online resources. http://www.classicgaming.cc/classics/space-invaders/icons-and-fonts.[Online; accessed Mar 1, 2017].167Bibliography[19] Gazepoint. Gazepoint open gaze api by gazepoint. http://www.gazept.com/dl/Gazepoint_API_v2.0.pdf. [Online; accessed Oct 12,2017].[20] Gazepoint. Gp3 eye tracker. https://www.gazept.com/product/gazepoint-gp3-eye-tracker/. [Online; accessed Oct 12, 2017].[21] Github. Eye movement data analysis toolkit (emdat). https://github.com/ATUAV/EMDAT. [Online; accessed May 29, 2017].[22] Hartmut Glu¨cker, Felix Raab, Florian Echtler, and Christian Wolff.Eyede: gaze-enhanced software development environments. In Proceed-ings of the extended abstracts of the 32nd annual ACM conference onHuman factors in computing systems, pages 1555–1560. ACM, 2014.[23] Joseph H Goldberg and Jonathan I Helfman. Comparing informationgraphics: a critical look at eye tracking. In Proceedings of the 3rd BE-LIV’10 Workshop: BEyond time and errors: novel evaLuation methodsfor Information Visualization, pages 71–78. ACM, 2010.[24] S.L. Hagen-Ansert. Textbook of Diagnostic Sonography. Number v. 1in Textbook of Diagnostic Sonography. Elsevier/Mosby, 2012.[25] Yasmin Halwani, Septimiu E. Salcudean, Victoria A. Lessoway, andSidney S. Fels. Enhancing zoom and pan in ultrasound machines witha multimodal gaze-based interface. In Proceedings of the 2017 CHI Con-ference Extended Abstracts on Human Factors in Computing Systems,CHI EA ’17, pages 1648–1654, New York, NY, USA, 2017. 
ACM.[26] Yasmin Halwani, Tim SE Salcudean, and Sidney S Fels. Multimodalinterface design for ultrasound machines. In Qatar Foundation An-nual Research Conference Proceedings, volume 2016, page ICTSP2476.HBKU Press Qatar, 2016.[27] Dan Witzner Hansen, Henrik HT Skovsgaard, John Paulin Hansen, andEmilie Møllenbach. Noise tolerant selection by gaze-controlled pan andzoom in 3d. In Proceedings of the 2008 symposium on Eye trackingresearch & applications, pages 205–212. ACM, 2008.[28] Sandra G Hart. Nasa-task load index (nasa-tlx); 20 years later. In Pro-ceedings of the human factors and ergonomics society annual meeting,volume 50, pages 904–908. Sage Publications Sage CA: Los Angeles,CA, 2006.168Bibliography[29] GE Healthcare. Logiq e9: Scan assistant. http://www3.gehealthcare.com/en/products/categories/ultrasound/logiq/logiq_e9/video_scan_assistant. [Online; accessed Oct 12, 2017].[30] J Hempel, A Brychtova, Ioannis Giannopoulos, Sophie Stellmach,Raimund Dachselt, et al. Gaze and feet as additional input modali-ties for interacting with geospatial interfaces. 2016.[31] Craig Hennessey. Eye-gaze tracking with free head motion. PhD thesis,2005.[32] Robert JK Jacob. What you look at is what you get: eye movement-based interaction techniques. In Proceedings of the SIGCHI conferenceon Human factors in computing systems, pages 11–18. ACM, 1990.[33] Robert JK Jacob. Eye movement-based human-computer interactiontechniques: Toward non-command interfaces. Advances in human-computer interaction, 4:151–190, 1993.[34] Robert JK Jacob. Eye tracking in advanced interface design. Virtualenvironments and advanced interface design, pages 258–288, 1995.[35] Paul Kabbash, I Scott MacKenzie, and William Buxton. Human perfor-mance using computer input devices in the preferred and non-preferredhands. In Proceedings of the INTERACT’93 and CHI’93 Conferenceon Human Factors in Computing Systems, pages 474–481. ACM, 1993.[36] Yvonne Kammerer, Katharina Scheiter, and Wolfgang Beinhauer.Looking my way through the menu: the impact of menu design andmultimodal input on gaze-based menu selection. In Proceedings of the2008 symposium on Eye tracking research & applications, pages 213–220. ACM, 2008.[37] S Kardan, N FitzGerald, and C Conati. Eye movement data analysistoolkit (emdat) user manual. University of British Columbia, 2012.[38] Dagmar Kern, Paul Marshall, and Albrecht Schmidt. Gazemarks: gaze-based visual placeholders to ease attention switching. In Proceedingsof the SIGCHI Conference on Human Factors in Computing Systems,pages 2093–2102. ACM, 2010.[39] Konstantin Klamka, Andreas Siegel, Stefan Vogt, Fabian Go¨bel, SophieStellmach, and Raimund Dachselt. Look + pedal: Hands-free naviga-tion in zoomable information spaces through gaze-supported foot input.169BibliographyIn Proceedings of the 2015 ACM on International Conference on Mul-timodal Interaction, ICMI ’15, pages 123–130, New York, NY, USA,2015. ACM.[40] Nobuyuki Kobayashi, Eiji Tokunaga, Hiroaki Kimura, Yasufumi Hi-rakawa, Masaaki Ayabe, and Tatsuo Nakajima. An input widget frame-work for multi-modal and multi-device environments. In Software Tech-nologies for Future Embedded and Ubiquitous Systems, 2005. SEUS2005. Third IEEE Workshop on, pages 63–70. IEEE, 2005.[41] Manu Kumar. Gaze-enhanced user interface design a dissertation sub-mitted to the department of computer science and the committee ongraduate studies of stanford university in partial fulfillment of the re-quirements for the degree of doctor of philosophy. 
2007.[42] Manu Kumar, Andreas Paepcke, and Terry Winograd. Eyepoint: prac-tical pointing and selection using gaze and keyboard. In Proceedings ofthe SIGCHI conference on Human factors in computing systems, pages421–430. ACM, 2007.[43] I Scott MacKenzie, Abigail Sellen, and William AS Buxton. A com-parison of input devices in element pointing and dragging tasks. InProceedings of the SIGCHI conference on Human factors in computingsystems, pages 161–166. ACM, 1991.[44] Tobii Eye Tracker User Manual. Clearview analysis softwarecopyright c© tobii technology ab. 2006.[45] Jennifer L Martin, Beverley J Norris, Elizabeth Murphy, and John ACrowe. Medical device development: The challenge for ergonomics.Applied ergonomics, 39(3):271–283, 2008.[46] Emilie Mollenbach, Thorarinn Stefansson, and John Paulin Hansen. Alleyes on the monitor: Gaze based interaction in zoomable, multi-scaledinformation-spaces. In Proceedings of the 13th International Conferenceon Intelligent User Interfaces, IUI ’08, pages 373–376, New York, NY,USA, 2008. ACM.[47] Catherine Plaisant, David Carr, and Ben Shneiderman. Image-browsertaxonomy and guidelines for designers. Ieee Software, 12(2):21–32, 1995.[48] Rahul Rajan, Ted Selker, and Ian Lane. Task load estimation andmediation using psycho-physiological measures. In Proceedings of the170Bibliography21st International Conference on Intelligent User Interfaces, pages 48–59. ACM, 2016.[49] Keith Rayner. Eye movements in reading and information processing:20 years of research. Psychological bulletin, 124(3):372, 1998.[50] Andre Russo, Carmel Murphy, Victoria Lessoway, and JonathanBerkowitz. The prevalence of musculoskeletal symptoms among britishcolumbia sonographers. Applied Ergonomics, 33(5):385 – 393, 2002.[51] Anthony Santella, Maneesh Agrawala, Doug DeCarlo, David Salesin,and Michael Cohen. Gaze-based interaction for semi-automatic photocropping. In Proceedings of the SIGCHI Conference on Human Factorsin Computing Systems, CHI ’06, pages 771–780, New York, NY, USA,2006. ACM.[52] Anthony Santella, Maneesh Agrawala, Doug DeCarlo, David Salesin,and Michael Cohen. Gaze-based interaction for semi-automatic photocropping. In Proceedings of the SIGCHI conference on Human Factorsin computing systems, pages 771–780. ACM, 2006.[53] Selina Sharmin, Oleg Sˇpakov, and Kari-Jouko Ra¨iha¨. Reading on-screentext with gaze-based auto-scrolling. In Proceedings of the 2013 Confer-ence on Eye Tracking South Africa, pages 24–31. ACM, 2013.[54] Sophie Stellmach and Raimund Dachselt. Investigating gaze-supportedmultimodal pan and zoom. In Proceedings of the Symposium on EyeTracking Research and Applications, ETRA ’12, pages 357–360, NewYork, NY, USA, 2012. ACM.[55] Sophie Stellmach and Raimund Dachselt. Look & touch: gaze-supported target acquisition. In Proceedings of the SIGCHI Confer-ence on Human Factors in Computing Systems, pages 2981–2990. ACM,2012.[56] Sophie Stellmach, Sebastian Stober, Andreas Nu¨rnberger, and RaimundDachselt. Designing gaze-supported multimodal interactions for the ex-ploration of large image collections. In Proceedings of the 1st Conferenceon Novel Gaze-Controlled Applications, NGCA ’11, pages 1:1–1:8, NewYork, NY, USA, 2011. ACM.[57] Veronica Sundstedt. Gazing at games: using eye tracking to controlvirtual characters. In ACM SIGGRAPH 2010 Courses, page 5. ACM,2010.171Bibliography[58] Yan Tan, Geoffrey Tien, Arthur E Kirkpatrick, Bruce B Forster, andM Stella Atkins. Evaluating eyegaze targeting to improve mouse point-ing for radiology tasks. 
Journal of digital imaging, 24(1):96–106, 2011.[59] Samuel Radiant Tatasurya. Multimodal graphical user interface forultrasound machine control via da vinci surgeon console: design, devel-opment, and initial evaluation, Aug 2015.[60] Dereck Toker, Cristina Conati, Ben Steichen, and Giuseppe Carenini.Individual user characteristics and information visualization: connect-ing the dots through eye tracking. In proceedings of the SIGCHI Confer-ence on Human Factors in Computing Systems, pages 295–304. ACM,2013.[61] Irene Tong, Omid Mohareri, Samuel Tatasurya, Craig Hennessey, andSeptimiu Salcudean. A retrofit eye gaze tracker for the da vinci andits integration in task execution using the da vinci research kit. InIntelligent Robots and Systems (IROS), 2015 IEEE/RSJ InternationalConference on, pages 2043–2050. IEEE, 2015.[62] Ultrasonix. Touch screen ultrasound system. http://www.ultrasonix.com/. [Online; accessed Oct 12, 2017].[63] Tamara van Gog, Liesbeth Kester, Fleurie Nievelstein, Bas Giesbers,and Fred Paas. Uncovering cognitive processes: Different techniquesthat can contribute to cognitive load research and instruction. Com-puters in Human Behavior, 25(2):325 – 331, 2009. Including the SpecialIssue: State of the Art Research into Cognitive Load Theory.[64] F Vannetti, T Atzori, G Pasquini, L Forzoni, L Modi, and R Molino-Lova. Superficial electromyography and motion analysis technologiesapplied to ultrasound system user interface and probe ergonomics eval-uation. Advances in Human Aspects of Healthcare, 3:227, 2014.[65] Oleg Sˇpakov. Comparison of eye movement filters used in hci. In Pro-ceedings of the Symposium on Eye Tracking Research and Applications,ETRA ’12, pages 281–284, New York, NY, USA, 2012. ACM.[66] Kimberly P Wynd, Hugh M Smith, Adam K Jacob, Laurence C Tor-sher, Sandra L Kopp, and James R Hebl. Ultrasound machine compar-ison: an evaluation of ergonomic design, data management, ease of use,and image quality. Regional anesthesia and pain medicine, 34(4):349–356, 2009.172Bibliography[67] Shumin Zhai. What’s in the eyes for attentive input. Communicationsof the ACM, 46(3):34–39, 2003.[68] Shumin Zhai, Carlos Morimoto, and Steven Ihde. Manual and gazeinput cascaded (magic) pointing. In Proceedings of the SIGCHI Con-ference on Human Factors in Computing Systems, CHI ’99, pages 246–253, New York, NY, USA, 1999. ACM.[69] Dingyun Zhu, Tom Gedeon, and Ken Taylor. moving to the centre: Agaze-driven remote camera control for teleoperation. Interacting withComputers, 23(1):85–95, 2010.173Appendix APixel-angle AccuracyConversion for Eye GazeTracking ApplicationsFigure A.1: Pixel-angle Conversion ParametersIn our user studies, we use Gazepoint eye gaze tracker, which has anangle error of 0.5 to 1 degrees. In this appendix, we calculate the errorin pixels based on the apparatus used and explained in Chapters 5 and 6.We use the equations presented by Hennessy in [31]. An illustration of the174Appendix A. Pixel-angle Accuracy Conversion for Eye Gaze Tracking Applicationsparameters used for the calculations is shown in Figure A.1. 
A description of those parameters is the following:

• POG: Point of Gaze
• Ref: Reference Point
• ∆X: X Error
• ∆Y: Y Error
• θ: Visual angle error = 1 degree
• d: Distance to screen = 60 cm

\Delta X_{cm} = \frac{57.2\,\mathrm{cm}}{1920\,\mathrm{px}}\,\Delta X_{px} = 0.0298\,\Delta X_{px}

\Delta Y_{cm} = \frac{41.8\,\mathrm{cm}}{1080\,\mathrm{px}}\,\Delta Y_{px} = 0.0387\,\Delta Y_{px}

\theta = 2\arctan\left(\frac{\sqrt{\Delta X_{cm}^2 + \Delta Y_{cm}^2}}{2d}\right)

Assuming the error is the same number of pixels in both directions (∆Xpx ≈ ∆Ypx), and denoting this common value ∆Epx:

\theta = 1^\circ = 2\arctan\left(\frac{\Delta E_{px}\sqrt{0.0298^2 + 0.0387^2}}{2 \times 60}\right)

\Delta E_{px}\,\sqrt{0.0298^2 + 0.0387^2} = 2 \times 60 \times \tan(0.5^\circ) = 1.0472

\Delta E_{px} = \frac{1.0472}{\sqrt{0.0298^2 + 0.0387^2}} \approx 21.44\ \mathrm{px}

Therefore, the error in pixels associated with the apparatus used in this thesis is a radius of approximately 21.44 pixels.

Appendix B

Sonographers-Radiologists Survey

About the Survey

The aim of this survey is to understand ultrasound machine interface design from a user's perspective. We are currently studying how eye gaze trackers could improve interaction with ultrasound machines from a performance and ergonomic perspective. The questions relate to your daily use of ultrasound machines, musculoskeletal disorders due to work injuries, and current technologies used to improve the use of ultrasound machines. Your contribution and feedback are highly valued!

Section 1: General Information

1. Please select your current occupation
• Sonographer
• Radiologist
• Cardiologist
• Maternal Fetal Medicine Specialist
• Student sonographer
• Instructor
• Other. Please specify: ......

2. In what age group are you?
• 20 - 29
• 30 - 39
• 40 - 49
• 50 - 59
• 60 +

3. Gender:
• Female
• Male

4. Please specify your years of experience as a radiologist.
• Not applicable
• Less than 2
• 2 - 5
• 6 - 10
• 11 - 20
• 21 - 30
• > 30

5. Sonographers: what types of ultrasound scans do you typically perform? (please select all that apply)
• General
• Cardiac
• Obstetric/Gynaecologic
• Vascular
• MSK

6. Radiologists: what types of interventional procedures do you typically perform? ......

7. Sonographers: on average, how long is the typical scanning time for an ultrasound scan?
• < 10 minutes
• 10 - 20 minutes
• 20 - 40 minutes
• > 40

8. Radiologists: on average, how long is the interventional procedure?
• < 10 minutes
• 10 - 20 minutes
• 20 - 40 minutes
• > 40

Section 2: Ergonomics and Work-Related Musculoskeletal Disorders (WRMSDs)

1. What is your typical work schedule?
• Number of days per week: ......
• Number of hours per day: ......

2. How often do you experience stress injuries or WRMSDs you believe are due to your career?
• I have never experienced any WRMSDs or stress injuries due to my career.
• Once or twice throughout my whole career.
• Once every few years.
• Once or twice per year.
• Multiple times a year.
• Continuously for ...... years.

3. How severe would you classify your WRMSDs?
• Very painful and sometimes I have to take leaves from work.
• Painful, but I can still manage the day.
• Only a little distracting.
• Intermittent.
• Not applicable.

4. What caused most of these injuries/disorders? Check all that apply.
• Repetitive movement due to repetitive menu selection and button interactions.
• Poor equipment design, such as height and location of the monitor, poor transducer grip design, chair height, patient bed location, etc.
• Infrequent work breaks or short recovery time between scans.
• Poor or awkward posture due to the type of scans performed.
• Sustained force and pressure.
• Other. Please specify: .....

Section 3: Efficient and Improved Ultrasound Interfaces

1.
General Ultrasound Machine Usage and Familiarity(a) As an estimate, what is the percentage of settings that you fre-quently use in an ultrasound machine out of all the settings youare familiar with?• 10% - 30%• 30% - 60%• 60% +(b) Which buttons, functions or features do you use most frequently?(i.e. at least once per scan in > 90% of all scans)......(c) Which buttons, functions or features do you use occasionally?(i.e. at least once per scan in 40% - 90% of all scans)......(d) Which buttons, functions or features do you use rarely? (i.e. atleast once per scan in < 40% of all scans)......(e) Based on your own experience, if you were provided with a newultrasound machine with a slightly different interface (in terms ofthe layout of the buttons and/or the software interface) than theone you are used to and have been using for most of your work,how long do you think it would take you to find your way aroundthe different settings you often use in your scans/procedures?i. Less than one working day (I can find my way around a newultrasound machine very easily)ii. About a few days to a week (I can find my way around a newultrasound machine easily, but I face some struggles some-times)iii. More than a week (I find it really hard to adjust to a newultrasound machine interface)(f) Please provide your rating for the following, where 1 = HighlyDisagree, 7 = Highly Agree.i. Scanning anatomical structures that are in constant motioncan be a time-sensitive task that requires an efficient andresponsive user interface.1 - 2 - 3 - 4 - 5 - 6 - 7180Appendix B. Sonographers-Radiologists Surveyii. Ultrasound foot-switches can be helpful in repetitive tasks(such as freeze or print)1 - 2 - 3 - 4 - 5 - 6 - 7iii. Sometimes I have to go through a lot of steps (through in-terface menus) to select a particular setting.1 - 2 - 3 - 4 - 5 - 6 - 7If you selected 5 or higher, please provide examples: ......iv. I switch my attention between the monitor and the ultra-sound interface buttons very often and it gets distractingsometimes1 - 2 - 3 - 4 - 5 - 6 - 7v. I switch my attention between the monitor and the ultra-sound interface buttons very often and it makes me lose focusof important image details sometimes1 - 2 - 3 - 4 - 5 - 6 - 7(g) Please describe the hardware or software interface features insome ultrasound machines you worked with which you think arevery efficient and ergonomically convenient in comparison to otherultrasound machines. (E.g. the keyboard is located on the samepanel as the rest of the buttons, the touch screen provides quicksuggestions for annotations, patient data is loaded automatically,etc.) ......(h) Please describe the hardware or software interface features insome ultrasound machines you worked with which you think areNOT efficient and not ergonomic in comparison to other ultra-sound machines. (E.g. keys are distributed in a non-intuitiveway, using the touch screen frequently is distracting, etc.) ......2. Evaluation of Existing Improvements to Ultrasound Machine Inter-faces(a) Have you used automated ultrasound exam software such as the“Scan Assistant” that is available with some GE machines?• Yes• No (skip to question (e))(b) What are the types of scans that you typically use this type ofsoftware for? ......(c) From your experience, what is the added benefit from using thissoftware? (Check all that apply)181Appendix B. 
Sonographers-Radiologists Survey• Significantly shortens the duration of an ultrasound exam.• Contributes to lowering the risk of WRMSDs as it lowersthe amount of repetitive movements with physical ultrasoundmachine interface buttons.• Helps me not to forget any steps for a particular ultrasoundexam and organizes my work flow• Helps me focus more on the patient• Other: ......(d) Can you think of any drawbacks of automating the ultrasoundexam steps using software like Scan Assistant? .....(e) Have you used any voice-enabled ultrasound machines? For ex-ample, the speech recognition feature in Philips iU22.• Yes. Could you please describe your experience and why (orwhy not) you would prefer to use this feature during yourscans or procedures: ......• No.3. Hands-free Interaction with Ultrasound Machines (Radiologists)(a) Do any of your scans or procedures require a sterile environmentor sterile equipment?• Yes. Examples? ......• No(b) Is there a need for adjusting ultrasound image settings and pa-rameters during an interventional procedure? If yes, what are thetypical settings changed?• Yes. All types of procedures frequently require it.• Yes, but the frequency of this need changes based on the typeof interventional procedure being performed• No, most of the procedures have image settings pre-set. Theremight be exceptions though.• No, not at all.• Other. Specify: ......(c) Are there any cases where an assistant (e.g. another sonographer)is required for your scans or procedures? (If yes, continue. If no,skip the rest of this section)• Yes182Appendix B. Sonographers-Radiologists Survey• No (skip to the next section)(d) Are there any cases when it is difficult to communicate intent toan assistant?• No, instructions are very straightforward.• It depends on the experience and background of the assistant.• Yes, but it’s tolerable and does not affect the flow of theprocedure.• Could you please provide examples? ......• Yes, it would be much easier if I could change the parametersdirectly.• Could you please provide examples? ......(e) Would having some hands-free control method to the ultrasoundmachine reduce your need for an assistant?• Yes, significantly.• Yes, but the assistant might still know better in terms ofultrasound machine settings control.• No, I do not prefer to interact with the ultrasound machineat all.Section 4: Efficient and Improved Ultrasound InterfacesDo you have any other observations, comments. thoughts, suggestions, orideas you’d like to offer on these topics that might be of use in ourresearch? ......183Appendix CGeneral Survey Feedbackfrom SonographersThis appendix lists down all the answers received to the last question inour sonographers survey, as discussed in Chapter 3: “Do you have anyother observations, comments. thoughts, suggestions, or ideas you’d like tooffer on these topics that might be of use in our research?”.• “It’s definitely time for a major change! I think voice recognition com-mand with a headset would be very beneficial, in addition to reinventingthe foot pedal for freeze and cine and finding ways to compress dataand simply record exams rather than freezing and printing images fornormal exams.”• “Gel warmer on right side.”• “Toggles are easier to work with than knobs.”• “Probe design that is lightweight, wireless if possible and fits into handswell.”• “Does the monitor have to be attached to the machine? For someexams, a ceiling suspended monitor may be more convenient.”• “Ultrasound exams by their nature do not follow the same order. 
Mul-tiple methods need to be looked at in order to address the wide spectrumof exams. Scan assist, foot switches, and voice command have all beenused. Each of these technologies have uses and drawbacks. Simplifiedkeyboards with multi-use keys have been used on several systems. Thismethod has not been very effective as most technologist prefer the abil-ity to see every available parameter. Single optimization buttons havehelped but cannot completely replace the technologist’s ability to changeparameters to suit every patient body type. Effective layout of a key-board with the ability to adjust based on body type of the technologist isthe best option. Areas currently needing consideration are VR goggles,184Appendix C. General Survey Feedback from Sonographerscordless transducers and robotic exoskeletons. These technologies arecurrently being developed or researched. In the development of thesenew technologies, engineers need to consider the interaction betweentechnologist and the patient. Important information is always discov-ered during the casual conversations during the exam. Care must betaken in order to keep barriers to a minimum. Adoption of new meth-ods has always been problematic. Technologists as a rule develop theirown personal routines for exams. These routines ensure that noth-ing is missed during the exam. New technologies should be developedto enhance their ability while allowing technologists to maintain theircurrent work flow. This could improve the adoption of new technolo-gies.”• “It will be great with hands free typing program like voice recognition.”• “The buttons shouldn’t be hard to press. The panel of the machineshould be able to adjust with ease. The wheels on the machine shouldbe smooth and easy to steer. The screen can be tilted to any angle.”• “Voice recognition would be useful for interventional work. the abilityof the system to know when you would want to make an adjustmentsuch as depth of field, focus or overall gain. The image optimizationbutton does this to somewhat of extent using voice for printing insteadof a button. The image should not be blurry as it is sometimes. Storageof video clips via voice (especially fetal echoes).”• “Voice recognition might be nice, but only if you can get a patient to bequiet. We do children and infants, so not sure how that would work.”• “It certainly sounds very interesting. Again, I would like to see afeature of the machine that either measure circumference either bytouch screen, or auto measure (like NT measurement package on theGE feature).”• “The machines need to cater to taller sonographers. It would be niceto get the machine closer to the bed. Also more efficient design of keyson the console would be helpful.”• “It is difficult when a machine is not conducive to being moved to eitherside of a bed. Doppler of the arm is an example when this is needed,or ICU portables. Machines need to have more similar interfaces sothat one can easily move from one type of machine to another.”185Appendix C. General Survey Feedback from Sonographers• “Reduce the size of the machine. Make sure all user parts are wireless(transducer, monitor to cart, ...etc.) Ask the manufacturers to reallyleverage computer and mobile phone technologies in designing ultra-sound machines. There is no need for 100kg machines, or transducersthat need a “fishing pole” to hold the cable.”• “A machine that has a pull out/retractable area for your feet to reston. Even better ability to scan large patients without using all of ourown arm strength. 
I have already had to have RTC surgery due to mywork.”• “The machines need to be smaller (or maybe they are, but my hospitaldoesn’t have the money to get them) and lighter. It would be nice tosqueeze a machine into an ICU room between the 2 IV poles, the vent,and all the other equipment that is on the side of the bed that I wantmy machine on. A lighter machine that can be moved with one handwould help with every DVT study where you start at the groin, butneed to move all the way to the calf. Good luck with your research, Ihope we (sonographers) see the benefits in the years to come.”• “Probe cords are heavy: a cordless probe would be nice but only if theprobe itself was as light and small as it is right now.”• “I am 6’2 The machines tend not to go high enough for me to standat a comfortably level.”• “I have had trouble with Lt elbow pain. I relate to extending my armfor annotations, Rt arm/neck pain from pressing with probe. If themachine can penetrate with less force on obese patients it would help.Ergonomically, the ports need to be easier to access and more of themso you do not need to bend and change as often. Lighter body for mov-ing and locking mechanisms that are sturdy. And maybe machine/bedcombo that has a cut out bed, in order to be closer to patients and havethe keyboard so you are looking ahead instead of arms spread and bodyturned. Good luck with that!”186Appendix DFirst-iteration Clinical UserStudyIn this appendix, we present our user study with sonographers during thefirst iteration of our user-centred design.To evaluate the designed systems, we designed a controlled user study tocompare both of the design alternatives. The aim of the study is toobserve the time on task and interaction repetitiveness between the basesystem (the manual-based system) and the gaze-supported system and toreceive qualitative feedback from sonographers on the introducedinteraction that will guide us for the second design iteration.D.1 Gaze-supported Interface DesignThe interface design is the same as detailed in System Design (Chapter 4)as alternative 1.In this alternative, it is important to note that the user is not receivingany visual feedback of where their current point of gaze is within the imagefor the One-step Zoom alternative. The only feedback that the user isreceiving regarding their eye gaze is whether or not the eye tracker has lostthe user’s eyes through two red circles shown at the bottom of theinterface, as shown in Figure D.1 (a). This feedback is also implementedwith the second alternative.D.2 ApparatusAs for the hardware, we used the Gazepoint GP3 eye gaze tracker [20] withthe accompanying Open Gaze API. The system was implemented anddisplayed on the screen of the Ultrasonix touch [62] machine. Theultrasound image was streamed through Ulterius API. The software waswritten in Python and the interface implemented with PyQt.187D.2. ApparatusFigure D.1: The Ultrasound Machine and System Components Used for theImplementation of Our Systems and User Study (Left). The Control KeysPanel of the Ultrasound Machine Used in Our Users Study (Right).It is important to note that this ultrasound machine is not the typical typeof machine used in diagnostic sonography, but is often used in point of careand by non-sonographers due to its simple touch-based interface with afew physical buttons and limited capabilities. 
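For reference, the following is a minimal sketch, not taken from the thesis software, of how a Python client might consume gaze samples from the Gazepoint Open Gaze API [19] used with this apparatus. The TCP endpoint, the XML commands and the record fields (FPOGX, FPOGY, FPOGV) follow the publicly documented API, but the exact names and defaults should be treated as assumptions and verified against the tracker's documentation.

```python
import re
import socket

HOST, PORT = "127.0.0.1", 4242  # default Open Gaze API endpoint (assumption)


def _attr(record: str, name: str):
    """Pull a single attribute value out of an XML record string."""
    m = re.search(rf'{name}="([^"]*)"', record)
    return m.group(1) if m else None


def stream_fixations():
    """Yield valid fixation points of gaze as (x, y) in normalized screen coordinates."""
    sock = socket.create_connection((HOST, PORT))
    # Ask the tracker to start streaming fixation point-of-gaze records.
    sock.sendall(b'<SET ID="ENABLE_SEND_POG_FIX" STATE="1" />\r\n')
    sock.sendall(b'<SET ID="ENABLE_SEND_DATA" STATE="1" />\r\n')
    buffer = ""
    while True:
        buffer += sock.recv(4096).decode("ascii", errors="ignore")
        *records, buffer = buffer.split("\r\n")   # keep any trailing partial record
        for rec in records:
            if not rec.startswith("<REC"):
                continue
            x, y, valid = _attr(rec, "FPOGX"), _attr(rec, "FPOGY"), _attr(rec, "FPOGV")
            if valid == "1" and x is not None and y is not None:
                yield float(x), float(y)


if __name__ == "__main__":
    # Example usage: print the first ten fixation points.
    for i, (x, y) in enumerate(stream_fixations()):
        print(f"fixation at ({x:.3f}, {y:.3f})")
        if i >= 9:
            break
```

A generator of this kind could feed both the gaze-supported zoom interactions and the gaze logging used for later analysis.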
To resemble the multipurpose-button interface of the ultrasound machines typically used in diagnostic sonography, we prototyped and 3D-printed a set of buttons around the trackball of the Ultrasonix Touch machine that captured these characteristics, as shown in Figure D.1. Figure D.1 also shows the complete setup and the position of the eye tracker relative to the ultrasound machine monitor.

The functionalities of the buttons are slightly different between OZ and MZ. Table D.1 shows the functions mapped to each button on the ultrasound controls panel.

Table D.1: The Dedicated Buttons' Functionalities Based on the System Used

System                 Button           Functionality
One-step Zoom (OZ)     A                Zoom in
                       D                Zoom out
Multi-step Zoom (MZ)   A                Zoom in / out
                       D                Hold
                       Trackball        Resize
OZ and MZ              S                Scroll (Pan)
                       G                Capture Image
Ultrasonix System      A, D, S, G       Inactive
                       Trackball        Move / resize
                       Cursor button    Toggle between Move / resize
                       Refresh button   Zoom in / zoom out
                       1                Capture Image
All Systems            Freeze button    Freeze image

D.3 Procedure

A discussion with each participant followed the experiment to gather their general impression of the new eye tracking-based system and identify its strengths and drawbacks given their experience and background in sonography.

A total of 6 participants, 1 male and 5 female, took part in the user study. Four of the participants completed the user study wearing glasses and one completed it wearing contact lenses. All participants except one student sonographer were expert sonographers who perform obstetric exams, general exams, or both.

For simplicity, and given the availability of phantoms and the evidence from previous literature [66] on their use for ultrasound machine interface usability evaluation, the tasks were performed with a CIRS [14] quality assurance phantom. A phantom is a specially designed object, made of material that mimics real tissue, used in medical training in place of a patient. It contains several targets that can be acquired with US imaging.

For each system tested, the participant sonographer was first given the time they needed to get familiar with the interface by freely exploring the phantom and acquiring and zooming into targets. Afterwards, the participant was asked to zoom into and capture three predefined targets, as shown in Figure D.2.

Figure D.2: The Quality Assurance Phantom Used in the User Study and the Corresponding Ultrasound Images of the First, Second and Third Targets.

The participant could change the image parameters related to TGC and gain, but only before starting the zoom acquisition tasks, so that the zoom interaction could be isolated from other ultrasound machine-related interactions for later analysis.

Each sonographer carried out all three tasks on all three systems. The first system the participants used was the conventional ultrasound machine interface, which we will refer to as the "base system". This acts as a baseline for comparisons with subsequent interactions with OZ and MZ. The second system alternated between OZ and MZ: half of the participants tested OZ as the second system and MZ as the third, and the other half vice versa.

D.4 Results

For each system, the average completion time and button hits across the three tasks, including the use of the trackball, were recorded.

D.4.1 Time on Task

We observed that the average task completion time using the base system is the shortest (M = 14.45, SD = 5.72), followed by OZ (M = 21.15, SD = 12.1), with MZ the longest (M = 40.65, SD = 18.36).
The higher times associated with the gaze-supported alternatives could be due to the participants' unfamiliarity with the system when using the multi-modal gaze-based interface.

D.4.2 Button Hit Rate

Given the nature of the eye gaze-supported design, we observed that the rate of button presses using the base system is the highest (M = 0.51, SD = 0.12), followed by OZ (M = 0.3, SD = 0.07), with MZ the lowest (M = 0.25, SD = 0.05). To provide a deeper understanding of the input rate, Table D.2 shows the averages over all tasks for the number of buttons hit, completion time and input rate for each of the systems tested.

Table D.2: The averages of participants' task results for the number of buttons hit, completion time and input rate for each of the systems tested.

Technique   Button hits   Time (s)   Input rate
Base        7.83          15.45      0.51
OZ          5.72          21.15      0.27
MZ          8.78          40.65      0.23

D.4.3 Qualitative Feedback and Discussions

In terms of qualitative feedback, participants showed a preference for OZ over MZ. P3 stated:

"I liked OZ the most out of the three. I found it to be the least visually-distracting compared to MZ. When we are scanning, we are doing a lot of visual assessment of the tissue itself. Extra overlays that take us away from seeing tissue pathology might be a negative distraction."

Latching the zoom box movement to the eye movement, even with filtering and workarounds to reduce the distraction factor as explained in the design section, was still perceived as highly distracting by all participants. Another phenomenon we observed is the participants' struggle to place the window perfectly around the target before confirming the zoom action, despite the fact that they could later refine the location through the panning feature. Although the zoom box is present in the base system, it moves only in direct response to trackball movement, which is intentionally initiated by the user; thus, no participants reported distraction with the base system.

In terms of panning, there was a variety of feedback regarding the usefulness of the feature. The majority of the sonographers found it very helpful, as it reduces the need to adjust the probe to move the image once the image is zoomed in. On the other hand, other sonographers found it somewhat unnecessary, since they prefer using the probe to move the image around.

As for the time measurements, we notice very high standard deviations for all interfaces, which suggests that there might be other factors influencing the time taken to complete a task that we did not take into account, such as the user's adaptability or tiredness.

One interesting behaviour we observed during the user study is the participants' change in posture when using the gaze-based systems in comparison to the base system. Participants seemed more aware of their posture, keeping their head within the field of view of the eye tracker. Some participants did not glance at the keys at all, as they did not want the gaze tracker to lose their gaze while glancing elsewhere. This is an example of an unnatural behaviour resulting from using eye tracking systems, which calls for gaze trackers with a wider field of view or interface enhancements that are less sensitive to head movements.
Nevertheless, P4 found that as apositive result of using gaze-based systems, which can implicitly alert thesonographer to always stay in an upright posture to avoid occupationalback injuries.As for qualitative feedback, participants preferred simpler gaze-basedsystems that do not have any visual feedback of where they are looking. Inother words, they prefer to “trust the eye tracker” to determine where theyare looking and not provide distracting feedback surrounding areas ofinterest.D.5 Improvements for the Second IterationWe observed several issues with the design, user study structure andevaluation from our first iteration. In our second design and evaluationiteration, we target to improve the following challenges we faced:192D.5. Improvements for the Second IterationApparatus Providing the participants with an ultrasound machineinterface with full capabilities has its advantages and drawbacks. Theadvantage is designing a more realistic user study environment and gettingmore realistic results of the user interaction. However, a drawback, whichoutweighed the advantage at this stage of research, is the participants’distraction from the intended gaze-supported interaction we are testing:the participant sonographers spent some of the user study and task timeinteracting with other ultrasound features to adjust the image parameters,which took away from their focus on the zoom feature being evaluated.One way to overcome this challenge is limiting the physical interface to anumber of ultrasound functions we are interested in testing, which wedesign in our second design iteration.Additionally, the physical interface layout of the base interface is differentfrom the physical interface layout of the gaze-supported systems: we addedextra buttons that are not normally part of the ultrasound machineinterface and that added some confusion to the participants.We also decide to remove in our second iteration the gaze indicators, as wefound that participants rarely used them and mainly relied on instructionsfrom the researcher regarding their optimal posture for eye gaze detection,given the little amount of time they are allowed to interact with and theirunfamiliarity with the system.User Study Structure Involving the end users in the design andevaluation iterations is essential in the user-centred design we adopt.However, we realize that we might be able to better identify and isolatethe gaze-supported interaction effects by running evaluations separate fromthe context of sonography. This is due to the discussed cognitive loadfactors in sonography including concurrent image analysis and patientinteraction, which could mask the real effects of the newly-introducedinteraction design. In our second iteration, we design a user study thattargets lay users and abstracts the interaction away from sonography. Thepremise is that if the abstract gaze-supported interaction yields positiveresults with lay users, then running a user study with sonographers haspotential in an improved interaction. Otherwise, if the gaze-supportedinteraction in a simple abstract setting is not successful, then it will nothave great potential with the added sonography-related cognitive load.Another issue with the user study structure is the sample size and thenumber of targets acquired per participant. We acknowledge that theanalyzed data set is too small and that our statistical analysis is at a very193D.5. 
Improvements for the Second Iterationpreliminary stage that does not allow for reliable statistical conclusions yet.User Study Tasks One important issue with our user study design isthe lack for defined and uniform training tasks across all participants.Allowing the participants to interact with the system “as long as they needto learn it” is not a useful instruction for users of a system for the firsttime. We observed that participants committed multipleinteraction-related errors while performing the tasks. These errors could besignificantly reduced if we define training tasks sufficient for theparticipants to follow and learn the system interaction.Evaluation Although we measure the button hit rate, we do not takeinto account the amount of time a button is held down, which is anotherform of physical interaction. Additionally, repetitive interaction is not asuitable measure, as other issues rise with low repetitiveness, such asunnatural postures and extended focus on the gaze interaction, we suspectanother type of mental cognitive load rising as the physical repetitivenessdecreases.194Appendix EGame User Study ScriptE.1 Participant Recruitment EmailHello awesome participant!Thank you for your interest in participating in the user study!Before I sign you up, I have to first make sure you are eligible toparticipate. Please reply back to this email with answers to the followingquestions:1. Do you wear glasses?2. Do you wear bifocal/gradual glasses (the ones for both far-sighted andshort-sighted vision correction)3. Do you have any abnormal (whether diagnosed or undiagnosed) eyecondition? (e.g. lazy eye after fatigue)4. Do you have any left arm/hand/fingers injury? Do you have any painassociated with the movement of your left arm/hand/fingers for anyreason?5. Do you have any previous experience with operating ultrasound ma-chines? (operating the machine itself, not being the patient)Please select a time from the available times below. It is recommendedthat you choose a time within the period of your best performance duringthe day, if possible, as you will be actively learning and applying newsimple tasks (no previous knowledge required).Time Schedule: https://doodle.com/poll/gh7eke9hgexk232kThe consent form describes the experiment procedure of the general largerproject, which is aimed at enhancing the interaction of clinicians withultrasound machines. The experiment procedure for this user study isessentially the same, but designed for non-clinicians with no background ofoperating ultrasound machines (therefore it is an interactive gameinstead). Attached is your copy of the consent form for your reference.195E.2. OZ Preparation SettingsThank you again for dedicating the time to play video games for scienceand research!E.2 OZ Preparation Settings• Disable the zoom knob press + gaze button press• Python constants: GRAPHICAL EXPERIMENT = 1• Run g nob fastZoom.pyE.3 MZ Preparation Settings• Disable the zoom knob rotations• Python constants: GRAPHICAL EXPERIMENT = 2• Run g nob detailedZoom.pyE.4 Before the Participant’s Arrival1. Raise the participant’s monitor to a viewable level.2. Switch the researcher’s monitor away (to not distract the participantwhile using the system).3. Place the controls box in front of the participant + make sure it’sstable.4. Hardware: connect the eye tracker and the controls box.5. Prepare the consent form, evaluation form, reward form, researcher’sform and participant reward.6. Block any direct light source to the eye gaze tracker.7. 
Appendix E
Game User Study Script

E.1 Participant Recruitment Email

Hello awesome participant!

Thank you for your interest in participating in the user study! Before I sign you up, I have to first make sure you are eligible to participate. Please reply to this email with answers to the following questions:

1. Do you wear glasses?
2. Do you wear bifocal/gradual glasses (the ones for both far-sighted and short-sighted vision correction)?
3. Do you have any abnormal (whether diagnosed or undiagnosed) eye condition? (e.g. lazy eye after fatigue)
4. Do you have any left arm/hand/finger injury? Do you have any pain associated with the movement of your left arm/hand/fingers for any reason?
5. Do you have any previous experience with operating ultrasound machines? (operating the machine itself, not being the patient)

Please select a time from the available times below. It is recommended that you choose a time within the period of your best performance during the day, if possible, as you will be actively learning and applying new simple tasks (no previous knowledge required).

Time Schedule: https://doodle.com/poll/gh7eke9hgexk232k

The consent form describes the experiment procedure of the general larger project, which is aimed at enhancing the interaction of clinicians with ultrasound machines. The experiment procedure for this user study is essentially the same, but designed for non-clinicians with no background in operating ultrasound machines (it is therefore an interactive game instead). Attached is your copy of the consent form for your reference.

Thank you again for dedicating the time to play video games for science and research!

E.2 OZ Preparation Settings

• Disable the zoom knob press + gaze button press
• Python constants: GRAPHICAL_EXPERIMENT = 1
• Run g_nob_fastZoom.py

E.3 MZ Preparation Settings

• Disable the zoom knob rotations
• Python constants: GRAPHICAL_EXPERIMENT = 2
• Run g_nob_detailedZoom.py

E.4 Before the Participant's Arrival

1. Raise the participant's monitor to a viewable level.
2. Turn the researcher's monitor away (to not distract the participant while using the system).
3. Place the controls box in front of the participant + make sure it is stable.
4. Hardware: connect the eye tracker and the controls box.
5. Prepare the consent form, evaluation form, reward form, researcher's form and participant reward.
6. Block any direct light source to the eye gaze tracker.
7. Run the following software:
   (a) Gazepoint Analysis
   (b) Gazepoint Control
   (c) Windows Media Player (for the game soundtracks)
   (d) PyCharm: run the Python program of the particular experiment (make sure LOG_RESULTS_FLAG is set to True)
   (e) Delete all temporary data left over from the previous participant.

E.5 After the Participant's Arrival

1. Provide consent form
2. Provide participation reward
3. Provide demographics survey
4. Check that the eye tracker can see the eyes and can calibrate
5. Start timer
6. Introduce user study ("you are here to play Space Invaders!") + introduce structure (2 sessions, each with 2 games; after each session you will fill out the following form)
7. Present evaluation form + give the participant a minute to read the TLX reference + mention that the participant is not being personally evaluated and all data will be kept confidential
8. Adjust seating in front of screen:
   (a) Left hand can reach controls
   (b) Eye tracker can see eyes
   (c) Ask the participant to clean their glasses if wearing any
   (d) Seat height is comfortable
9. Test calibration
10. Test volume
11. Start screen capture
12. Turn off lights above experiment area + close curtains
13. Prepare laptop with Word document for discussion + any other notes during the sessions

E.6 Manual-based Interaction Session

1. Run demo
   (a) Software:
      i. Explain the game
      ii. Show targets
      iii. Show timer
      iv. Show level
      v. Show level progress
      vi. Timeout results in: losing a life + repeating the whole level
      vii. Shooting and scoring results in: level progress bar going up
   (b) Hardware:
      i. Zoom in and out
      ii. Trackball to pan
      iii. Red button to shoot
         A. "You can shoot a monster only when it turns purple"
         B. "It turns purple only when:"
            • "The full alien is within the view"
            • "The alien is at a particular zoom level"
   (c) Demo 3 aliens
   (d) Other instructions:
      i. "You are only allowed to use your left hand"
      ii. "I will not respond to any questions or comments during your gaming sessions"
      iii. "If you have any questions about the game design, you can ask after the experiment is done"
2. Re-run program for the training session and the recorded session
   (a) Run soundtrack
   (b) Calibrate
   (c) Run training
   (d) When done, pause soundtrack + go to next soundtrack
   (e) Run soundtrack
   (f) Calibrate
   (g) Run recorded
   (h) When done, pause soundtrack + copy log + go to next soundtrack
3. Provide participant with evaluation form and mention: "only consider the last game you played for these questions, e.g. the temporal demand of the last game".

E.7 Break Session

Offer the participant some chocolate and ask the participant to relax for a few minutes and let the researcher know when they are ready for the next session.

E.8 Gaze-supported Interaction Session

1. Run program for gaze demo
   (a) Calibrate
   (b) Software: exactly the same as the manual-based interaction session
   (c) Hardware:
      i. Turn on gaze mode
      ii. "This is using the position of your eye gaze to specify where to zoom"
      iii. "Turn the knob and see what happens"
      iv. "You might run into situations where eye gaze is inconvenient or not fast enough to select the target; you can still use the trackball"
      v. "If you move your head, it won't help, although it might feel natural to do so"
   (d) Demo 3 aliens
2. Re-run program for training and recorded
   (a) Calibrate
   (b) Run soundtrack
   (c) Calibrate
   (d) Run training
   (e) When done, pause soundtrack + go to next soundtrack
   (f) Run soundtrack
   (g) Calibrate
   (h) Run recorded
   (i) When done, pause soundtrack + copy log
3. Provide participant with evaluation form and mention: "only consider the last game you played for these questions, e.g. the temporal demand of the last game".

E.9 Discussion Session

1. Double-check forms after the participant fills them out (look for any missing data)
2. "Do you have any general questions / comments?"
3. "For each system, mention 3 advantages and 3 disadvantages" (record voice + type)
4. "Please elaborate on your sources of frustration (from the evaluation form)"
5. "Please keep the contents of the user study confidential until the end of the data collection phase."

Appendix F
Game Participants' Demographics Form

1. Please specify your age range.
   • 18 - 25
   • 26 - 35
   • 36 - 45
   • 46 - 55
   • 56 - 65
   • Above 65
2. You classify yourself as:
   • Right-handed
   • Left-handed
   • Ambidextrous
3. What is your laptop's default touch-pad scroll direction setting? (please double check with the researcher if you are unsure how to respond to this question)
   • Natural scroll (like Mac computers)
   • Reverse scroll (like most Windows-based computers)
4. Do you have any current or past conditions with your vision, including past surgeries? (please double check with the researcher if you are unsure how to respond to this question)
   • Yes
     – Shortsightedness. Level: ......
     – Astigmatism
     – Vision correction surgery. Type: ......
     – Other: ......
   • No
5. Have you used an eye tracker before?
   • Yes. Please provide details, including your frequency of use: ......
   • No
6. Are you a frequent user of 2D/3D image editing and design software that offers panning and zooming of images? (e.g. Photoshop, SolidWorks, etc.)
   • Yes
     – Please mention the name(s) of the software: ......
     – What is your frequency of usage?
       ∗ Daily
       ∗ Weekly
       ∗ Monthly
       ∗ Occasionally
   • No
7. Which of the following best describes your mental tiredness level right now?

Appendix G
Game Qualitative Evaluation Form

The following table is provided to the participant as a reference.

[Mental Demand] Did the game require a lot of thinking?
[Physical Demand] Was the game physically easy or demanding, physically slack or strenuous?
[Temporal Demand] How much time pressure did you feel due to the pace at which the game occurred? Was the pace slow or rapid?
[Overall Performance] How successful were you at winning the game? How satisfied were you with your performance? NOTE: this is your OWN performance, NOT the system's performance.
[Frustration Level] How irritated, stressed, and annoyed versus content, relaxed, and complacent did you feel during the game?
[Effort] How hard did you have to work to win the game?

Note  Two copies of this form are provided per participant to evaluate each session independently (manual-based interaction and gaze-supported interaction).

1. Sources of Load (Weights)
   Circle the member of each pair that contributed more to the workload of the task. (E.g. were you more stressed by X or Y?)
   • Effort / Performance
   • Temporal Demand / Frustration
   • Temporal Demand / Effort
   • Physical Demand / Frustration
   • Performance / Frustration
   • Physical Demand / Temporal Demand
   • Physical Demand / Performance
   • Temporal Demand / Mental Demand
   • Frustration / Effort
   • Performance / Mental Demand
   • Performance / Temporal Demand
   • Mental Demand / Effort
   • Mental Demand / Physical Demand
   • Effort / Physical Demand
   • Frustration / Mental Demand
2. Magnitude of Load (Ratings)
   Rate each of the following based on the task performed:
   • Mental Demand: Low ... High
   • Physical Demand: Low ... High
   • Temporal Demand: Low ... High
   • Performance: Low ... High
   • Effort: Low ... High
   • Frustration: Low ... High
3. What were your main sources of frustration? (select all that apply)
   • Game difficulty due to time limit
   • Game difficulty due to limited lives
   • Aliens don't turn purple easily
   • Moving the trackball
   • Physical interface layout (the control box)
   • Eye gaze tracking accuracy
   • Other: .......
4. Rate the following statements:
   (a) I thought the system was easy to use
       Strongly disagree ... Strongly agree
   (b) I think that I would need the support of a technical person to be able to use this system (if you were to learn this system on your own)
       Strongly disagree ... Strongly agree
   (c) I found the various functions in this system were well integrated
       Strongly disagree ... Strongly agree
   (d) I thought there was too much inconsistency in this system (e.g. things do not perform as you expect sometimes)
       Strongly disagree ... Strongly agree
   (e) I would imagine that most people would learn to use this system very quickly
       Strongly disagree ... Strongly agree
   (f) I found the system very cumbersome (difficult) to use
       Strongly disagree ... Strongly agree
   (g) I felt very confident using the system
       Strongly disagree ... Strongly agree
   (h) I needed to learn a lot of things before I could get going with this system
       Strongly disagree ... Strongly agree
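Items 1 and 2 of the form above follow the NASA-TLX procedure: the pairwise comparisons yield a weight (0 to 5) for each of the six dimensions, and the overall workload is the weighted average of the magnitude ratings. A minimal sketch of that computation is shown below; the 0-100 rating scale, variable names and example responses are assumptions for illustration and are not part of the form itself.

```python
# Weighted NASA-TLX workload from the form above. Ratings are assumed to be on a
# 0-100 scale; the weights come from the 15 pairwise comparisons (each dimension
# can be circled at most 5 times, and the tallies always sum to 15).

DIMENSIONS = ["Mental Demand", "Physical Demand", "Temporal Demand",
              "Performance", "Effort", "Frustration"]

def tlx_workload(circled, ratings):
    """circled: list of the 15 dimensions the participant circled (one per pair).
    ratings: dict mapping each dimension to its 0-100 magnitude rating.
    Returns the overall weighted workload score (0-100)."""
    weights = {d: circled.count(d) for d in DIMENSIONS}
    assert sum(weights.values()) == 15, "one choice per pair is expected"
    return sum(weights[d] * ratings[d] for d in DIMENSIONS) / 15.0

# Hypothetical example responses:
circled = (["Mental Demand"] * 4 + ["Effort"] * 4 + ["Temporal Demand"] * 3
           + ["Frustration"] * 2 + ["Physical Demand"] + ["Performance"])
ratings = {"Mental Demand": 70, "Physical Demand": 30, "Temporal Demand": 55,
           "Performance": 40, "Effort": 65, "Frustration": 45}
print(round(tlx_workload(circled, ratings), 1))  # 57.7
```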
Appendix H
CBD-CHD Ultrasound Scan Steps

Below are the typical steps followed by sonographers to perform general Common Bile Duct (CBD) and Common Hepatic Duct (CHD) exams, based on an observation and discussions with two sonographers.

1. Ask the patient to lie in the supine position.
2. Choose the abdominal preset on the ultrasound machine, which loads the default settings and parameters for abdominal scans (such as dynamic range, frame rate, frequency, gain, etc.).
3. Pick up the curvilinear probe and apply warm gel to it before scanning the patient.
4. Locate the CBD within the abdominal area.
5. Since the depth is set to 13 cm by default (on the particular machine that was being observed), decrease the depth (to about 9 cm) to capture the area closer to the probe in higher definition.
6. Interchangeably, change the gain to obtain a brighter image.
7. Once located, decrease the depth further and zoom into the area including the common hepatic and bile ducts.
8. Move the zoom window until the target is located at the centre of the image.
9. Place the focal zone at the level of the CBD and CHD.
10. Freeze the image.
11. Select the measurement calipers.
12. Place the first caliper at the first edge of the structure to be measured, and the second caliper at the second edge.
13. Measure the CHD, then repeat steps 11 and 12 to measure the CBD (or vice versa).
14. To get better images and measurements, sometimes the patient is asked to change their position to lie on their left side.
15. Steps 4 to 13 are then repeated and new images and measurements of the same structures are obtained.

Appendix I
Clinical User Study Script

I.1 Participant Recruitment Email

To include in the email:

1. Location of the user study (attach a map)
2. A contact phone number (the lead researcher's)
3. Parking information and instructions
4. Instructions to reach the lab
5. "Please plan to arrive about 15 minutes before your scheduled time just in case of any difficulties finding the location"
6. "If you wear prescription glasses and own contact lenses, it is recommended to wear contact lenses for this experiment as it does not take as long to calibrate with the eye gaze tracker"
7. "A reward of a $10 Starbucks card is offered for participation"
I.2 Before the Participant's Arrival

To bring to the experiment area:

1. Two monitors
2. Ultrasound machine
3. Interface controls box
4. Screwdriver (for the controls box, in case its height needs to be adjusted)
5. Printouts of participant forms
6. Printout of user study structure
7. Printout of phantom targets
8. An iPad with target ultrasound images

To place on the table:

1. User study structure
2. Gift card
3. Gift card sheet
4. Consent form
5. Demographics form
6. Researcher's form

Technical setup:

1. Turn on the ultrasound machine and the Ultrasonix software at least 30 minutes before the user study session to allow it to load
2. Connect the ultrasound machine with an Ethernet cable to the computer running the client software
3. On the ultrasound machine, run PyCharm and SERVER_main.py
4. Organize the patient room
5. Prepare towels
6. Put away old used towels
7. Turn off lights above the experiment area (add a sticky note on both sides of the lab that a user study is currently running)
8. Add a sign at the patient room that a user study is in progress, please do not disturb
9. Computer (client) setup:
   (a) Set up the extended display setting for the monitor
   (b) Run us_machine_client_interface.py (every time this script stops, the server script has to be restarted for this client script to run again)
10. Place the printout targets over the ultrasound machine monitor
11. Place the iPad with the target ultrasound image in front of the participant
12. Set the ultrasound machine parameters:
    (a) Attach the correct probe (Curvilinear C5-2)
    (b) Preset: abdomen - general
    (c) Zoom: 100%
    (d) Freq: 2.5M
    (e) Focus (at the end of the image)
    (f) Depth: 14.0 cm
    (g) Gain: 74%
13. Delete (or move) all previous data in the temporary results folder
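The server and client scripts named above are not reproduced in this thesis. Purely as an illustration of the setup described in steps 2, 3 and 9(b), in which the ultrasound machine acts as a server and the experiment computer as a client over the Ethernet link, a minimal sketch of such a pair is given below. The port number, address, message format and one-connection-per-run behaviour (which would explain why the server must be restarted whenever the client script stops) are assumptions, not the actual implementation.

```python
# server_sketch.py -- would run on the ultrasound machine (hypothetical
# stand-in for SERVER_main.py). Accepts a single client connection and
# acknowledges each command it receives.
import socket

HOST, PORT = "", 50007          # port number is an assumption

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)               # one client per run, as in the described setup
    conn, addr = srv.accept()
    with conn:
        print("client connected from", addr)
        while True:
            data = conn.recv(1024)
            if not data:        # client script stopped; restart server to reconnect
                break
            print("received command:", data.decode())
            conn.sendall(b"ack")
```

```python
# client_sketch.py -- would run on the experiment computer (hypothetical
# stand-in for us_machine_client_interface.py).
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("192.168.0.10", 50007))   # ultrasound machine address (assumed)
    cli.sendall(b"zoom 2.0")               # example command, format assumed
    print(cli.recv(1024).decode())
```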
I.3 Introduction Session

1. The researcher introduces herself
2. Provide consent form
3. Provide participation reward
4. Provide demographics survey
5. Start screen capture recording
6. Start timer
7. Introduce the project:
   (a) "My project is the integration of eye gaze tracking with the interaction with ultrasound machines."
   (b) "I only worked on the zoom functions: low-resolution zoom + high-resolution zoom."
8. "Since we are focusing on the interaction, the zoom features are all digital zoom-based functions, so the produced image is never as high quality as the images acquired with the typical high-resolution zoom function in ultrasound machines."
9. Adjust seating
10. Test calibration
11. Start audio recording (if needed)
12. Demo the software:
    (a) The ultrasound image
    (b) The context view
    (c) The image capture area
    (d) The available tools
    (e) The "recording" indicator
13. Demo the hardware:
    (a) "This is designed to match the layout as closely as possible to typical ultrasound machines used in sonography"
    (b) "Try to use zoom on your own using both zoom techniques ..."
    (c) The rest of the knobs work as well, as the participant is familiar with them; the implementation is not comprehensive, as the focus is mainly on the interaction with the zoom function.
14. Demo the gaze-supported features
15. Mention the following gaze-supported interaction instructions:
    • "For the Multi-step Zoom technique, it is best to use the eye gaze feature only when moving the zoom box for long distances across the image."
    • "Eye gaze is very jittery in small areas, therefore it is best to use the trackball for fine motions of the zoom box around the target."
16. List further instructions:
    • "If you need to change any settings of the image (other than the ones available in front of you), you can do that only during the demo session and before starting the training sessions. Please do not reach to change any image settings during the phantom or patient tasks if not absolutely necessary."
    • "Keep in mind that this user study takes an hour to complete, so if you have any particularly long feedback during the experiment, please leave it to the discussion session so we can finish all tasks on time."
17. Phantom exploration:
    (a) "For the phantom, I would like you to scan only the edge area of the 0.5 dB part of the phantom."
    (b) Show phantom targets
    (c) Place each group of targets one by one on the ultrasound machine's screen.

I.4 Phantom Session

The same instructions given in the introduction are followed in the phantom session, only with a different set of targets. The phantom session is further divided into the gaze-supported interaction sub-session and the manual-based interaction sub-session.

I.5 Patient Session

The lead researcher volunteers to be scanned by the sonographer. Instructions given to the sonographer include:

1. "Perform the CBD scan with the default zoom setting and level you normally use in your scans."
2. "For the first 5 image captures, use the gaze-supported interaction alternatives of your choice. For the second 5 image captures, use the manual-based interaction alternatives of your choice."
3. "Between each image capture, please lift your right hand (holding the probe) and place it again to re-locate the CBD."

I.6 Discussion Session

The following questions are discussed with each participant sonographer:

1. What type of ultrasound machine are you familiar with? Did this hardware/software layout and these functions closely resemble the ultrasound machines you typically use in your ultrasound scans?
2. What type of zoom do you typically use in your scans? And when? And how frequently?
3. What kinds of shapes do you normally need to zoom into?
4. Are they regular shapes with defined centres?
5. How would you describe your own perceived eye gaze behaviour when zooming into targets in an ultrasound image? Do you mainly focus on the target or do you keep peripherally scanning the rest of the image?
6. What advantages and disadvantages did you find using the proposed eye gaze-supported zoom system?

Appendix J
Sonographers' Demographics Form

1. Please specify your age range.
   • 18 - 25
   • 26 - 35
   • 36 - 45
   • 46 - 55
   • 56 - 65
   • Above 65
2. Please specify your years of experience as a sonographer.
   • Not applicable
   • Less than 2
   • 2 - 5
   • 6 - 10
   • 11 - 20
   • Above 30
3. You classify yourself as:
   • Right-handed
   • Left-handed
   • Ambidextrous
4. Do you have any current or past conditions with your vision, including past surgeries? (please double check with the researcher if you are unsure how to respond to this question)
   • Yes
     – Please provide details: .....
   • No
5. Have you used an eye tracker before?
   • Yes. Please provide details, including your frequency of use: ......
   • No
6. What types of ultrasound scans do you typically perform? (Please select all that apply)
   • General
   • Cardiac
   • Obstetric/Gynaecologic
   • Vascular
   • MSK
   • Other. Please specify: ......
7. Which of the following best describes your mental tiredness level right now?

Appendix K
Post-processing of the Collected Data from the Clinical User Study

Table K.1: Summary of Clinical User Study Data Post-Processing

Family | Measure | Data Transformation Applied | # Trimmed Outliers | Violated Assumptions After Post-processing
Sum | Time | Log10 | 4 | None
Sum | Number of Fixations | Log10 | 5 | None
Sum | Absolute Path Angles | Log10 | 5 | None
Sum | Fixation Duration | Log10 | 6 | None
Sum | Path Distance | Log10 | 9 | Input method (Manual): Kurtosis = 1.11, K-S Normality = 0; Technique (MZ): Skewness = -1.11, Kurtosis = 3.02
Sum | Relative Path Angles | Log10 | 5 | None
Rate | Absolute Path Angles | None | 3 | None
Rate | Eye Movement Velocity | None | 3 | Technique (Mixed): Kurtosis = -1.82
Rate | Fixations | None | 5 | Technique (Mixed): Kurtosis = -2.31
Rate | Relative Path Angles | None | 2 | Technique (Mixed): Kurtosis = -1.78
Mean | Absolute Path Angles | None | 7 | Technique (MZ): Kurtosis = 1.66
Mean | Fixation Duration | Log10 | 6 | None
Mean | Path Distance | None | 3 | None
Mean | Relative Path Angles | None | 5 | None
Standard Deviation | Absolute Path Angles | None | 3 | Technique (Mixed): Kurtosis = -1.55
Standard Deviation | Fixation Duration | Log10 | 5 | Technique (Mixed): Kurtosis = -2.316
Standard Deviation | Path Distance | None | 3 | Technique (MZ): Kurtosis = 2.0
Standard Deviation | Relative Path Angles | None | 5 | Technique (Mixed): Kurtosis = -1.5
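Table K.1 reports, per measure, the log10 transformation where applied, the number of trimmed outliers, and any normality assumptions still violated afterwards (skewness, kurtosis, Kolmogorov-Smirnov). A minimal sketch of this kind of post-processing for a single measure is shown below; the trimming rule (+/- 3 standard deviations), function names and synthetic data are assumptions for illustration and do not reproduce the exact procedure used in the thesis.

```python
import numpy as np
from scipy import stats

def postprocess(values, log_transform=True, z_cutoff=3.0):
    """Illustrative post-processing for one measure from Table K.1:
    optional log10 transform, outlier trimming, and normality checks.
    The +/- 3 SD trimming rule is an assumption, not the thesis procedure."""
    x = np.asarray(values, dtype=float)
    if log_transform:
        x = np.log10(x)                      # e.g. the "Log10" rows of the table
    z = (x - x.mean()) / x.std(ddof=1)
    trimmed = x[np.abs(z) <= z_cutoff]       # drop extreme observations
    ks_stat, ks_p = stats.kstest(stats.zscore(trimmed, ddof=1), "norm")
    return {
        "n_trimmed": int(len(x) - len(trimmed)),
        "skewness": float(stats.skew(trimmed)),
        "kurtosis": float(stats.kurtosis(trimmed)),   # excess kurtosis
        "ks_p": float(ks_p),
    }

# Example with synthetic, right-skewed "time on task" data (seconds):
rng = np.random.default_rng(0)
print(postprocess(rng.lognormal(mean=3.0, sigma=0.4, size=40)))
```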
