Designing Zooming Interactions for Small Displays with a Proximity Sensor

by

Dilan Ustek

B.A., Grinnell College, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2017

© Dilan Ustek 2017

Abstract

Small, high-resolution touchscreens open new possibilities for wearable and embedded applications, but are a mismatch for interactions requiring appreciable movement on the screen surface. For example, multi-touch or large-scroll zooming actions suffer from occlusion and difficulties in accessing or resolving large zoom ranges or selecting small targets.

Meanwhile, emerging technologies have the potential to combine many capabilities, e.g., touch- and proximity-sensitivity, flexibility, and transparency. A current challenge is to develop interaction techniques that can exploit the capabilities of these new materials to solve interaction challenges presented by trends such as miniaturization and wearability, for example screens so tiny that only a single finger of one hand fits on them.

To this end, zed-zooming exploits the capabilities of emerging near-proximity sensors to address these problems by mapping finger height above a control surface to image size. The EZ-Zoom technique adds the pseudohaptic illusion of an elastic finger-screen connection, exploiting non-linear scaling functions to provide a usage metaphor.

In a two-part user study, we compared EZ-Zoom to the touchscreen-standard pinch-to-zoom on smartphone and smartwatch screens, and found (a) a significant improvement in task time and preference for the smallest screen (equivalent task time for the smartphone); and (b) that the illusion improved users' reported sense of control, provided cues about the interaction's spatial extent and dynamics, and made the interaction more natural. From our experience with the study, we derive requirements for the development of proximity sensors needed to afford such interactions.

Our work goes on to reflect on how zed-zooming can be incorporated into seamless interaction tasks. We aim to identify some characteristics of a zooming interaction that would need to be considered when designing a complete one, and explore how these characteristics play into a complete and usable zooming interaction.

Lay Summary

As displays get smaller, zooming emerges as an important interaction. The mismatch between current zooming techniques and the size of the displays calls for new zooming interactions. Emerging technologies have the potential to sense fingers above and around the display.

Zed-zooming is an interaction technique that uses the space above the display to manipulate the zoom. Users touch the point they want to zoom into, and lift their finger to activate the zoom. The EZ-Zoom technique makes the image scaling slow down when the finger reaches a certain height. This synthesizes the feeling of a rubber band connecting the finger to the display; as the finger gets higher, image scaling slows down.

We found users were more efficient with EZ-Zoom on smartwatches and preferred it over pinch-to-zoom. We also found the rubber band illusion gave a sense of control and made the interaction more natural.

Preface

The experiments described in this thesis were conducted with the approval of the UBC Behavioural Research Ethics Board (certificate number H15-02611).
Prof. Karon MacLean helped frame, write, and edit parts of this manuscript, and provided supervisory assistance for this research.

Haihua Zhang, an undergraduate student in Psychology, contributed mainly during the Winter 1 term of 2016. She contributed to the psychology aspect of the study as well as the brainstorming phase. She also helped with some of the figures in the Evaluation section (Chapter 6) in the Winter 2 term of 2017.

Kevin Chow, an undergraduate student in Computer Science, joined in the Winter 2 term of 2017. He contributed to parts of the implementation, specifically the experiment suite code. He also assisted in carrying out the study. He is currently the main contributor to the ongoing work for the Summer 2017 terms (Chapter 7).

SPIN lab members also contributed with feedback and early edits for parts of this manuscript. Paul Bucci, specifically, contributed to early editing. Labmates were also valuable resources for brainstorming and feedback throughout the project.

Prof. John Madden and Mirza Saquib were in an ongoing collaboration with us throughout the project. They helped us with our understanding of the Gelly sensor and its capabilities. They made early prototypes of the sensor for us, and Saquib taught us how to fabricate them step by step. They both gave us input on the current and future capabilities of the sensor. Their input in brainstorming was essential in assessing the technical feasibility of our ideas.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
1 Introduction
  1.1 Approach
    1.1.1 Pseudohaptic Illusion as a Solution to the Loss of Haptic Sensation
    1.1.2 Prototyping
    1.1.3 Evaluation
  1.2 Contributions
  1.3 Overview
2 Related Work
  2.1 Near-Proximity Sensing Technology
  2.2 Challenges and Approaches in Surface Zooming
  2.3 Non-Surface Solutions
  2.4 Importance of Haptic Feedback
  2.5 Pseudohaptic Illusions
3 Possibilities of the Gelly Sensor as an Interface Device
  3.1 Current Capabilities of Gelly
  3.2 Possibilities of the Gelly Sensor
  3.3 Limitations of the Gelly Technological Approach
  3.4 Limitations of the Current Gelly Prototype
4 Case Study of Prototyping for New Technologies
  4.1 Functionality Verification
  4.2 Prototyping for a Future Technology
    4.2.1 Low-fidelity Prototyping
    4.2.2 High-fidelity Prototyping
  4.3 Rapid Prototyping for the Pseudohaptic Illusion
5 Interaction Design
  5.1 Basic Zed-Zoom Component
  5.2 LZ-Zoom and EZ-Zoom
  5.3 Auditory Feedback
6 Evaluation
  6.1 Research Questions and Objectives
    6.1.1 Research Question 1
    6.1.2 Research Question 2
    6.1.3 Research Question 3
  6.2 Part A: Illusion
    6.2.1 Part A Experimental Procedure
    6.2.2 Part A Results: Analysis of the Pseudohaptic Illusion
    6.2.3 Use of Multisensory Feedback and Pseudohaptic Illusion
  6.3 Part B: Usability and User Experience
    6.3.1 Part B Experimental Procedure
    6.3.2 Part B Results
7 Discussion
  7.1 Pseudohaptic Elasticity Illusion and its Value
  7.2 Zed-Zooming Usability
    7.2.1 Limitations in Prototyping and Study
  7.3 Ongoing Research: Designing the Full Zooming Interaction
    7.3.1 Possible Interactions
    7.3.2 Upcoming Study Description
8 Conclusions and Future Work
  8.1 Findings: Zed-Zoom Promises a Broad Spectrum of Use
  8.2 Applications
  8.3 Next Steps in Developing Zed-Zoom Interactions
    8.3.1 Full-Zoom Function
    8.3.2 Combining Other Capabilities on the Sensor
    8.3.3 Stylus Input
  8.4 The Future of In-Air Sensing with Gelly
Bibliography
Appendices
A Prior Art Analysis Provided to Qualcomm
B Study Proposal
C Participant Checklist
D Study Script and Coding Sheet
E Call For Participation
F Consent Form
G Non-Disclosure Agreement
H Participant Rating Sheet
I Thematic Analysis Coding Sheet
List of Tables

Table 6.1: Study Part A – Phrases used by participants to describe each percept, if that percept was felt. (%) indicates items counted in that category, out of all 31 unique terms supplied. Phrases could apply to multiple categories.

List of Figures

Figure 3.1: The Gelly sensor.
Figure 3.2: Change in capacitance due to a hovering finger at various distances from the top of the sensor. The change in capacitance upon approach of the finger is negative [32].
Figure 4.1: The final heatmap generated by the Matlab program to visualize touch and proximity capacitance data. On the right is a legend showing the distance of the finger from the sensor.
Figure 4.2: Heatmaps with three different color schemes generated by the Matlab program from the same capacitance dataset.
Figure 4.3: 4x4 grid of capacitance values received by the Java program.
Figure 4.4: 4x4 grid of capacitance values received by the Java program.
Figure 4.5: Zoom interaction prototyping using Principle.
Figure 4.6: HoverPeek interaction animation simplified to 5 steps. Touch is indicated with a pink thumb; the grey thumbs indicate hover.
Figure 4.7: FolderPeek interaction animation simplified to 5 steps. The translucent cursor indicates a hover and the white opaque cursor indicates a touch.
Figure 4.8: QuickRead interaction animation simplified to 3 steps. The size of the cursor indicates the height of the finger above the screen. The larger the cursor, the further the finger from the screen.
Figure 4.9: Zooming interaction animation simplified to 5 steps. All the cursors in this step indicate proximity. The size of the cursor indicates the height of the finger above the screen. As the cursor gets larger, the finger gets further from the screen, and the image is scaled to be larger.
Figure 4.10: Angry Chicks game animation simplified to 6 steps. The size of the cursor indicates the height of the finger above the screen. The larger the cursor, the further the finger from the screen.
Figure 4.11: Prototype setup with the Leap Motion Controller on a ring stand and the devices.
Figure 5.1: (a) Zed-Zooming: The user touches the graphical zoom target to select it, then moves the finger up and down above the screen surface to zoom it. (b) EZ-Zoom: A pseudohaptic illusion of an elastic connection between finger and screen supplies an interaction metaphor.
Figure 5.2: (a) A designer-defined scaling function relates zed-axis finger height to zoom level; its choice may impact controllability. The two grey lines represent two possible scaling functions: logarithmic and linear. We use the function represented by the red line, a linearized-logarithmic function, because the knee in the function amplifies the effect of the slow zooming. (b) Image scaling linked to finger lifting can trigger the illusory perception of increasing force, most strongly with the function highlighted in (a). The red scatterplot represents the user's perception because the perception is approximate. It is hard to measure the user's perception because it is a movement artifact.
The plot is our conception of the user's perception and is not measured empirical data.
Figure 6.1: Study Part A – Number of participants (N=12) who self-reported a word in a given illusion category, by condition. The categories are derived from emerging patterns.
Figure 6.2: Study Part A – Average strength of illusion for each category throughout all conditions, per participant.
Figure 6.3: Study Part A – Incidence and strength of pseudohaptic elasticity perception (N=12). Number of participants who perceived elasticity, by condition. Binned into three strength levels based on ratings for the strength with which self-supplied descriptions were felt.
Figure 6.4: Strength ratings for the self-described Elastic illusion, averaged by condition. [0-10]; overall average (4.7). (N=12)
Figure 6.5: Study Part B – Average completion times by screen size (zoom extent pooled), 120 observations/bar. 95% confidence intervals, *sig. at p=0.05.
Figure 6.6: Study Part B – Average completion times by zoom extent (screen size pooled), 120 observations/bar. 95% confidence intervals, *sig. at p=0.05.
Figure 6.7: Number of times participants ranked each zoom condition as the most (green), 2nd (yellow), and least (red) preferred for the smartwatch and smartphone screen-size factors.
Figure 7.1: Approach method vs. Characteristics of Good Zooming Techniques.
Figure 7.2: Approach 1: The Balloon Metaphor State Transition Diagram.
Figure 7.3: Approach 2: Push-Pull State Transition Diagram.
Figure 7.4: Approach 3: Levels State Transition Diagram.

Acknowledgments

I thank my supervisor and advisor Prof. Karon MacLean for her support throughout my Masters and for the late nights working with me. I thank her for her enthusiasm and patience.

I thank Prof. John Madden and his team for their collaboration, and SPIN / MUX members for the valuable feedback they gave throughout the project. I specifically thank Kevin Chow and Haihua Zhang for their dedication to the project. I also thank Dr. Dongwook Yoon for his diligent feedback as my second reader.

I thank my parents and Antoine Ponsard for their unconditional support and an endless supply of ice cream, without which none of this would have been possible. I thank my friends and my cheerleader Paulina Panek for being only a message away.

Finally, I thank Qualcomm for funding this research.

Dedication

To my parents, who gave me the thirst to learn and the encouragement to question everything.

Chapter 1: Introduction

Information devices with smaller screens, such as those on smartwatches and fitness trackers, are making their way into users' everyday lives with ever-widening possibilities for application.
These devices require frequent zooming to view small content, and therefore increase the need for usable zooming interactions. However, their small interaction surfaces are a mismatch with many current interaction techniques – most notably the standard pinch-to-zoom, which enables users to zoom and pan to the center of two contact points [16].

Pinch-to-zoom on displays narrower than a few fingers' width is complicated by difficulty in precise selecting and pinching, visual occlusion of display content by the fingers [4, 15, 33], and the frequent clutching (repeating a single zoom gesture with an intervening lift to reset it) that is required to zoom further than one zoom action allows [4]. The intrusiveness of clutching is exacerbated when users must do it frequently, e.g., when switching often between content close-ups and overviews [4]. Meanwhile, multitouch gestures on hand-sized displays, such as smartphones, are tricky in one-handed use [15].

We offer a new approach that involves zoom control in the space above the control surface. To zed-zoom, near-proximity technology senses user movements in the zed-dimension above the screen's surface. The user initiates zooming by touching the center of a region of interest, then controls zoom amount by moving the finger up and down above and orthogonal to the display surface (Figure 5.1). This method has several advantages over pinch-to-zoom: it is easier to select small zoom targets, finger occlusion is avoided, and large zooms are achievable with a single continuous gesture. Single-finger zooming also facilitates one-handed interaction.

It is important to consider that the loss of surface contact in mid-air interactions can compromise efficiency, accuracy, and in some cases, intuitiveness. Haptic sensation allows users to avoid relying only on proprioceptive feedback by providing tactile guidance. With haptic feedback, users can perform actions more efficiently and accurately [27] than without any tactile guidance. For example, in the case of pinch-to-zoom, the user receives feedback as their fingers "pinch" relative to one another along a surface.

1.1 Approach

1.1.1 Pseudohaptic Illusion as a Solution to the Loss of Haptic Sensation

To rectify the lack of haptic sensation, we investigated the helpfulness of a pseudohaptic illusion of a physical connection between finger and screen. We reasoned that it could restore this proprioceptive zoom-extent cue, indicate information such as an outer spatial limit to the proxemic sensing range, and provide a metaphor to underlie the direct manipulation concept.

We examined various relationships between finger height and graphical image scale (scaling functions – Figure 5.2a) for their ability to trigger an illusion of an elastic connection, as an artifact of coordinated physical-graphic movement. The most successful of these is a piecewise-linearized logarithmic function (Figure 5.2a).

In the implementation here, the image grows larger as the finger "pulls" it upwards. Beyond a sensed threshold, the connection might "pop" loose, the image resetting to its initial size; alternatively, it could "lock" at maximal zoom.
The illusory "force" occurs as an artifact of coordinated finger-image movement: the perception appears to be of the change in force, and thus does not manifest at standstill.

1.1.2 Prototyping

We grounded our design ideas for zed-zooming using sensors from current research in the materials engineering (electroactive polymers) lab at UBC [32]. These materials raise the near-term possibility of simultaneously sensing touch localization, proximity, shear, and pressure with a transparent and flexible surface (Chapter 3). Specifically, the "Gelly" sensor is currently at an early research stage, and only offers touch localization and near-proximity together in one sensor. These capabilities together might offer a space in which to design better interaction solutions to small-screen zooming. While not yet at a state allowing direct use in interactive prototypes, their existence gives us guidance into what kind of sensing (parameters, resolution, accuracy, and bandwidth) may be most feasible to deploy in a 1-5 year time frame, and hence lets us make smarter choices about proactive interaction development.

We therefore simulated the capabilities of such sensors using a proxy technology: the Leap Motion Controller. We utilized the hand-tracking device as a way to input finger distance away from a smartwatch and smartphone. This way, users were able to manipulate the scale of the content on the screen seamlessly, as though a proximity sensor was embedded in the devices.

This thesis describes an exploration into the design space that such technology will afford, making use of creative prototyping techniques to simulate its capabilities in specific contexts.

1.1.3 Evaluation

To understand our interaction technique's viability, and its effectiveness in addressing the loss of haptic sensation in mid-air zooming, we investigated the illusion triggering and the usability of the interaction technique. We ran a two-part study. The first part aimed to investigate any pseudohaptic illusions that participants may feel, how the strength of the illusion is affected by audio and visual cues, and whether the illusion helped users get cues on the extent of the interaction space. The second part of the study measured the utility of zed-zooming techniques compared to pinch-to-zoom, and users' preferences.

1.2 Contributions

From our technique design and evaluation, we contribute:
• A near-proximity interaction that facilitates zooming on small touch displays, and with phone-sized screens performs comparably to pinch-to-zoom.
• Insight into how auditory feedback and image content determine the extent to which users perceive a pseudohaptic illusion of elasticity with EZ-Zoom.
• Design recommendations based on a usability study comparing EZ-Zoom variants to the current standard touch-display technique, pinch-to-zoom.
• Recommendations for the developmental direction of the Gelly sensor.

1.3 Overview

Previous work relevant to this research is summarized in Chapter 2. Chapter 3 gives detail on the capabilities and limitations of one instance of the emerging class of low-cost, embeddable proximity sensors, to ground our explorations. Chapter 4 outlines the process we followed to prototype for a future technology. Chapter 5 explains the design of the interaction. Chapter 6 presents a two-part study and its results. Chapter 7 discusses the results of the study and ongoing research building upon them.
Finally, Chapter 8 summarizes our findings and outlines directions for future work.

Chapter 2: Related Work

2.1 Near-Proximity Sensing Technology

Some consumer devices already support proxemic interactions at finger-range-scale proximity, such as the Samsung Galaxy S4 (1.5cm range; [31]) and Fogale Sensation (3.5cm; [10]). Other sensors in development, such as the "Gelly" sensor, offer touch and proximity capabilities (e.g., with electroactive polymers) and are transparent, stretchable, and bendable [32], with a current proximity range of 2cm. More details on this sensor can be found in Chapter 3.

2.2 Challenges and Approaches in Surface Zooming

Surface pinch gestures date to Wellner's Digital Desk, produced at Xerox in 1991 [36]; multitouch zooming functionality evolved over the next 15 years. Pinch-to-zoom as we know it was commercially deployed in 2007 with the Apple iPhone and is possibly the most successful and ubiquitous interaction gesture for touchscreen devices, the principal zoom method for most commercial phones and tablets.

However, pinch-to-zoom has problems on new screens that approach the scale of a few finger widths, a point at which multiple touches do not fit or have room to slide. Beyond the obvious issue of content occlusion, a leading complication comes when significant rescaling is needed: each movement must be small, leading to clutching (multiple pinches). Transitions present another challenge, particularly when frequent; e.g., when users are trying to identify something on an interface they may zoom in and out multiple times, switching between details and high-level context [4, 17, 19, 28]. This workflow requires an easier way to switch between zoom and pan (translation) modes, and a smoother transition between the detail and overview scales.

Additionally, pinch-to-zoom may be difficult for one-handed use. Zeleznik et al. [37] claim the importance of one-handed gestures with the "sandwich problem", in which people feel that one-handed interactions are more natural because they may use their other hand for something else, such as holding a sandwich.

Pinch-to-Zoom-Plus avoids the need to clutch by translating the rate of pinching and spreading to zoom extent, following an automatic pan to zoom-center. Small but quick movements can accomplish larger zooms, and thereby reduce the need for clutching [4]. However, this method still requires two fingers on-screen, with occlusion on watch-scale screens.

Other techniques expand the interaction area outside a small display's surface, permitting larger gestures and avoiding occlusion. SkinTrack senses a finger with watch-embedded infrared sensors; users zoom with sliding movements on their arm, a technique that can work when the device is situated on the skin [38]. SideSight lets users pinch-to-zoom on the surface around a smartphone using infrared sensors embedded along each side of the device [7], and is more suitable for a device resting on a larger surface than for handhelds.

These techniques offer interesting alternatives. They share the loss of a direct engagement with the screen; as with a mouse, there is a level of indirection between the user's hand and the screen [11], with a potential for loss of usability or control. Further, sliding on elastic, compliant surfaces such as the skin may be less controllable or predictable than on glass.

EZ-Zoom takes a different approach. The in-air finger minimizes occlusion, and accesses a control range above the surface that is independent of screen size (although dependent on sensor range).
It can further reduce clutching with a scaling function optimized to the zoom tasks typically performed on the device. Zed-zooming offers multiple implementation paths to transitions, although these are not addressed in this thesis.

2.3 Non-Surface Solutions

Various studies pertaining to non-surface gesture solutions on small displays exist. Harrison et al. (2009) claim that conventional input mechanisms such as buttons and touch screens cannot be scaled to smaller displays [14]. They explain that this limits the benefits users get out of their devices, and aim to solve the problem with non-surface gestures. To do this, they utilize a magnetometer on the back of the device and mount a small magnet on a finger. The approach, however, requires that users manage an additional small object on their finger, and therefore results in a bulky prototype.

Kratz et al. (2009) equipped mobile devices with distance-sensing capabilities [18]. They used infrared sensors to sense coarse movement-based hand gestures and static hand positions. Sweeping hand movements and hand rotations were used to indicate scrolling and selection on a mobile device. They concluded that Around-Device Interaction has the potential to solve occlusion problems on small-screen devices. We therefore build upon this concept, but aim for the more fine-grained gestures that new proximity sensors can provide.

Zooming interactions with non-surface solutions have also been a topic of various studies. By embedding a depth sensor on a wearable device, Sridhar et al. (2017) developed a prototype to sense finger input on and around the skin [34]. The device could sense mid-air and multitouch finger input on the back of the hand while the wearable was worn. The prototype was used for various applications: a music controller, virtual reality/augmented reality input, a map on a watch, image exploration on a large display, and controlling a game. The researchers were able to demonstrate the capabilities of a device that receives input away from the device's screen. However, the setup with a depth camera does not allow for a feasible prototype in the near future. This demonstrates the need for a close-range proximity sensor for interacting with small devices.

The Apple Watch's Digital Crown is a side knob that controls zoom level [3]. This type of solution avoids pinch-to-zoom's issues on small displays, but the need to move from screen to zoom-control for different interactions can impede fluidity.

Marquardt et al. (2011) highlight the rich interaction space "above the surface" [23]. We use this space to mitigate occlusion; however, above-surface interaction may have weakened proprioceptive cues. Nancel et al. (2011) attributed the slowness of mid-air circular movements versus linear movements to the lack of guidance [27]. Air+Touch (2014) describes two in-air zooming techniques that would similarly suffer from lack of guidance based on [27]'s findings: (a) lifting the thumb high above the control surface toggles zoom/pan modes before a tap, followed by scrolling to pan or zoom in/out, like a virtual slider; (b) pan by touch, and zoom with in-air cycling [8].

Transture (2015) was motivated by insufficient small-screen space for pinch-to-zoom gestures [12]. To trigger zooming, the user circles in-air and continues circling to zoom; movement outside the initial circle registers as panning. The authors found that "participants wanted to disable panning function in the zooming zone".
This might imply that zoom and pan were difficult to handle simultaneously; in-air gestures with limited proprioceptive feedback could be a factor. Zed-zooming is modal, and will ultimately rely on a smooth zoom/pan transition.

Harrison et al. (2008) magnified content by how close the user leaned towards the screen, using a camera, and found that this direct manipulation was natural and intuitive [13]. We also take a direct manipulation approach, using local finger movements.

Past studies on non-surface solutions make it evident that in-air techniques can address the real-estate problem that comes with small-screened devices. However, a usable and feasible zooming interaction that reduces occlusion is yet to be designed.

2.4 Importance of Haptic Feedback

Mine et al. (1997) highlighted the importance of haptic feedback for users. Humans rely on haptic feedback and physical constraints to execute precise interactions and to prevent fatigue [24].

Nancel et al. (2011) reiterated the importance of haptic feedback for users [27]. They compared freehand and device-based techniques in the context of mid-air zooming and panning interactions on large wall-sized displays: a gesture with a 1D-path movement on a physical device, a gesture with a 2D-path movement on a physical device, and a freehand gesture without any physical device guidance. They concluded that freehand techniques, which do not provide haptic feedback for users, exacerbated fatigue and decreased accuracy. Freehand gestures were less efficient than input gestures that had added guidance.

Without the guidance of haptic feedback, non-surface interactions can thus cause fatigue and inefficiency.

2.5 Pseudohaptic Illusions

To address the loss of direct haptic sensation inherent in in-air interaction, we introduced illusory haptic feedback.

Pseudohaptic illusions simulate haptic input by integrating multimodal feedback [29]. The result may differ from that of real haptic sensory input – e.g., fainter, and/or apparent only during motion – yet may still fundamentally alter the sense of an interaction [22]. Moreover, adding physically realistic behaviours to graphics can improve a control's usability and precision [1]. When it also provides a metaphor, it can make the interaction more intuitive [1]. As an example, humans' relatively poor position acuity can lead to greater reliance on visual input when visual and proprioceptive cues conflict. Exploiting this produced the sense of "bumps and holes" on a screen by accelerating or decelerating a mouse cursor [20].

Mandryk et al. aimed to reduce a cursor's inadvertent crossing between screens in multi-monitor displays when a user is trying to access a widget on the boundary [22]. If a user is moving their mouse quickly towards the target widget, the cursor will slow down while over the widget, making the interaction feel sticky and preventing an unwanted leap to the next screen.

Lee et al. present an interaction where a circular cursor can "squeeze" as though it is made of rubber when it hits display borders and is pushed further by the user [21]. Physical simulation improves precision and realism, and contributes to a more engaging and learnable experience.

These examples were generated in the context of physical contact with a touchscreen or mouse.
In this thesis, we sought to trigger a pseudohaptic elasticity illusion without contact, by manipulating the amount of zoom per distance the finger travels in the zed-axis, with supporting auditory feedback.

Chapter 3: Possibilities of the Gelly Sensor as an Interface Device

We utilize the "Gelly" sensor (Figure 3.1) as an archetype of an emerging class of sensing while designing interactions. This was useful in grounding our explorations and making them technically feasible. We situated our constraints in a product we would expect to see emerge out of efforts like those in John Madden's lab in the next 5 years.

Gelly is a mutual-capacitance-based touch/proximity sensor with hydrogel as flexible electrodes, and PDMS (polydimethylsiloxane) as the substrate and dielectric to enable stretchability of the device. It is currently being developed by Prof. John Madden and his team in the Electrical Engineering Department of UBC [32].

The fabrication of Gelly requires the use of hydrophilic hydrogel material for the electrodes of the sensor. Layers with grooves are made by creating a metal mould, pouring in the PDMS polymer and co-polymer with a catalyst, then leaving it to cure at 80°C for an hour. This grooved layer is then bonded to a uniform dielectric layer to form channels. A mixture of salt, water, acrylamide monomer, initiator, and accelerator is made and injected into the channels formed. A sigma-delta ADC type CDC (capacitance-to-digital converter) is used to convert the capacitance to a digital output using a 32 kHz signal.

In this chapter, we state the current capabilities of Gelly and discuss its limitations and the trade-offs to consider while designing products with Gelly.

3.1 Current Capabilities of Gelly

Gelly can accurately detect the position of a finger up to 5 mm above the surface of the sensor, and can detect the presence of a finger above the surface up to 20 mm. It currently polls for proximity and touch roughly every 700 ms; this rate can be increased in the near future with the use of a better CDC. Its touch-sensing resolution is 5 mm.

Figure 3.1: The Gelly sensor.
Figure 3.2: Change in capacitance due to a hovering finger at various distances from the top of the sensor. The change in capacitance upon approach of the finger is negative [32].

Although some functionalities are not yet implemented, Gelly has the theoretical potential to add simultaneous pressure, shear, and stretch sensing in the near future. These features may need to be able to exist in one sensor at the same time, and there are currently certain trade-offs to the availability of these features.

Gelly's current features are the following:
• Flexibility: it can be bent without breaking.
• Stretchability: it can be stretched up to 300% of its original size.
• Transparency: rather than wires, the electrode channels are filled with hydrogel, which is transparent. This results in a transmittance of 90%, approximately the same as clear glass.
• Low cost: the material costs roughly $1 per square metre.
• Scalability: the sensor can be made up to hundreds of square metres and is still expected to work.
• Thinness: it can theoretically be made approximately 100 microns thick, as thick as a sheet of paper.
• Multitouch: it can sense gestures requiring multiple fingers as well as a single touch with localization.
  Such a gesture could require any number of fingers: as long as the fingers are 5 mm apart, the sensor will be able to sense the different touch points.
• Proximity sensing: it can sense a finger up to 20 mm away.

3.2 Possibilities of the Gelly Sensor

When Gelly's proximity range is increased, the sensor can be used to increase the interaction space of small displays; the space can be extended above, around, or behind devices. Bend-sensing capability will add possibilities for wearables in active wear and various sports gear. Gelly is particularly suited for flexible devices and therefore has a lot of potential to provide value for wearable devices. Its main limitation will likely be the trade-off between its proximity range and horizontal resolution.

3.3 Limitations of the Gelly Technological Approach

There are certain compromises that may be inherent to Gelly's technological approach when extending the current capabilities and adding new ones.

Gelly proximity sensing requires users to interact with it using skin or conductive materials. As Gelly is a capacitance-based sensor, users must use their finger directly on or above the sensor or make use of conductive materials. This is a limitation of most capacitance-based sensors.

Increasing the vertical range of proximity sensing results in a decrease in horizontal resolution. To increase vertical range, the electrodes in the sensor need to be larger. This results in fewer electrodes on the sensor array and therefore a decrease in spatial resolution. All sensors with hydrogel electrodes have this limitation. This trade-off between vertical range and horizontal resolution is critical when designing products with proximity capabilities using Gelly.

Finally, the addition of shear-sensing capabilities would make the sensor less transparent. This may be solved with more engineering in the future, but it is a current limitation of the technological approach.

3.4 Limitations of the Current Gelly Prototype

The Gelly prototypes available at the time of this research suffered from a number of limitations which made them non-representative of the technology's true potential. These are limitations of the current prototype, and they are planned to be improved in the near future.

The surface of the sensor has very high friction. This is a disadvantage of the current sensor because it will be used to overlay the device's screen, and the friction will make touch feel unpleasant to the user. If a layer were added to the outer surface, it might affect the flexibility and stretchability of the material. Different techniques still need to be explored to find a solution to this problem.

The sensor's polling frequency, currently about 1.4 Hz, is not high enough for most human activities. It would need to be increased to a minimum of 50 Hz to be humanly usable. Although Gelly's developers have noted that this is possible to fix, we have not yet tested and validated an improved prototype.

Lastly, the encapsulation of the hydrogel inside the sensor is not yet perfected. Gelly currently uses parylene, a biocompatible coating, for the hermetic seal. However, parylene can crack when it is thick, and if it is too thin, it will not be an effective encapsulant. This means that the sensor rapidly loses accuracy. The developers are looking to replace parylene with UV-curable sealants that are flexible and stretchable, in order to position Gelly to be usable in the future.

We therefore used a proxy technology to simulate the future capabilities of Gelly.
The Leap Motion Controller offered a cheap and accurate solution, as explained in Chapter 4. In several years, Gelly and many other sensors will be appropriate for wearable device integration. Meanwhile, interaction technique functionality can be prototyped with existing technology.

Chapter 4: Case Study of Prototyping for New Technologies

As Gelly is at an early development stage, we used rapid prototyping techniques to simulate it using proxy technologies. This chapter thus presents our case study of prototyping for new technologies that are not yet available for user studies.

The chapter details the process we took to prototype the interaction techniques we designed to take advantage of capabilities like proximity and pressure sensing. We first verified the present functionality of the technology by collaborating with its development team (Section 4.1). This involved making the sensor and testing its capabilities with various programs.

We then used low-fidelity sketching to rapidly prototype our interaction ideas (Section 4.2.1). This is another level of simulation of the capabilities of a technology that cannot be used for studies. The prototypes are not interactive; they are used to animate ideas. The purpose of rapid prototyping was to quickly get an idea of the value of the concepts we were generating without doing a lot of engineering and development.

Once the interactions were more robust, we used a proxy technology to generate a high-fidelity prototype (Section 4.2.2). We used similar rapid prototyping techniques for the design of the pseudohaptic illusion (Section 4.3).

4.1 Functionality Verification

To get started with Gelly, we learned about the capabilities of the current sensor from Prof. John Madden and his team. To verify the basic functionality of Gelly, we used a Processing program that outputted capacitance values. We then wrote two programs to help us understand the current usability state of Gelly.

First, we wrote a Matlab program that logged capacitance values for a given duration of time. It then visualized these capacitance values using a heatmap on a 4-by-4 grid, as seen in Figure 4.1. The program was useful for visualizing changes in capacitance over a saved lapse of time: given a file of capacitance values, it produced an animation of the heat map. The heat map encoded touch points with a dark blue hue and encoded proximity with saturation; the more saturated the red hue, the closer the finger was to the sensor.

Figure 4.1: The final heatmap generated by the Matlab program to visualize touch and proximity capacitance data. On the right is a legend showing the distance of the finger from the sensor.

As shown in Figure 4.2, we tried several colour schemes to find the visualization that would most visibly encode proximity and touch. We then finalized the heatmap generated by the Matlab program as shown in Figure 4.1.

Figure 4.2: Heatmaps with three different color schemes generated by the Matlab program from the same capacitance dataset.

The second program was written in Java. It was initially written by Dr. Madden's team; we changed the given program to provide more information about the delta change in capacitance, and to encode the proximity of a finger from the sensor with saturation. We also added controls to reset baseline values to account for sudden jumps in capacitance values. This program provided a real-time visualization of capacitance values.
It mapped the 4x4 grid in the Gelly prototype directly onto a visualization that had two modes: a line plot, as seen in Figure 4.3, and a real-time heat map, as shown in Figure 4.4.

The line plot was useful for obtaining raw capacitance data. As a finger approached a cell in the Gelly grid, the capacitance of that cell dropped until it hit its local minimum (a drop of approximately 15%) on touch. The visualization provided a 4x4 grid of line plots representing each cell on the sensor.

The heatmap was useful for getting a 3D understanding of touch and finger proximity. As a hand approaches a cell in the 4x4 grid, the white cell on the visualization that represents it turns a progressively more saturated red; when it becomes a bright red colour, this indicates touch. Saturation is thus used to encode how close a finger is to the sensor. The 4x4 grid is annotated in real time with raw capacitance values and the delta change in capacitance from a moving average; a negative value indicates a capacitance drop, which signifies the approach of a hand. The raw data was specifically useful for identifying problems such as sudden jumps in capacitance.

Figure 4.3: 4x4 grid of capacitance values received by the Java program (line-plot mode).
Figure 4.4: 4x4 grid of capacitance values received by the Java program (heatmap mode).

When we built sample prototypes of Gelly, the Java program gave us a sufficient understanding of how Gelly worked and of its current capabilities. It became clear that, with the current state of the sensor, we would need to use proxy technologies to simulate the future potential of the sensor.
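To make the colour coding concrete, the sketch below shows how a single cell's readings could be turned into a heatmap colour using the delta from a moving-average baseline and the roughly 15% capacitance drop observed on touch. The study tools were written in Matlab and Java; this JavaScript fragment is only an illustration, and the window size, threshold handling, and HSL colour values are assumptions rather than the actual study code.

```javascript
// Illustrative sketch, not the study's Matlab/Java code: colour one cell of the
// 4x4 heatmap from its raw capacitance readings, using a moving-average baseline.
const WINDOW = 20;        // number of recent samples kept for the baseline (assumed)
const TOUCH_DROP = 0.15;  // ~15% capacitance drop observed on touch (Section 4.1)

const history = [];       // recent raw readings for this cell

function resetBaseline() { history.length = 0; }  // "reset baseline" control for sudden jumps

function cellColour(raw) {
  history.push(raw);
  if (history.length > WINDOW) history.shift();
  const baseline = history.reduce((sum, v) => sum + v, 0) / history.length;
  const delta = (raw - baseline) / baseline;       // negative when a finger approaches

  if (delta <= -TOUCH_DROP) return 'hsl(240, 80%, 30%)';  // touch: dark blue
  const closeness = Math.min(1, Math.max(0, -delta / TOUCH_DROP));
  // Hover: red whose saturation grows as the finger gets closer; near-white when idle.
  return `hsl(0, ${Math.round(closeness * 100)}%, ${100 - Math.round(closeness * 50)}%)`;
}
```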
4.2 Prototyping for a Future Technology

It is important to prototype for developing technologies throughout the development process to determine what the technology will be able to afford. This way, not only do we get ideas of what problems they can solve, but we are also able to direct the development of the technology. We thus searched for ways to simulate this technology that would help us take advantage of its potential capabilities and design for this kind of future technology.

To do this, we first found out about its potential capabilities. We worked with the domain experts, Prof. John Madden and Saquib Mirza, and inquired about the current state of Gelly and its possible future capabilities. We found out about Gelly's current proximity range, which was about 0.5-2 cm. We also understood that the higher the sensing range in the z-dimension, the less resolution Gelly has in the x- and y-dimensions. We decided to focus on the proximity and touch capabilities of Gelly, and used these properties to choose our prototyping methods.

4.2.1 Low-fidelity Prototyping

Low-fidelity prototyping is an early-stage prototyping method used to produce quick alternative approaches to a design [30]. As standard prototyping methods for proximity-based interactions are yet to be developed, we followed a low-fidelity prototyping methodology and began our interaction design with paper and whiteboard sketches. With these tools, we were able to generate as many sketches as possible without spending excess time on any one of them.

The next step was to bring these interactions to life. We made animated concept videos to achieve this. The designs were made using SketchApp and then, by utilizing the power of Principle, they were animated into medium-fidelity prototypes, as seen in Figure 4.5. These tools were not created for proximity prototyping: they use a translucent circle to represent a cursor, and the cursor becomes more opaque to represent a touch interaction like tapping or swiping. We encoded the height of the finger above the surface of the device as the size of the cursor.

Figure 4.5: Zoom interaction prototyping using Principle.

Lastly, we performed informal evaluations for each iteration. These concepts encompassed several near-proximity interactive functions.

The first interaction was a navigation method where users could hover on a folder to see the apps that the folder contains (Figure 4.6). They could then click on the folder to access the app of their choice with one click, without the need to navigate in and out of folders in search of an app. Another application of this interaction is showing the latest notifications of an app on hover; the user can then click on a specific notification to directly access it (Figure 4.7).

Figure 4.6: HoverPeek interaction animation simplified to 5 steps. Touch is indicated with a pink thumb; the grey thumbs indicate hover.
Figure 4.7: FolderPeek interaction animation simplified to 5 steps. The translucent cursor indicates a hover and the white opaque cursor indicates a touch.

The second interaction (Figure 4.8) was a reading app for very small displays. In this interaction, the user can make text glide right or left depending on the side they hover over, with the speed determined by the distance of the finger from the screen. Once the user is happy with the speed, they can move their finger horizontally and remove it; the text will continue gliding in the specified direction and speed.

Figure 4.8: QuickRead interaction animation simplified to 3 steps. The size of the cursor indicates the height of the finger above the screen. The larger the cursor, the further the finger from the screen.

The final interaction was a method of zooming on small displays using proximity, in order to minimize occlusion of content and allow for quick context-to-detail switching (Figure 4.9). This is the interaction on which this thesis focuses.

Figure 4.9: Zooming interaction animation simplified to 5 steps. All the cursors in this step indicate proximity. The size of the cursor indicates the height of the finger above the screen. As the cursor gets larger, the finger gets further from the screen, and the image is scaled to be larger.

The animations for these interactions were also used in the process of preparing the Provisional Patent Application (application number: 62/481104).

Zooming emerged as a worthwhile focus, as there was a need for a better zooming interaction on small displays and the interaction space was not enough to perform the current zooming techniques efficiently. We therefore went on to high-fidelity prototyping for the zooming interaction.

In order to simulate the idea of the pseudohaptic illusion, we also created a game animation, Angry Chicks (Figure 4.10). In the game, there is an elastic slingshot that can be "pulled upwards" using proximity to throw a chick towards blocks, similarly to the game "Angry Birds".

Figure 4.10: Angry Chicks game animation simplified to 6 steps. The size of the cursor indicates the height of the finger above the screen. The larger the cursor, the further the finger from the screen.

4.2.2 High-fidelity Prototyping

User studies and performance evaluation demand an interactive prototype with robust and accurate mid-air sensing capability. Therefore we went beyond low fidelity by building a "proxy technology" prototype.
This prototype is high-fidelity, interactive, and significantly more realistic than the previous prototypes we had produced.

High-fidelity prototyping was especially important for running user studies in order to iterate on and improve our interaction. To accomplish this, we used a Leap Motion Controller to simulate proximity sensing with its hand-tracking capabilities. The Leap Motion Controller is a cheap technology with no hardware overhead and a high level of accuracy, sensing movements as small as 0.1 mm. We used the Leap with the Orion hand-tracking API (v3.2.0) [25]. The hardware is based on a pair of infrared cameras and three infrared LEDs (60 frames/second sampling). The LEDs illuminate the scene (λ = 850 nm), which is tracked by the cameras to form an inverted-cone-shaped interaction space (150° wide and 120° deep); maximum tracking range is 800 millimetres [9]. Leap's Javascript API has built-in web sockets, for front-end prototyping with web technologies.

We also used a smartphone and an Asus smartwatch to simulate the proximity-sensing small device. The touch capability of the display, in conjunction with the Leap Motion Controller's hand tracking, was powerful enough to simulate built-in proximity- and touch-sensitive displays.

We tried various methods of using the Leap Motion Controller to best simulate the proximity-sensing technology. The real proximity-sensing technology would be integrated in a phone or smartwatch and would sense fingers above it. This means that we needed to place the Leap in such a way that it would most accurately sense the hand above the phone. One way to do this is to place the Leap next to the phone on the table, looking upwards. However, this does not work well because the hand using the phone would be left out of the triangular sensing field above the Leap.

We then used the Leap in virtual reality (VR) mode. The Leap is optimized for VR when it is placed looking "down" at the hands, as though the device were mounted in front of a user's eyes. To simulate this environment, we placed the Leap on a ring stand, then found the optimal angle and hand placement for the setup. Figure 4.11 shows the optimal setup for the Leap Motion Controller to simulate a device with built-in proximity-sensing capability.

Figure 4.11: Prototype setup with the Leap Motion Controller on a ring stand and the devices.

Using this high-fidelity prototype, we evaluated zed-zoom techniques as described in Chapter 6. Both study components were performed on the Samsung Galaxy S7 smartphone with a 14cm x 7cm display. Part B of the study also included the Asus ZenWatch 2 smartwatch with a 4.1cm x 4.9cm display. Android devices gave access to a smartwatch browser [2].

The Leap was mounted on a ring stand to track users' hands and fingers (Figure 4.11), and tracked the in-air interaction space directly above the device screen. Finger height above the display was sent to sockets using Leap.JS, to control image scale. To minimize latency, we avoided CSS transitions.

Standard pinch-to-zoom functionality was written using HammerJS on the smartwatch to allow for in-study measurement of zoom levels. Code latency was about 2 ms.
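A minimal sketch of this Leap-to-screen link is shown below. It is not the study code: the element id, the calibration constant for the screen plane, and the placeholder linear scaling function are assumptions (Chapter 5 describes the scaling functions actually used), and the sign and offset of the height calculation depend on how the Leap is mounted.

```javascript
// Illustrative sketch of the Leap.js -> zoom link (assumed ids and calibration,
// not the study code). Requires the leap.js library; Leap.loop streams tracking
// frames from the controller's built-in web socket.
const SCREEN_Y_MM = 220;   // hypothetical Leap y-coordinate of the screen surface, from calibration
const RANGE_MM = 160;      // the prototype's 16 cm zed-zoom range

let zedZoomActive = true;  // in the prototype this is toggled by a tap on the zoom target
const image = document.getElementById('zoom-target');

// Placeholder linear mapping from finger height to zoom factor; see Chapter 5 for the real functions.
function scaleForHeight(heightMm) {
  return 1 + 3 * (heightMm / RANGE_MM);
}

Leap.loop({ background: true }, function (frame) {
  if (!zedZoomActive || frame.fingers.length === 0) return;
  const tipY = frame.fingers[0].stabilizedTipPosition[1];  // mm, in Leap coordinates
  // Sign/offset depend on the ring-stand mounting; treat this as placeholder calibration.
  const height = Math.min(RANGE_MM, Math.max(0, SCREEN_Y_MM - tipY));
  // Direct style update; CSS transitions are avoided to keep latency low.
  image.style.transform = 'scale(' + scaleForHeight(height) + ')';
});
```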
4.3 Rapid Prototyping for the Pseudohaptic Illusion

For the pseudohaptic illusory effect of the interaction, it was also important to use rapid prototyping techniques to help understand both what we were 'feeling' and how to exploit the intended haptic percepts. We sketched possible pseudohaptic illusions using stretchy materials such as balloons and springs, following the simple rapid haptic prototyping methods developed by Moussette (2011) [26].

We employed a rapid prototyping approach to try different interactions and simulate how the illusions would feel with minimal engineering effort. We only implemented as much as was necessary, but were able to receive informal feedback from our labmates and iterate on our designs. It was a low-cost method to improve our illusion and interaction ideas without having to implement all of them.

Chapter 5: Interaction Design

Smaller screens afford less space on the device's display for zooming interactions. By using proximity sensors, we can add the space above the display to increase the interaction space. In this chapter, we describe our considerations and process for zed-zoom design, and three zed-zoom variants we have implemented. The first differs from the others by its scaling function (linear vs. linearized-log). We used various scaling functions to try out possible pseudohaptic illusions, and eventually termed the two variants of zed-zoom Linear and Elastic Zed-Zoom (LZ-Zoom and EZ-Zoom) respectively (we refer to the log-scaled zoom method as "elastic" for consistency). The third technique is EZ-Zoom with auditory feedback.

While near-proximity sensing technology is under development, early prototypes are not yet sufficiently stable for interaction design development and testing. Our development approach therefore relied on simulating anticipated experiences first with sketching methods, then with alternative, less mobile but otherwise appropriate existing technology. This gave our work the additional role of generating technical application specifications for further near-proximity sensor development.

This chapter first explains the design of zed-zoom, LZ-Zoom, and EZ-Zoom. We then discuss the design choices for the auditory feedback used to aid the pseudohaptic illusion. Finally, we present the details of the proxy technology used to simulate the experience of the future technology.

5.1 Basic Zed-Zoom Component

We chose a 16 cm range of motion for our prototype following confirmation that the height and resolution sensitivity specifications we derived in this range are technically feasible with mobile-friendly near-proximity technology. We tried various heights and decided on the 16 cm range to allow users to reach the limit with one hand motion. Another concern taken into consideration when deciding on the maximum range was that users should have sufficient space in the zed-axis to move their finger, as movement is important in interactions that are enhanced with motion artifacts. A large reach for some hands, 16 cm allowed us to observe how users grounded their gestures (e.g., wrist or elbow braced near the surface); however, this ultimately was not a focus of our study.

Figure 5.1: (a) Zed-Zooming: The user touches the graphical zoom target to select it, then moves the finger up and down above the screen surface to zoom it. (b) EZ-Zoom: A pseudohaptic illusion of an elastic connection between finger and screen supplies an interaction metaphor.

Figure 5.2: (a) A designer-defined scaling function relates zed-axis finger height to zoom level; its choice may impact controllability. The two grey lines represent two possible scaling functions: logarithmic and linear. We use the function represented by the red line, a linearized-logarithmic function, because the knee in the function amplifies the effect of the slow zooming. (b) Image scaling linked to finger lifting can trigger the illusory perception of increasing force, most strongly with the function highlighted in (a). The red scatterplot represents the user's perception because the perception is approximate. It is hard to measure the user's perception because it is a movement artifact. The plot is our conception of the user's perception and is not measured empirical data.

In our implementation, zed-zoom mode is triggered with a tap on the intended zoom target, and can be turned off at any time with a second touch anywhere on the screen.
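The activation logic can be sketched roughly as follows; this is a simplified illustration of the behaviour described above with an assumed element id, not the study code.

```javascript
// Sketch of the zed-zoom mode toggle: a tap on the zoom target arms the mode and
// records the zoom centre; any later touch on the screen cancels it.
let zedZoomActive = false;
let zoomCenter = null;

document.addEventListener('touchstart', function (event) {
  const touch = event.touches[0];
  if (!zedZoomActive && event.target.id === 'zoom-target') {
    zedZoomActive = true;                                  // first tap: start tracking finger height
    zoomCenter = { x: touch.clientX, y: touch.clientY };
  } else if (zedZoomActive) {
    zedZoomActive = false;                                 // second touch anywhere: turn zooming off
    zoomCenter = null;
  }
});
```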
The red scatterplot represents the user's perception only approximately: the perception is hard to measure because it is a movement artifact. The plot is our conception of the user's perception and is not measured empirical data.

In our implementation, zed-zoom mode is triggered with a tap on the intended zoom target, and can be turned off at any time with a second touch anywhere on the screen.

5.2 LZ-Zoom and EZ-Zoom

We began simply, with finger height proportional to zoom scale (Linear Zed-Zoom, or LZ-Zoom). Content scales linearly with finger height up to the 16cm threshold, when content resets to its original position. In pilots, this did not tend to produce an elasticity illusion (Figure 5.2, grey lines).

We searched the scaling function space for relationships that would trigger an illusion, such as accelerating and decelerating scale speed with finger height. We found that a decelerating scaling rate (e.g., a logarithmic scale increase with finger height) makes users feel that the image is harder to pull as the finger is lifted higher (and vice versa as the finger descends), and they describe this with terms that suggest elasticity or springiness. We believe this is because the finger must travel further per unit of scale change, thus evoking a sense of effort that increases with finger height (Figure 5.2b). Furthermore, we realized that the feeling of the image being harder to "pull up" could give users a subtle warning about the extent of the interaction space.

This elastic effect was felt most strongly when the log shape was accentuated with a sharper "knee", achieved with piecewise linearization (Figure 5.2, red line). We termed zed-zoom with this linearized-log scaling function EZ-Zoom.

Specifically, as the finger rises above the surface, screen content is magnified until the finger reaches the lower threshold setting (for the evaluation reported here, 11cm above the display). As the finger continues to rise, content scales more slowly (0.3 times the previous speed), making users perceive the image as harder to pull up. At 16cm, the image resets to its original size, representing the "snap" of an elastic connection.

5.3 Auditory Feedback

The auditory feedback we used had two parts: a continuous, proportional stretching sound during finger movement, and a pop sound at breakthrough. We chose the auditory feedback to be mimetic of a real elastic object, to reinforce the elasticity metaphor. We used an audio track that sampled a balloon stretching almost to the point of bursting. The track started with rubber stretching sounds and got louder as it continued.

For the first 11cm of the interaction, we used the first part of the audio clip (lower frequency and volume); above 11cm, a louder, more intense clip to represent the balloon stretching to its limit. Clips were 2s long, played continuously during finger movement, and stopped when the user stopped.

For the pop, at the breakthrough point we played a single instance of the sampled sound of a snapping elastic rubber band, to reinforce the perception that the image snaps back to its original size along with the graphics.
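To make Sections 5.2 and 5.3 concrete, the sketch below shows one way the linearized-log mapping and the two-stage audio could be implemented. The knee (11cm) and reset (16cm) heights follow the text above; the base scaling rate and the playStretch/playPop/stopStretch helpers are illustrative assumptions, not the study's exact values or audio code.

```javascript
// EZ-Zoom: piecewise-linear ("linearized-log") mapping from finger height to zoom scale.
// Heights in centimetres; KNEE and RESET follow the thesis (11 cm and 16 cm).
const KNEE = 11;        // cm: scaling slows beyond this height
const RESET = 16;       // cm: elastic "snap" — content returns to its original size
const BASE_RATE = 0.1;  // scale units gained per cm below the knee (assumed)

function ezZoomScale(heightCm) {
  if (heightCm >= RESET) return 1;                        // snap back to original size
  if (heightCm <= KNEE) return 1 + BASE_RATE * heightCm;  // linear region
  // Above the knee, the same finger travel buys only 0.3x the scale change.
  return 1 + BASE_RATE * KNEE + 0.3 * BASE_RATE * (heightCm - KNEE);
}

// Two-stage stretching audio plus a pop at the breakthrough point (Section 5.3).
function updateAudio(heightCm, fingerMoving) {
  if (heightCm >= RESET) { playPop(); stopStretch(); return; }  // assumed helpers
  if (!fingerMoving) { stopStretch(); return; }
  playStretch(heightCm <= KNEE ? 'soft' : 'loud');              // quieter clip below the knee
}
```

The knee keeps the two segments continuous while making the slow-down above 11cm easy to feel, which is what drives the elastic percept.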
Chapter 6
Evaluation

We conducted a two-part within-subjects study in a single session to identify any pseudohaptic illusions, observe what affects their strength, and evaluate the utility of zed-zoom. In Part A (RQ1 and RQ2), we compare self-reported strength of pseudohaptic effects (as elicited by EZ-Zoom) on a phone screen based on (a) the presence or absence of auditory feedback as described above, and (b) variations in image content. We also qualitatively inspect how the pseudohaptic effect contributed to the experience and control of zooming. We assumed the illusion strength would not vary between phone- and watch-sized screens (rather, that the relative utility of zed-zoom would), and thus did not investigate screen size in this part. In Part B (RQ3) we compare user performance and reactions to our designs on a smartphone and a smartwatch.

We recruited participants by email and printed flyers. 12 participants (five female) received $15 for a 1-hour session. All were right-handed with 2+ years of smartphone experience, spanned 11 countries, and had backgrounds including architecture, engineering, and creative writing. All 12 participants finished both parts of the study described in this chapter.

This chapter presents the study details, covering the participants and experimental procedure for both Part A and Part B. After the method for each part of the study is explained, we present its results. The apparatus was set up as described in Chapter 4.

6.1 Research Questions and Objectives

We conducted an evaluation of zed-zooming techniques. Our inquiries are based on the following research questions regarding multimodal influence on the pseudohaptic illusion of elasticity, the utility of the illusion, and the usability of zed-zooming techniques compared to pinch-to-zoom.

6.1.1 Research Question 1

Is there a pseudohaptic illusion that a majority of the users feel? If so, how is user perception of the strength of the pseudohaptic illusion of elasticity impacted by auditory feedback and image content?

Auditory event feedback (e.g., "popping" through a boundary) can modify haptic perception of actual compliance [35], so we wondered if it could have a similar effect on an illusory percept, thus facilitating the elastic illusion. We also noticed that different graphic content – e.g., round versus square shapes, or photos versus icons – sometimes seemed to "feel" different during zooming, suggesting that some image types could be more facilitative than others. We tested the image of a round soccer ball and a rectangular photo of a group of people.

6.1.2 Research Question 2

Proximity sensors have a limited range of detection above the screen of a device. Can a pseudohaptic illusion help inform users of this limit?

With the flattening scaling function that best triggers the pseudohaptic illusion (Figure 5.2a), image scaling slows for vertical movement at the top of the interaction space. We posited that this could signal the limit's approach, for greater control.

6.1.3 Research Question 3

Does zed-zoom, with or without a pseudohaptic illusion, assist in image rescaling on different screen sizes compared to pinch-to-zoom? What do users prefer?

With pinch-to-zoom, the state-of-the-art zooming technique on small displays, users often must clutch to reach an intended zoom level. Since the range of a zed-zoom gesture can be substantially larger than a watch screen or even most smartphones' width, we expect that the need for clutching will be reduced or eliminated, improving task speed and fluidity.

Other in-air zooming techniques have been proposed before.
While they reduce occlusion and clutching, they suffer from the problems that arise with the loss of haptic sensations in mid-air interaction techniques.

To investigate our research questions, we conducted a two-part within-subjects study in a single session where all participants performed both parts. The first part of the study addressed the first and second research questions on the pseudohaptic illusion. The second part addressed the third research question on the usability and user experience of zed-zooming.

6.2 Part A: Illusion

We first investigated the strength of the pseudohaptic illusion, and the impact of audio cues (multisensory reinforcement) and the zoomed image's shape and content. We took a mixed-methodology approach in which a qualitative interview study is embedded into a quantitative methodology, for insight into the hypotheses that, with EZ-Zoom:

H1: At least some illusion will be perceived with at least moderate strength for a majority of participants, on average through all conditions (audio and graphic manipulations).

H2: Illusions felt will be stronger for the 3-dimensionally (3D) suggestive ball image than for the 2D photo.

H3: Illusions will be strengthened by auditory feedback.

6.2.1 Part A Experimental Procedure

We conducted a 2×2 {audio, no audio} × {ball, photo} within-subject qualitative evaluation with presentation order randomized, employing EZ-Zoom alone – i.e., the scaling function was linearized-log for all 4 conditions. The Ball and Photo images are shown in Figure 6.1; audio was as described above.

For each condition, participants were first asked to zoom in and out for 40 seconds, for zed-zooming familiarization. We then conducted a short semi-structured interview after each condition, inviting participants to suggest and describe any real-world metaphors that seemed to fit that zoom experience for them, while continuing to access the prototype. Participants rated the intensity of feeling for each of their supplied metaphors on a scale of 0 (no effect) to 10 (very strong/believable effect). Last, they demonstrated and explained how they determined the spatial extent of the interaction space. Point height was sampled and logged.

6.2.2 Part A Results: Analysis of the Pseudohaptic Illusion

Altogether, results tend to support:

• H1: positive. 11/12 (92%) participants felt at least some illusion with at least moderate strength, averaged over all conditions (Figure 6.2). One illusion dominated: Elastic manifested for 10/12 (83%) participants, felt by those participants at a strength of 7/10 on average (moderate; Figure 6.4).

• H2: very marginal. The image content's impact appeared to be minor, although possibly nonzero. We varied (in confound) shape, dimensionality and content type; more work is needed to establish if patterns do exist.

Elastic (74%): rubber, rubber band, hair tie, hair band, stretchy string, balloon, yoyo, chewing gum, spring, stretchy, bouncy, slimy, elastic, harder to move at the top then drop, stiffness, tension, tightness, spring, force, gravity
Connected (30%): yo-yo, rubber bandy string, stretchy string, connected, string, not separated, connected with bar/pole
Sticky (11%): chewing gum, glue, sticky

Table 6.1: Study Part A – Phrases used by participants to describe each percept, if that percept was felt. (%) indicates items counted in that category, out of all 31 unique terms supplied. Phrases could apply to multiple categories.

• H3: positive. Audio presence seemed to influence the illusion.
Participant descriptions attribute this more to the "pop" sound effect than to the "stretching" effect.

To evaluate the presence, intensity, and benefits of the pseudohaptic illusion, we qualitatively analyzed participants' self-supplied rich physical descriptions, and rankings of their strength by condition {audio, no audio} × {ball, photo}.

Categories:

We found three thematic categories: Elastic, Connected and Sticky, shown along with representative participant-supplied terms in Table 6.1. We developed these by considering physical properties, metaphors, semantically related words, and sentence context. We started by identifying key phrases in participant quotations through affinity diagramming [5], then organized these into categories by thematic analysis [6]. A single phrase could appear in multiple categories: e.g., "stretchy string" appears under both Elastic and Connected. Categories were cross-checked by lab members.

Condition Ratings:

Next, we assigned participants' ratings for their self-supplied terms to these categories to produce an aggregate set of ratings for each condition and each category. For example, if a participant mentioned "stretchy string" and rated the intensity of their experience of a "stretchy string" as 8/10, the 8/10 rating would be aggregated in both the Connected and Elastic categories for that condition (Figure 6.1).

Figure 6.1: Study Part A – Number of participants (N=12) who self-reported a word in a given illusion category, by condition. The categories are derived from emerging patterns.

Figure 6.2: Study Part A – Average strength of illusion for each category throughout all conditions, per participant.

When participants did not mention a term evoking a given category (e.g., Elastic) at all, we set their rating for that category to zero, inferring that that form of the illusion did not occur for them. These individuals might have used other terms, implying that they were capable of feeling some illusion; or, they might have reported no illusion at all.

As seen in Figure 6.1, Elastic was the dominant percept in all conditions. Specifically, Elastic was (a) felt by the majority of participants (7.8/12, averaged over all conditions); (b) the most prevalent illusion in every condition (i.e., perceived by more participants than the others); and (c) relatively insensitive to the multisensory conditions (auditory feedback, graphic geometry and content) that participants were exposed to. Based on this, we narrowed our analysis to the Elastic illusion.

Figure 6.3 counts participants whose ratings fit into each of three bins: high (ratings of 7-10, indicating they felt a strong to completely believable elasticity illusion); moderate (3-6, a moderately believable illusion); low to no effect (0-2, no illusion or a very slight illusion). Figure 6.4 shows condition-wise averages for all ratings assigned to the Elastic category.
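As an illustration of the aggregation described under Condition Ratings above, a minimal sketch follows; the response format, the category list, and the choice to average multiple terms within a category are illustrative assumptions rather than our actual analysis scripts.

```javascript
// Aggregate self-supplied metaphor ratings into illusion categories, per condition,
// zero-filling categories a participant never evoked (illustrative sketch only).
const CATEGORIES = ['Elastic', 'Connected', 'Sticky'];

// responses: [{ participant, condition, categories: ['Elastic', ...], rating }, ...]
function aggregateRatings(responses, participants, conditions) {
  const table = {}; // table[condition][category] -> one rating per participant
  for (const cond of conditions) {
    table[cond] = {};
    for (const cat of CATEGORIES) {
      table[cond][cat] = participants.map((p) => {
        const hits = responses.filter(
          (r) => r.participant === p && r.condition === cond && r.categories.includes(cat)
        );
        if (hits.length === 0) return 0; // category never evoked: inferred "no illusion"
        // One possible aggregation when several terms apply: average their ratings.
        return hits.reduce((sum, r) => sum + r.rating, 0) / hits.length;
      });
    }
  }
  return table;
}
```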
6.2.3 Use of Multisensory Feedback and Pseudohaptic Illusion

Boundary Estimation:

When asked to estimate the reset boundary with EZ-Zoom, all 12 participants could correctly specify and describe where the image scaled more slowly – "At some point, I don't want to go away any further." (P12).

Some statements specifically indicate reliance on graphical and auditory feedback elements to navigate the interaction space: "I don't want it to burst so I'm moving more slowly." (P4); "The image gets bigger, and you can tell it might be close to exploding. Sound is getting louder." (P3); "When the sound gets louder there's more tension in the rubber band because it's about to fall." (P12).

Some of these clearly had crystallized into a physical percept. When asked how they estimated the spatial extent of the interaction space, 9 participants referred to the illusion: "[Spring] gets tighter. It moves less as you move further away" (P10), "As you get further up it's more effortful" (P8). Two other participants said they knew where the boundaries were out of "intuition" (P1, P9); they could not say exactly why.

Auditory Feedback:

Most participants found the constant auditory feedback annoying, especially after continuous use. Even so, 9 participants demonstrated that they found value in the audio; e.g., "helpful for getting info about the change in status about whether string is attached" (P9).

The annoyance may have been tied to their perception of the pseudohaptic illusion, making them feel more work was needed to pull the image up: "The stretching sound was not too bad but made me feel like I had to put in more effort" (P9).

Audio as Training Wheels:

Two participants suggested persistent value: "[with audio] the illusion is stronger. But you may not want to listen to balloon popping for a long time. One cool thing is that once I listen to it once I felt the illusion stronger. Make a tutorial with the sound and then get rid of it and it would not be as annoying" (P8; similar words from P10).

Figure 6.3: Study Part A – Incidence and strength of pseudohaptic elasticity perception (N=12). Number of participants who perceived elasticity, by condition, binned into three strength levels based on ratings for the strength with which self-supplied descriptions were felt.

Figure 6.4: Strength ratings for self-described Elastic illusion, averaged by condition [0-10]; overall average 4.7. (N=12)

6.3 Part B: Usability and User Experience

To investigate the utility of zed-zoom, a 4×2×2 factorial within-subject design had factors of:

zoom condition: PTZ, LZ-Zoom, EZ-Zoom, EZ+audio
screensize: smartwatch, smartphone
zoom extent (to hit target): long, short

We hypothesized that participants would:

H4: Compared to pinch-to-zoom, perform zed-zoom conditions (a) faster (smartwatch); (b) on par (smartphone).

H5: Perform (a) zed-zoom conditions with no difference in time for long and short zooms; (b) pinch-to-zoom long zooms more slowly than short zooms.

H6: Find more control in EZ-Zoom than LZ-Zoom; find auditory feedback annoying after long exposure; and thus prefer EZ-Zoom on (a) smartwatch; (b) smartphone.

H7: Compared to pinch-to-zoom, find zed-zoom conditions (a) more useful (smartwatch); (b) as useful (smartphone).

6.3.1 Part B Experimental Procedure

Design:

Trials were blocked on zoom condition and screensize, with block order randomized. Within each zoom condition × screensize block, participants performed five randomized repetitions of each zoom extent, for a total of 4×2×2×5 = 80 trials per participant. Participants began with two familiarization trials, using a surface and a zed-zoom technique respectively (pinch-to-zoom and EZ-Zoom).

Task:

For each trial, we asked participants to zoom into a solid-colored target square (centred on the screen) until it fit a translucent red frame. The trial began when the participant touched the screen (for pinch-to-zoom, when the user started the pinch gesture). When the target was in the frame, the frame turned yellow to indicate that they should remain at that zoom level. When the target was kept at that zoom level for 0.5s, the frame turned green, signaling trial completion.
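For concreteness, a minimal sketch of this trial logic follows; the setFrameColour helper, the fit test, and the event names are illustrative assumptions rather than the study's exact implementation.

```javascript
// Study Part B trial logic: yellow while the target fits the frame, green (complete)
// once that zoom level is held for 0.5 s; completion time is measured from first touch.
const HOLD_MS = 500;

function makeTrial(onComplete) {
  let startTime = null;
  let holdTimer = null;

  return {
    onTouchStart() {
      // Timing starts at first screen contact (or the start of the pinch gesture).
      if (startTime === null) startTime = performance.now();
    },
    onZoomChange(targetFitsFrame) {
      if (targetFitsFrame && holdTimer === null) {
        setFrameColour('yellow'); // assumed helper: hold at this zoom level
        holdTimer = setTimeout(() => {
          setFrameColour('green');
          onComplete(performance.now() - startTime); // task completion time, ms
        }, HOLD_MS);
      } else if (!targetFitsFrame && holdTimer !== null) {
        clearTimeout(holdTimer); // left the frame before 0.5 s elapsed
        holdTimer = null;
        setFrameColour('red');
      }
    },
  };
}
```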
Metrics and Qualitative Data:

We measured task completion time for each trial. After completing all trials for a particular device, participants were asked to (a) verbally compare and rank the zed-zoom conditions versus pinch-to-zoom for that device for usefulness; (b) rank the three zed-zoom techniques by preference; and (c) give opinions on suitable use cases for this zooming approach in a short unstructured interview.

6.3.2 Part B Results

Task Completion Time

We ran a 4×2×2 (zoom condition × zoom extent × screensize) repeated measures ANOVA on task completion time (DV, in seconds). Because sphericity was violated for the interactions of screensize × zoom condition, zoom condition × zoom extent, and screensize × zoom condition × zoom extent, we report p-values with Greenhouse-Geisser correction. We report statistically significant results (p<0.05).

H4a,b: accepted. Figure 6.5 illustrates a significant interaction between screensize and zoom condition (F3,33=35.25, p<.001). Pinch-to-zoom was significantly slower than the zed-zoom techniques on the smaller screen (by 56%).

H5a,b: accepted. Figure 6.6 illustrates a significant interaction between zoom extent and zoom condition (F3,33=4.33, p<.05). When participants needed to scale a large amount, it took significantly more time to apply pinch-to-zoom than when they needed to scale a small amount. Interestingly, for zed-zoom techniques, zooming a long versus short distance was similarly fast on average across the two screensizes.

User Preferences

We asked participants to rank the three zed-zoom conditions in order of preference for both of the devices (Figure 6.7).

H6b: accepted. For the smartphone, EZ-Zoom was ranked as most preferred by 10/12 participants, LZ-Zoom by 4, and EZ-Zoom+audio by 1. EZ-Zoom was ranked last by no participants, LZ-Zoom by 3, and EZ-Zoom+audio by 6 (Figure 6.7).

Figure 6.5: Study Part B – Average completion times by screen size (zoom extent pooled), 120 observations/bar. 95% confidence intervals, *sig. at p=0.05.

Figure 6.6: Study Part B – Average completion times by zoom extent (screen size pooled), 120 observations/bar. 95% confidence intervals, *sig. at p=0.05.

Figure 6.7: Number of times participants ranked each zoom condition as the most (green), 2nd (yellow), and least (red) preferred, for the smartwatch and smartphone screensize factors.

LZ-Zoom users found it difficult to control the zoom at the top of the interaction space: "[LZ-Zoom] seemed too fast" (P1). Participants found EZ-Zoom with audio unpleasant.

For the smartwatch, participants preferred LZ-Zoom and EZ-Zoom (Figure 6.7). For their top ranking, 6 participants chose EZ-Zoom, 8 LZ-Zoom, and 1 EZ-Zoom+audio. EZ-Zoom was ranked lowest by no participants, LZ-Zoom by 2, and EZ-Zoom+audio by 8. Therefore we cannot say EZ-Zoom was preferred over LZ-Zoom on the smartwatch, although taking least-liked values into consideration, it is close.

We asked participants to (a) rank pinch-to-zoom versus zed-zoom generally, and (b) discuss their relative potential utility in an unstructured post-interview.

H7a (smartwatch): accepted. 11/12 participants ranked zed-zooming as more useful than pinch-to-zoom on a smartwatch. "[Zed-zoom] doesn't obscure the image and you can do it in one fluid motion rather than multiple small ones" (P8).
"Please put [proximity] on the smartwatch; even though it would be a small augmentation on a smartphone, on the smartwatch it makes the difference of buying and not buying one" (P9).

H7b (smartphone): accepted. We anticipated that zed-zoom would at least be seen as equivalent to the familiar pinch-to-zoom. In fact, the perceived utility of the zooming techniques was split for the smartphone, with 6 participants choosing pinch-to-zoom, 2 choosing zed-zoom, and 4 calling them equal – i.e., half found zed-zoom at least at par. Reasons for pinch-to-zoom preference included being more familiar with pinch-to-zoom, and finding it more accurate and stable. However, 9 participants added that there were certain contexts where they would prefer proximity, e.g., due to occlusion or where large zooms were required. "I'd rather do pinch for specifically zooming to a point...[proximity would be] better when you're swiping through pictures and you want to see something fast." (P2). It is reasonable to expect that with practice, zed-zoom would become a viable equivalent to the current standard.

Chapter 7
Discussion

We inspect the production and use of a pseudohaptic illusion for zooming, then discuss the benefits of zed-zoom relative to pinch-to-zoom and compare its variants. We then present the ongoing research, following from the study, on designing the full zooming interaction.

7.1 Pseudohaptic Elasticity Illusion and its Value

11/12 participants felt a pseudohaptic illusion with at least moderate strength. Out of the self-reported metaphors, the illusions that emerged were elasticity, stickiness, and connectedness. The dominant illusion among these was elasticity.

Among zed-zoom conditions, participants preferred EZ-Zoom on the smartphone, in interviews citing the control it afforded, but rated the zed-zoom methods equivalently on the smaller smartwatch screen. Their scaling functions may have been hard to distinguish on the miniature graphic display. One participant preferred LZ-Zoom because it was "less effort to pull up" and therefore felt faster (P9). This comment references the pseudohaptic illusion, but raises the possibility of illusory workload.

Participants reported finding more control with EZ-Zoom than with LZ-Zoom at the interaction space boundary. However, LZ-Zoom and EZ-Zoom yielded comparable completion times. The subjective impression of control was improved, based on participant comments and ratings, but further study is required to verify the practical utility of the higher precision, and to 'weigh' its value relative to illusory effort.

Audio was found to be intrusive but useful, and may work in small doses. Despite its low popularity, the audio feedback we used gave users a tacit sense of the interaction space, and an enhanced sense of control at its boundary. The feedback can be refined to be more subtle and/or infrequent.

If the contributions of auditory feedback to the illusion persist after it is disabled, intermittent audio could support the illusion while managing annoyance. Persistence may be possible because the audio triggers a metaphoric cognitive framework for the interaction. In a real-world social setting, audio may also be intrusive. Audio could be built into a tutorial and then removed, and still have sufficient effect on the user's mental model.

7.2 Zed-Zooming Usability

Our study demonstrated clear utility for zed-zooming, in both performance and user preference.
Unsurprisingly, both factors dramatically favored zed-zooming for the smartwatch display, where pinch-to-zoom has obvious difficulties.

Other options exist for small-screen zooming (for example, the Apple Watch's 'Digital Crown' knob). However, such secondary-control solutions are arguably less fluid than zed-zoom, and inconsistent with zoom conventions on larger displays.

Tablet users often employ a stylus for certain kinds of input and applications. Styli do not support multitouch pinch-to-zoom, highlighting another zoom-technique inconsistency. Zed-zooming should work with a stylus, with potential for consistency across a wide range of screen sizes and use modes.

As expected, participants required significantly more time to zoom larger distances with pinch-to-zoom for both screen sizes. Even on the smartphone display, users had to perform multiple pinches to cover large distances, compared to lifting their finger higher in a single continuous zed-zoom gesture. With zed-zoom, large zooms are achievable with a single gesture.

In contrast to pinch-to-zoom, users can zed-zoom with one finger. For applications and contexts where single-touch zooming is important, such as single-handed thumb, one-finger, or stylus interactions, zed-zooming is a solution. This solves the "sandwich problem" mentioned in Chapter 2, where users may want to hold another object with their free hand.

7.2.1 Limitations in Prototyping and Study

We simulated a technology that is too early in development to be tested with users, using a Leap Motion Controller. While generally very effective, the Leap had occasional glitches, causing the image to jump. Zed-zoom technique usability ratings likely suffered slightly as a result.

The zed-zoom variants we evaluated are not yet equipped with 'pan' and 'image freeze' functionality, both of which are necessary for effective zooming. Some participants cited this omission as a reason for preferring pinch-to-zoom. Users also lacked the ability to lock the zooming scale at a certain point: with the current state of zed-zoom, users need to keep their finger at a certain height to keep the zoom stable. The ability to transition between panning and zooming, and to lock the zoom scale, needs to be incorporated into the full zooming interaction.

Zed-zoom will likely do better with a developed and robust proximity sensor. Both prototyping limits are conservative with respect to zed-zoom: more reliable sensing and fleshed-out function will likely improve its position relative to other techniques.

When examining the form of illusion that participants found when EZ-Zooming under different conditions, we found participants often repeated the first metaphor they constructed in later conditions. Follow-up is needed to clarify whether this was a true individual tendency or a carryover bias.

7.3 Ongoing Research: Designing the Full Zooming Interaction

As the zooming technique used in the study was not a full interaction, the next natural step was to identify some characteristics of a zooming interaction that would need to be considered when designing one, and then to explore how they play into a complete and usable zooming interaction.
This is an ongoing research project.

We found that most pan-and-zoom systems need the following three characteristics: (1) a method to transition from zooming to panning; (2) reversibility of zooming (users should be able to adjust their zoom levels when needed); and (3) a method to zoom into and out of large scale levels (whether or not users need to clutch to reach large scale levels).

7.3.1 Possible Interactions

We designed three interaction techniques to test the aforementioned characteristics. We considered different methods to transition between panning and zooming, such as coupling in-air panning and zooming, or decoupling the two actions so that panning happens on screen. We also considered having clear delimiters for transitioning, such as locking the zoom level with a gesture vs. transitioning to panning without a gesture, as well as various ways to do multi-scale zooming and adjustment of zoom scales. The three techniques demonstrate these characteristics in different combinations. Figure 7.1 compares the interactions according to the three characteristics. The following subsections explain the three interaction techniques.

Figure 7.1: Approach method vs. Characteristics of Good Zooming Techniques

Figure 7.2: Approach 1: The Balloon Metaphor State Transition Diagram

Figure 7.3: Approach 2: Push-Pull State Transition Diagram

Figure 7.4: Approach 3: Levels State Transition Diagram

Approach 1: The Balloon Metaphor

This approach starts with a double tap on the center of zooming, and zooms in by lifting the finger up in the zed dimension. As the finger is lifted from the screen, the maximum position reached by the finger controls the scale of the content; lowering the finger does not decrease the scale. The user starts a zoom-out action by double pressing on the screen and keeping contact for as long as they want to zoom out. The zoom-out action increases in speed as the user keeps contact with the surface.

Zooming to a large scale can be achieved by double tapping and lifting the finger, then repeating this gesture multiple times. This makes it easy to quickly clutch and reach a high scaling value. However, the zooming action is never reversible in the other direction. Panning is done on the screen, since lowering the finger during zooming does not change the zoom scale.

This approach requires clutching to zoom to large scaling levels and is not a reversible method of zooming. However, it has a direct and easy transition from zooming to panning. Figure 7.2 shows the state transition diagram of approach 1.

Approach 2: Push-Pull

This approach starts with a double tap on the center of zooming, and zooms in by lifting the finger up in the zed dimension. As the finger is lifted from the screen, the position of the finger is directly proportional to the scale of the content.

The user starts a zoom-out action by double pressing on the screen and keeping contact for as long as they want to zoom out. The zoom-out action increases in speed as the user keeps contact with the surface. During in-air zooming, the user can also zoom out by lowering the finger, and can perform a long press on the screen to continue zooming out. In this interaction, the zooming action is reversible in the other direction.

To lock the scale, the user performs a horizontal swipe-right gesture at any time, including in air. Zooming to a large scale can be achieved by double tapping and lifting the finger, then locking the scale, and repeating this gesture multiple times. Panning is done on screen after locking the zoom scale.

This approach requires clutching to zoom to large scaling levels and requires a locking mechanism to transition from zooming to panning. However, it is a fully reversible zooming interaction. Figure 7.3 shows the state transition diagram of approach 2.
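For concreteness, the sketch below gives a minimal state-machine reading of Approach 2 (Push-Pull); the event names, scale rate, and zoom-out acceleration are illustrative assumptions, not a final implementation.

```javascript
// A minimal state-machine sketch of Approach 2 (Push-Pull).
const pushPull = {
  state: 'idle', // 'idle' | 'zooming' | 'locked'
  scale: 1,

  doubleTap() {
    // Double tap on the zoom center (re)enters in-air zooming.
    this.state = 'zooming';
  },
  fingerHeight(cm) {
    // Height maps directly, and therefore reversibly, to scale while zooming.
    if (this.state === 'zooming') this.scale = 1 + 0.1 * cm; // assumed rate
  },
  swipeRight() {
    // Horizontal swipe (on screen or in air) locks the scale; panning then happens on screen.
    if (this.state === 'zooming') this.state = 'locked';
  },
  pressAndHold(heldSeconds) {
    // Double press and hold zooms out; the rate grows the longer contact is kept.
    const rate = 0.05 * (1 + heldSeconds); // assumed acceleration
    this.scale = Math.max(1, this.scale - rate);
  },
};
```

Reaching a large scale is then a matter of repeating double-tap, lift, and swipe-to-lock: clutching remains, but the zoom is adjustable in both directions at every step.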
Approach 3: Levels

This approach starts with a double tap on the center of zooming, and zooms in by lifting the finger up in the zed dimension. However, this time the zed dimension controls the rate of zooming in: up to a certain height the rate is kept constant; after that height, the rate increases to a higher constant speed.

The user starts a zoom-out action by double pressing on the screen and keeping contact for as long as they want to zoom out. The zoom-out action increases in speed as the user keeps contact with the surface.

This approach is non-reversible and does not require any clutching. It provides a direct and easy transition from zooming to panning. Figure 7.4 shows the state transition diagram of approach 3.

7.3.2 Upcoming Study Description

There are many ways to support the three characteristics (transitioning from zoom to pan, reversibility, zooming to large scale levels) for a one-finger, in-air zooming interaction for ultra-small displays. We expect that the ways to support these characteristics will vary depending on the use case.

Throughout the study described in Chapter 6, we discussed with participants what applications they would find zooming useful for on the smartwatch. We thus identified three potential zooming use contexts that participants found useful on an ultra-small display device such as a smartwatch: a map application, a remote-camera application where the user can look at a photo they took remotely, and an email-reading application.

For these use cases, we are interested in how the aforementioned three characteristics play into a complete and usable zooming interaction. What are the trade-offs that designers need to make while designing such a usable interaction? Which variations within the characteristics are most and least suited to a particular use context? Are there certain characteristics that are more valued for a particular use case?

We therefore plan to run a 3x3 within-subjects study with 5 participants, where each participant will be asked to try the three approaches to zooming. For each approach, the characteristics above are present in different ways.

Participants will be asked to try each use context with each of the three interaction technique approaches described above, in a randomized order blocked by use context. Participants will first be asked to explore each case and think aloud. They will then be asked to follow tasks designed to be common tasks on a smartwatch. After each case, they will be asked follow-up interview questions about the different characteristics. The questions will target the participant's view of the importance and competence of each of the characteristics for that condition.

We will ask what users think of each combination, and discuss how they would improve or combine various features to make the interaction most usable for each application. We will then apply qualitative analysis to identify patterns among participants. The details of the study are work in progress.

Chapter 8
Conclusions and Future Work

In this chapter, we summarize the findings from our study, and give examples of possible future applications.
We then discuss future work and outline potential points of improvement for Gelly.

8.1 Findings: Zed-Zoom Promises a Broad Spectrum of Use

We designed and evaluated zed-zooming, a novel family of techniques based on emerging near-proximity sensing. We found that zed-zooming enables fluid, efficient zooming on displays of varying size. On displays so small that multitouch interactions are impeded, like smartwatches, zed-zooming far exceeds the abilities of the touchscreen standard pinch-to-zoom.

EZ-Zoom facilitates a pseudohaptic illusion of elasticity, which we theorize enhances proprioceptive position cues in the absence of actual contact. Participants' comments confirmed the illusion's presence and that it conveys information about the in-air interaction space above the control surface, and suggest that it enhanced their sense of control.

A damped region near the control range boundary may assist with fine control, but might also add to a perception of effort. We found that realistic auditory feedback on spatial height and the breakthrough point strengthened the illusion, and this facilitation may persist after the audio is disabled.

8.2 Applications

Once integrated into a full suite of zoom functions including pan, image-freeze and appropriate transitions, zed-zooming has obvious application for a broad range of media – browsing maps, reading text, and perusing photo collections. Users with impaired vision will benefit from quick zooming even on larger devices. Zed-zoom on larger devices will be useful for stylus interaction, quick image editing, same-hand zooming, and quick zooming during social media use or games. In situations where users need quick one-finger zooming while still seeing the content clearly, zed-zooming may become essential.

8.3 Next Steps in Developing Zed-zoom Interactions

Many extensions of zed-zooming are possible and promising.

8.3.1 Full-Zoom Function

Both panning and the transition between panning and zooming are crucial to the real-world usage of our zooming technique. The interaction technique also needs to support multi-scale zooming, locking the scale at a particular level, and starting the interaction by first zooming out. Obvious transition mechanisms include dwell and quick secondary gestures. While not implemented in the zed-zoom versions evaluated here, we are already exploring a number of approaches, and we have ongoing work on the different characteristics to consider while designing this interaction.

8.3.2 Combining Other Capabilities on the Sensor

We found that zed-zooming at an image-context level (e.g., zooming into a face) reduced the need for clutching.

Having multi-functional abilities in the sensor, such as pressure sensing and shear sensing along with touch and proximity, would undeniably give the sensor even more power. For example, what happens when the user wants to start by zooming out to a smaller size? A pressure-sensitive surface could provide a quasi-inverse of the finger-lift control, and continue to build the proprioceptive illusion of springiness.

With the addition of shear sensing, users could make small panning adjustments using the shear capabilities of the surface, rather than larger movements that are less ergonomic.

8.3.3 Stylus Input

Zed-zooming is a single-touch input, suggesting stylus input and an interaction technique that is consistent across a broad range of screen sizes.
However, writing and stroking with a stylus is ergonomically different from surface interactions with the finger; zed-zooming could be as well.

8.4 The Future of In-Air Sensing with Gelly

Our work demonstrates the potential value of a transparent touch and proximity sensor. There is an opportunity for Gelly and other such sensors to increase the interaction space of small displays with otherwise limited screen real estate.

More work needs to be done on the Gelly sensor before it can be usable on real devices. Our project has been useful in guiding the development of this new technology in a humanly usable way.

Our explorations revealed high potential value in sensing the interaction space above the surface to a height of approximately 3-5 cm, with 1 mm resolution. The Gelly sensor currently can detect a finger up to 20 mm away from the screen with 5 mm horizontal resolution. A priority should be to overcome current trade-offs between vertical range and horizontal resolution to attain more accuracy in this high-value space.

The sensing range of proximity sensors is currently too limited for most real-world applications. However, as the technology matures they will become more and more useful. In addition to detecting proximity above screens, these sensors could become ubiquitous on wearables, to compensate for the lack of a traditional touch screen. In the future, as flexible displays become more common, flexible sensors such as Gelly will make their way into wearables and into our daily lives.

Bibliography

[1] Anand Agarawala and Ravin Balakrishnan. Keepin' it real: Pushing the desktop metaphor with physics, piles and the pen. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06, pages 1283–1292, New York, NY, USA, 2006. ACM.

[2] appfour. Web browser for android wear. https://play.google.com/store/apps/details?id=com.appfour.wearbrowser&hl=en, 2017. Accessed: 2017-03-31.

[3] Apple. Use accessibility features on your apple watch. https://support.apple.com/en-ca/HT204576, 2016. Accessed: 2017-03-17.

[4] Jeff Avery, Mark Choi, Daniel Vogel, and Edward Lank. Pinch-to-zoom-plus: An enhanced pinch-to-zoom that reduces clutching and panning. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST '14, pages 595–604, New York, NY, USA, 2014. ACM.

[5] Virginia Braun and Victoria Clarke. Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2):77–101, 2006.

[6] Virginia Braun and Victoria Clarke. Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2):77–101, 2006.

[7] Alex Butler, Shahram Izadi, and Steve Hodges. Sidesight: Multi-"touch" interaction around small devices. In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology, UIST '08, pages 201–204, New York, NY, USA, 2008. ACM.

[8] Xiang 'Anthony' Chen, Julia Schwarz, Chris Harrison, Jennifer Mankoff, and Scott E. Hudson. Air+touch: Interweaving touch & in-air gestures. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST '14, pages 519–525, New York, NY, USA, 2014. ACM.

[9] Alex Colgan. How does the leap motion controller work? http://blog.leapmotion.com/hardware-to-software-how-does-the-leap-motion-controller-work/, 2017. Accessed: 2017-03-24.

[10] Fogale. Fogale sensation technology. http://fogale-sensation-technology.com/, 2016. Accessed: 2017-03-25.

[11] Nielsen Norman Group. Mouse vs. finger as input device. https://www.nngroup.com/articles/mouse-vs-fingers-input-device/, 2016.
Ac-cessed: 2017-03-25.[12] Jaehyun Han, Sunggeun Ahn, and Geehyuk Lee. Transture: Continuing atouch gesture on a small screen into the air. In Proceedings of the 33rd AnnualACMConference Extended Abstracts on Human Factors in Computing Systems,CHI EA ’15, pages 1295–1300, New York, NY, USA, 2015. ACM.[13] Chris Harrison and Anind K. Dey. Lean and zoom: Proximity-aware userinterface and content magnication. In Proceedings of the SIGCHI Conferenceon Human Factors in Computing Systems, CHI ’08, pages 507–510, New York,NY, USA, 2008. ACM.[14] Chris Harrison and Scott E. Hudson. Abracadabra: Wireless, high-precision,and unpowered nger input for very small mobile devices. In Proceedings ofthe 22Nd Annual ACM Symposium on User Interface Software and Technology,UIST ’09, pages 121–124, New York, NY, USA, 2009. ACM.[15] Khalad Hasan, Junhyeok Kim, David Ahlström, and Pourang Irani. Thumbs-up: 3d spatial thumb-reachable space for one-handed thumb interaction onsmartphones. In Proceedings of the 2016 Symposium on Spatial User Interac-tion, SUI ’16, pages 103–106, New York, NY, USA, 2016. ACM.[16] Ken Hinckley, Mary Czerwinski, and Mike Sinclair. Interaction and mod-eling techniques for desktop two-handed input. In Proceedings of the 11thAnnual ACM Symposium on User Interface Software and Technology, UIST’98, pages 49–58, New York, NY, USA, 1998. ACM.[17] Dominik P. Käser, Maneesh Agrawala, and Mark Pauly. Fingerglass: Ecientmultiscale interaction on multitouch screens. In Proceedings of the SIGCHIConference on Human Factors in Computing Systems, CHI ’11, pages 1601–1610, New York, NY, USA, 2011. ACM.[18] Sven Kratz and Michael Rohs. Hoverow: Exploring around-device interac-tion with ir distance sensors. In Proceedings of the 11th International Confer-ence on Human-Computer Interaction with Mobile Devices and Services, Mo-bileHCI ’09, pages 42:1–42:4, New York, NY, USA, 2009. ACM.55[19] Edward Lank and Son Phan. Focus+context sketching on a pocket pc. InCHI ’04 Extended Abstracts on Human Factors in Computing Systems, CHI EA’04, pages 1275–1278, New York, NY, USA, 2004. ACM.[20] Anatole Lécuyer, Jean-Marie Burkhardt, and Laurent Etienne. Feeling bumpsand holes without a haptic interface: The perception of pseudo-haptic tex-tures. In Proceedings of the SIGCHI Conference on Human Factors in Comput-ing Systems, CHI ’04, pages 239–246, New York, NY, USA, 2004. ACM.[21] Jinha Lee and Seungcheon Baek. Elastic cursor and elastic edge: Apply-ing simulated resistance to interface elements for seamless edge-scroll. InAdjunct Proceedings of the 28th Annual ACM Symposium on User InterfaceSoftware & Technology, UIST ’15 Adjunct, pages 63–64, New York, NY, USA,2015. ACM.[22] Regan L. Mandryk, Malcolm E. Rodgers, and Kori M. Inkpen. Sticky widgets:Pseudo-haptic widget enhancements for multi-monitor displays. In CHI ’05Extended Abstracts on Human Factors in Computing Systems, CHI EA ’05,pages 1621–1624, New York, NY, USA, 2005. ACM.[23] Nicolai Marquardt, Ricardo Jota, Saul Greenberg, and Joaquim A. Jorge. TheContinuous Interaction Space: Interaction Techniques Unifying Touch and Ges-ture on and above a Digital Surface, pages 461–476. Springer Berlin Heidel-berg, Berlin, Heidelberg, 2011.[24] Mark R. Mine, Frederick P. Brooks, Jr., and Carlo H. Sequin. Moving objectsin space: Exploiting proprioception in virtual-environment interaction. 
InProceedings of the 24th Annual Conference on Computer Graphics and Inter-active Techniques, SIGGRAPH ’97, pages 19–26, New York, NY, USA, 1997.ACM Press/Addison-Wesley Publishing Co.[25] Leap Motion. Windows vr development. https://developer.leapmotion.com/windows-vr, 2017. Accessed: 2017-03-28.[26] Camille Moussette and Richard Banks. Designing through making: Explor-ing the simple haptic design space. In Proceedings of the Fifth InternationalConference on Tangible, Embedded, and Embodied Interaction, TEI ’11, pages279–282, New York, NY, USA, 2011. ACM.[27] Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, andWendy Mackay. Mid-air pan-and-zoom on wall-sized displays. In Proceed-ings of the SIGCHI Conference on Human Factors in Computing Systems, CHI’11, pages 177–186, New York, NY, USA, 2011. ACM.56[28] Matei Negulescu, Jaime Ruiz, and Edward Lank. Zoompointing revisited:Supporting mixed-resolution gesturing on interactive surfaces. In Proceed-ings of the ACM International Conference on Interactive Tabletops and Surfaces,ITS ’11, pages 150–153, New York, NY, USA, 2011. ACM.[29] Andreas Pusch and Anatole Lécuyer. Pseudo-haptics: From the theoreti-cal foundations to practical system design guidelines. In Proceedings of the13th International Conference on Multimodal Interfaces, ICMI ’11, pages 57–64, New York, NY, USA, 2011. ACM.[30] Preece Rogers, Sharp. Interaction Design. Wiley, 2002.[31] Samsung. How do i use air gestures to control my samsung galaxy s4. http://www.samsung.com/us/support/answer/ANS00044009/, 2017. Accessed:2017-03-19.[32] Mirza Saquib Sarwar, Yuta Dobashi, Claire Preston, Justin K. M. Wyss,Shahriar Mirabbasi, and John David Wyndham Madden. Bend, stretch, andtouch: Locating a nger on an actively deformed transparent sensor array.Science Advances, 3(3), 2017.[33] Andrew Sears and Ben Shneiderman. High precision touchscreens: designstrategies and comparisons with a mouse. International Journal of Man-Machine Studies, 34(4):593 – 613, 1991.[34] Srinath Sridhar, Anders Markussen, Antti Oulasvirta, Christian Theobalt,and Sebastian Boring. Watchsense: On- and above-skin input sensingthrough a wearable depth sensor. In Proceedings of the 2017 CHI Conferenceon Human Factors in Computing Systems, CHI ’17, pages 3891–3902, NewYork, NY, USA, 2017. ACM.[35] M A Srinivasan, G L Beauregard, and D Brock. The impact of visual infor-mation on the haptic perception of stiness in virtual environments. In the5th Ann. Symp. on Haptic Interfaces for Virtual Environment and TeleoperatorSystems, IMECE, volume DSC:58, pages 555–559, Atlanta, GA, 1996.[36] Pierre Wellner. Digital desk. https://www.youtube.com/watch?v=S8lCetZ_57g, 1991. Accessed: 2017-03-28.[37] Robert Zeleznik, Andrew Bragdon, Ferdi Adeputra, and Hsu-Sheng Ko.Hands-on math: A page-based multi-touch and pen desktop for technicalwork and problem solving. In Proceedings of the 23Nd Annual ACM Sympo-sium on User Interface Software and Technology, UIST ’10, pages 17–26, NewYork, NY, USA, 2010. ACM.57[38] Yang Zhang, Junhan Zhou, Gierad Laput, and Chris Harrison. Skintrack:Using the body as an electrical waveguide for continuous nger trackingon the skin. In Proceedings of the 2016 CHI Conference on Human Factors inComputing Systems, CHI ’16, pages 1491–1503, New York, NY, USA, 2016.ACM.58Appendix APrior Art Analysis Provided toQualcommThe prior art analysis provided to Qualcomm reviews relevant academic literatureand patents. 
It also compares Gelly with existing products.591  				The	Gelly	Project:	Proximity-Driven	Human	Interaction	Concepts		Review	of	Prior	Art		Version 1.0 October 25, 2016  Haihua Zhang Dilan Ustek Prof. Karon Maclean Prof. John Madden  	602  Table of Contents  I. Review of Relevant Academic Literature ........................................................................................ 3	Search Parameters .............................................................................................................................. 3	Reference Descriptions ....................................................................................................................... 3	1. “Air+touch: interweaving touch & in-air gestures”, Chen, Harrison et al. UIST'14 (CMU) ...... 3	2. “Transture: Continuing a touch gesture on a small screen into the air”, Han, Ahn et al. CHI'15 (KAIST) ........................................................................................................................................... 3	3. “Pre-Touch Sensing for Mobile Interaction”, Hinkley, Heo et al. CHI'16 (Microsoft Research) ......................................................................................................................................................... 4	4. “The Continuous Interaction Space: Interaction Techniques Unifying Touch and Gesture on and above a Digital Surface”,  Marquardt, Jota et al. IFIP'11 (U of Calgary) ................................ 5	5. “Lean and Zoom: Proximity-Aware User Interface and Content Magnification”, Harrison, Dey. CHI'08 (CMU) ........................................................................................................................ 5	6. “Mid-air pan-and-zoom on wall-sized displays”,  Nancel, Wagner et al. CHI'11 (LRI - Univ Paris-Sud & CNRS) ......................................................................................................................... 5	7. “The perceptual structure of multidimensional input device selection”, Jacob, Sibert, CHI'92 (Naval Research Lab) ...................................................................................................................... 5	8. “Clutch-free panning and integrated pan-zoom control on touch-sensitive surfaces: the cyclostar approach”, Malacria, Lecolinet et al. CHI'10 (Telecom Paris Tech) ............................... 6	9. “AuraSense: Enabling Expressive Around-Smartwatch Interactions with Electric Field Sensing”, Zhou, Zhang et al. UIST'16 (CMU) ................................................................................ 6	10. “3D-Press: Haptic Illusion of Compliance when Pressing on a Rigid Surface”, Kildal. ICMI-MLMI'10 (Nokia Research Center) ................................................................................................. 6	Bibliography ........................................................................................................................................ 7	II. Review of Relevant Patents .............................................................................................................. 8	Search Parameters .............................................................................................................................. 8	A. References of Highest Current Relevance .................................................................................... 8	1. Hover-based interaction with rendered content (Microsoft) WO 2016025356 A1 ..................... 8	2.  
Non-occluded display for hover interactions (Amazon) US 20140282269 A1 ......................... 8	3. Input interaction on a touch sensor combining touch and hover actions (Cirque) WO 2014152560 A1 ............................................................................................................................... 8	4. Proximity sensor-based interactions  (MS) US 20150253858 A1 .............................................. 9	5.User input using proximity sensing (MS) US 9063577 B2 .......................................................... 9	6.Hover gestures for touch-enabled devices (MS) WO 2014143556 A1 ........................................ 9	7. One-handed gestures for navigating ui using touch-screen hover events (Motorola) US 20140362119 A1 ........................................................................................................................... 10	8. Multiple hover point gestures (MS) WO 2015100146 A1 ........................................................ 10	B. References of Potential Future Relevance .................................................................................. 11	9.Method and apparatus for hover-based spatial searches on mobile maps (Nokia) WO 2013124534 A1 ............................................................................................................................. 11	10. Hover-over gesturing on mobile devices (Google) US 8255836 B1 ....................................... 11	11.System and method for interacting with a touch screen interface utilizing a hover gesture controller (Honeywell) US 20140240242 A1 ............................................................................... 11	III. Comparisons of Gelly with Existing Products ........................................................................... 12	1. Comparison between Samsung Air View and Gelly ................................................................. 12	 613  I. Review of Relevant Academic Literature  Search Parameters Keywords:	hover,	touch	interface,	small	device,	gesture	recognition,	interaction,	proximity	sensing Primary	venues	searched: ● CHI:	Conference	on	Human	Factors	in	Computing	Systems.	The	ACM(Association	for	Computing	Machinery)	CHI	is	the	premier	international	conference	on	Human-Computer	Interaction.	It	is	held	by	ACM-SIGCHI	(Special	Interest	Group	on	Computer-Human	Interaction).	o [2]	CHI	2015 o [3]	CHI	2016 o [5]	CHI	2008 o [6]	CHI	2011 o [7]	CHI	1992 o [8]	CHI	2010 ● UIST:	The	ACM	Symposium	on	User	Interface	Software	and	Technology	(UIST),	a	premier	forum	for	innovations	in	human-computer	interfaces,	covers	areas	including	graphical	&	web	user	interfaces,	tangible	&	ubiquitous	computing,	virtual	&	augmented	reality,	multimedia,	new	input	&	output	devices,	and	CSCW(Computer-Supported	Cooperative	Work).	o [1]	UIST	2014 o [9]	UIST	2016 Reference Descriptions 1. “Air+touch: interweaving touch & in-air gestures”, Chen, Harrison et al. UIST'14 (CMU) Technology:	a	smartphone	+	a	depth	camera	to	simulate	future,	more	advanced	hover-capable	devices. This	paper	explored	interactions	that	interweave	touch	events	with	in-air	gestures.	They	classified	these	gestures	into	three	types:	Before	touch,	between	touch	and	after	touch. Relevant	gestures: ● Tap	&	Circle	in	the	air	for	continuous	zooming,	applied	in	map	application.	Raising	the	finger	up	before	touching	down	switches	between	pan/zoom	modes.	
They	choose	circle-to-zoom	as	a	way	to	solve	the	clutching	problem,	but	research	has	shown	that	linear	gesturing	is	a	more	natural	mapping	for	zooming,	and	it	could	be	difficult	for	users	to	draw	a	circle	in	the	air.	● Proximity	determined	scroll	rate:	flick	to	scroll,	then	finger	height	maps	the	speed	of	scrolling.		Similarly,	in	our	Quickreader	interaction	design,	we	map	the	distance	from	the	screen	to	the	speed	of	scrolling	through	text.	This	shows	to	us	that	it	is	a	viable	technique	to	map	distance	from	the	screen	to	variable	values.	2. “Transture: Continuing a touch gesture on a small screen into the air”, Han, Ahn et al. CHI'15 (KAIST) Technology:	A	concept	of	interaction	on	small	devices.	There	are	no	implementations	yet. 624  This	paper	proposed	the	concept	of	increasing	the	input	space	of	small	screen	devices	by	continuing	the	gesture	in	the	air.	Users	can	start	a	“Transture”	by	touch,	then	hover,	and	finally	end	with	a	touch. The	paper	explored	interaction	techniques	in	mid-air	for	small	screens,	but	compared	to	sticky	zooming,	their	interactions	are	not	efficient	as	they	take	multiple	steps	to	achieve	one	task.	The	interactions	need	to	be	done	in	a	small	space	and	require	precision,	which	is	hard	on	a	small	display.	Eg:	They	divide	a	circular	space	to	three	parts:	one	for	panning,	one	for	zooming,	and	one	is	the	dead	zone.	So	users	have	to	go	to	be	careful	of	which	zone	they	are	in	to	successfully	conduct	different	interactions. ● Panning:	Users	start	with	the	usual	panning	gesture.	Panning	continues	after	leaving	the	touched	state	and	continues	to	move	in	the	same	direction.	● Zooming:	Users	draw	a	circle	in	the	air	to	register	a	zoom,	the	center	of	the	drawn	circle	will	be	the	center	of	a	circular	region,	which	will	be	divided	into	three	circular	regions:	dead,	zooming	and	panning.	Users	then	move	their	finger	to	the	zooming	zone	and	gesture	a	circle	in	order	to	zoom	continuously.	The	direction	of	the	circular	gesture	determines	whether	a	zoom	in	or	out	is	registered,	and	the	distance	from	the	center	determines	the	zooming	speed.	The	user	can	move	to	the	panning	zone	to	pan	and	adjust	zooming	center.	● Marking	menu:	Users	draw	a	V	in	the	air	to	start	the	menu.	When	the	menu	pops	up,	users	can	then	drag	to	4	sides	out	of	the	watch	to	scroll	through	options	continuously.	Drawing	a	V	here	is	in	the	step	of	gesture	registration	because	such	a	shape	occurs	rarely	while	performing	panning	gestures	and	thus	can	serve	as	a	start	point	of	the	interaction.		 3. “Pre-Touch Sensing for Mobile Interaction”, Hinkley, Heo et al. CHI'16 (Microsoft Research) Technology:	Mobile	phone	with	a	self-capacitance	touchscreen	that	can	sense	multiple	fingers	above	a	mobile	device,	as	well	as	grip	around	the	screen’s	edges. The	study	explored	possible	interaction	techniques	on	a	self-capacitance	touchscreen,	and	put	emphasis	on	the	anticipatory	role	of	hovering. They	made	a	table	demonstrating	the	design	space	of	pre-touch	to	show	that	their	contribution	mainly	lay	in	Background	interaction	and	hover	interaction.	Background	interaction	here	means	to	characterize	the	context	of	activity	taking	place	‘behind'	the	foreground—such	as	sensing	the	user's	fingers	approach	the	screen	and	fading	in	a	context-appropriate	interface	to	suit. This	gave	us	the	idea	to	sort	our	contributions	in	a	similar	way	but	using	the	existing	and	popular	interaction	design	on	the	smart	phone	and	smart	watch,	and	compared	them	to	our	design	on	Gelly. 
3. “Pre-Touch Sensing for Mobile Interaction”, Hinckley, Heo et al. CHI'16 (Microsoft Research)
Technology: a mobile phone with a self-capacitance touchscreen that can sense multiple fingers above the device, as well as grip around the screen's edges.
The study explored possible interaction techniques on a self-capacitance touchscreen, with an emphasis on the anticipatory role of hovering.
They presented a table of the design space of pre-touch to show that their contribution lies mainly in background interaction and hover interaction. Background interaction here means characterizing the context of activity taking place 'behind' the foreground, such as sensing the user's fingers approaching the screen and fading in a context-appropriate interface to suit.
This gave us the idea to organize our contributions in a similar way, but using the existing and popular interaction designs on the smartphone and smartwatch and comparing them to our designs on Gelly.
Most of the interactions are not suitable for a smartwatch, as they often require multi-touch and the detection of grip.
They classified interactions into three categories:
● Anticipatory reactions - video controls that fade in/out as the finger approaches/leaves, while grip sensing changes the placement of the controls. The Calm web browser presents users with a clean web page; hyperlinks and play controls only show up when fingers approach the display.
● Retroactive interpretations - dispatching a tap to either large or small targets by inspecting the finger's approach trajectory. The same technique can be used to discriminate flick vs. select, but their user study showed this design did not work well.
● Hybrid touch+hover gestures - users can select a file by tapping, and menus show up at the position where the other finger hovers; a soccer game lets users strike the ball by touching and move over the ball using proximity.

4. “The Continuous Interaction Space: Interaction Techniques Unifying Touch and Gesture on and above a Digital Surface”, Marquardt, Jota et al. IFIP'11 (U of Calgary)
Technology: an interactive horizontal touch-sensitive SmartBoard surface (tabletop) and a Vicon motion tracking system composed of 8 high-speed infrared (IR) cameras.
The continuous interaction space is the unification of the touch and hover interaction modalities. The authors argue that the space above the screen is a continuum and aim to explore the space between hover and touch.
They proposed a video navigation method in which lifting the finger improves scale precision: as the hand goes higher, the slider is rescaled to a larger size and the user gains more precise control of the sliding bar (one possible reading of this is sketched below).
We had previously been designing a proximity-controlled video player, since most smartwatches currently lack one. However, this study explores interactions on large displays while ours targets a small display, where such precision could be hard to achieve.
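The precision technique in [4] can be read as a height-dependent control-display gain: the higher the hand, the more hand travel is needed per unit of slider value. A minimal sketch under that reading (the working range and gain values are assumptions, not numbers from the paper):

// Higher hands get a finer gain, i.e. more hand travel per unit of slider value.
function sliderGain(heightMm: number): number {
  const coarseGain = 1.0;                               // gain at the surface (assumed)
  const fineGain = 0.1;                                 // gain at the top of the range (assumed)
  const t = Math.min(Math.max(heightMm / 300, 0), 1);   // assumed 300 mm working range
  return coarseGain + t * (fineGain - coarseGain);
}

// Apply the gain to lateral hand movement to update a 0-100 slider.
function updateSlider(value: number, handDeltaMm: number, heightMm: number): number {
  const next = value + sliderGain(heightMm) * handDeltaMm;
  return Math.min(Math.max(next, 0), 100);
}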
5. “Lean and Zoom: Proximity-Aware User Interface and Content Magnification”, Harrison, Dey. CHI'08 (CMU)
Technology: a computer with a built-in camera used to estimate the user's lean proximity.
This paper used the Lean and Zoom system to detect a user's proximity to a computer display and magnify the on-screen content in proportion to that proximity.
In the user study, they found that users described the technique as natural and intuitive and that it could improve their performance and comfort. This finding supports our claim that sticky zoom offers a more natural mapping and is thus easier to use than pinch-to-zoom.

6. “Mid-air pan-and-zoom on wall-sized displays”, Nancel, Wagner et al. CHI'11 (LRI - Univ Paris-Sud & CNRS)
Technology: a display wall consisting of 32 high-resolution 30” LCDs laid out in an 8x4 matrix, 5.5 meters wide and 1.8 meters high, with a VICON motion capture system to track passive IR retroreflective markers.
In [8] the authors claim that using circular gestures to pan and zoom avoids clutching, so that users should feel the interactions are smoother and uninterrupted. However, in this paper the authors found that linear gestures were more efficient than circular gestures, because there is no surface to guide a circular gesture in mid-air.

7. “The perceptual structure of multidimensional input device selection”, Jacob, Sibert, CHI'92 (Naval Research Lab)
Technology: computer and three-dimensional tracker.
Jacob and Sibert claim that panning and zooming are integrally related: the user does not think of them as separate operations, but rather as a single, integral task like “focus on that area over there”. This supports sticky zoom, which allows users to pan and zoom at the same time using one finger, whereas in pinch-to-zoom or crown-zoom (as used on the Apple Watch) users have to zoom first and then pan around, which feels clunky rather than smooth.

8. “Clutch-free panning and integrated pan-zoom control on touch-sensitive surfaces: the cyclostar approach”, Malacria, Lecolinet et al. CHI'10 (Telecom ParisTech)
Technology: a wall-mounted SmartBoard, a vertical front-projection 121 x 90.5 cm interactive whiteboard with a display resolution of 1024 x 768 px.
The study design is cited by [6], the mid-air pan-and-zoom on wall-sized displays study, but the outcome of the comparison between circular and linear gestures differs from [6]; the reason is explained in [6]. We learned from their user study and designed similar tasks: zoom a circle enough to make it turn green, repeating until the user finds the right target. Their task required users to zoom out of a circle to see the colour change.

9. “AuraSense: Enabling Expressive Around-Smartwatch Interactions with Electric Field Sensing”, Zhou, Zhang et al. UIST'16 (CMU)
Technology: a smartwatch augmented with electric field (EF) sensing (a Microchip MGC3130 electric field sensing chip).
This paper used electric field sensing on a smartwatch to combine skin tracking, hovering gestures and single-hand gestures. The authors quantified the basic feasibility and accuracy of the six example interaction modalities and showed that the approach allows high-fidelity sensing. However, they did not explore the use of proximity; the main focus is on periphery control and above-screen gestures.

10. “3D-Press: Haptic Illusion of Compliance when Pressing on a Rigid Surface”, Kildal. ICMI-MLMI'10 (Nokia Research Center)
Technology: a graphics tablet and a stylus with a vibrotactile actuator.
The paper explored methods to create an illusion of compliance on a rigid surface using vibration and friction. Although it did not use visual or audio cues similar to ours, the authors still managed to create the illusion successfully. We plan to partially adopt the way they conducted their user study, in terms of controlling the different variables that create the illusion: they varied four design parameters to create 16 settings and tested them one by one, and the results showed that most settings were able to provide a robust illusion of compliance. We also decided to test multiple conditions by varying the animation and sound, to show that Sticky Zoom can create the feeling of stickiness under different settings (e.g., when the user is outside and cannot play sound).

Bibliography
1-Xiang 'Anthony' Chen, Julia Schwarz, Chris Harrison, Jennifer Mankoff, and Scott E. Hudson. 2014. Air+touch: interweaving touch & in-air gestures. In Proceedings of the 27th annual ACM symposium on User interface software and technology (UIST '14). ACM, New York, NY, USA, 519-525. DOI=10.1145/2642918.2647392 http://doi.acm.org/10.1145/2642918.2647392
2-Jaehyun Han, Sunggeun Ahn, and Geehyuk Lee. 2015. Transture: Continuing a Touch Gesture on a Small Screen into the Air. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '15). ACM, New York, NY, USA, 1295-1300.
DOI=http://dx.doi.org/10.1145/2702613.2732849  3-Ken Hinckley, Seongkook Heo, Michel Pahud, Christian Holz, Hrvoje Benko, Abigail Sellen, Richard Banks, Kenton O'Hara, Gavin Smyth, and William Buxton. 2016. Pre-Touch Sensing for Mobile Interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 2869-2881. DOI: http://dx.doi.org/10.1145/2858036.2858095  4-Marquardt N, Jota R, Greenberg S, et al. The continuous interaction space: interaction techniques unifying touch and gesture on and above a digital surface[C]//IFIP Conference on Human-Computer Interaction. Springer Berlin Heidelberg, 2011: 461-476. http://link.springer.com/chapter/10.1007/978-3-642-23765-2_32#page-1   5-Chris Harrison and Anind K. Dey. 2008. Lean and zoom: proximity-aware user interface and content magnification. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '08). ACM, New York, NY, USA, 507-510. DOI=http://dx.doi.org/10.1145/1357054.1357135  6-Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay. 2011. Mid-air pan-and-zoom on wall-sized displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 177-186. DOI=http://dx.doi.org/10.1145/1978942.1978969  7-Robert J. K. Jacob and Linda E. Sibert. 1992. The perceptual structure of multidimensional input device selection. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92), Penny Bauersfeld, John Bennett, and Gene Lynch (Eds.). ACM, New York, NY, USA, 211-218. DOI=http://dx.doi.org/10.1145/142750.142792  8-Sylvain Malacria, Eric Lecolinet, and Yves Guiard. 2010. Clutch-free panning and integrated pan-zoom control on touch-sensitive surfaces: the cyclostar approach. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). ACM, New York, NY, USA, 2615-2624. DOI=http://dx.doi.org/10.1145/1753326.1753724  9-Junhan Zhou, Yang Zhang, Gierad Laput, and Chris Harrison. 2016. AuraSense: Enabling Expressive Around-Smartwatch Interactions with Electric Field Sensing. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 81-86. DOI: http://dx.doi.org/10.1145/2984511.2984568   10-Johan Kildal. 2010. 3D-press: haptic illusion of compliance when pressing on a rigid surface. InInternational Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI '10). ACM, New York, NY, USA, Article 21 , 8 pages. DOI: http://dx.doi.org/10.1145/1891903.1891931  668  II. Review of Relevant Patents Search Parameters Disclaimer:	We	do	not	have	experience	in	doing	patent	search	and	did	not	understand	everything	fully.	The	following	is	only	meant	to	help	the	patent	experts.  Search	keywords:	hover	+	interaction	+	proximity	+	sensor/sensing	+	gesture Search	tool:	Google	Patent	Search A. References of Highest Current Relevance 1. Hover-based interaction with rendered content (Microsoft) WO 2016025356 A1 https://patents.google.com/patent/WO2016025356A1/en 1.	
A method comprising: rendering content on a display; detecting an object in front of, but not in contact with, a front surface of the display; determining, at least partly in response to detecting the object, a location on the front surface of the display that is spaced a shortest distance from the object relative to distances from the object to other locations on the front surface of the display; determining a portion of the content that is rendered at the location or within a threshold distance from the location; and displaying, in a region of the display, a magnified window of the portion of the content.
The claim describes a zooming-like interaction in which the user lifts an object up to zoom out and brings it closer to the screen to zoom in, which is the reverse direction from Sticky Zoom (8). They also claim hover panning (10) and a method of zooming when a finger is no longer detected (11).

2. Non-occluded display for hover interactions (Amazon) US 20140282269 A1
https://patents.google.com/patent/US20140282269A1/en
Abstract: when users hover, the displayed information can be an enlarged version of the element, to help the user disambiguate selection among multiple elements.
However, this is not mentioned in the claims, and in the description their design shows a larger hover box for the occluded key while the user is typing on a keyboard.

3. Input interaction on a touch sensor combining touch and hover actions (Cirque) WO 2014152560 A1
https://patents.google.com/patent/WO2014152560A1/en
Abstract: a system and method for defining a gesture to be any combination of touch and hover actions, the touch and hover actions being combined in any order and any number of discrete touch and hover actions that define a single gesture or a series of gestures.
They claim the method of combining touch and hover gestures.

4. Proximity sensor-based interactions (MS) US 20150253858 A1
https://patents.google.com/patent/US20150253858A1/en
This patent claims receiving values from one to three proximity sensors and performing operations based on those values. They also claim a sensor comprising a capacitive display.
They claim game, map, security and authentication applications, and claim velocity detection.
Claimed computing devices: laptop, smartphone, tablet, portable media player, or video game device. Smartwatches are not mentioned.

5. User input using proximity sensing (MS) US 9063577 B2
https://patents.google.com/patent/US9063577B2/en
This patent does not include any specific gestures or interaction methods, but claims the sensing process and the composition of the device itself. The devices comprise multiple sensors to detect user input in an interaction area that extends outwardly from a surface of the device's casing, at least in the plane of a display portion or on the sides of the device.
Gestures are detected by creating sensing images and mapping them to operations that control the program.

6. Hover gestures for touch-enabled devices (MS) WO 2014143556 A1
https://patents.google.com/patent/WO2014143556A1/en
This patent claims a method of detecting hover gestures, including the fingers' positions, proximity, and movement. Hover gestures include a finger tickle, a circle gesture, and holding a finger in a fixed position for a predetermined period of time.
The claims include associating the finger position with an icon displayed on the touch screen and displaying additional information associated with the icon.
--- Magnify, HoverPeek and FolderPeek all require hovering for a while and associating the icon with the finger position.
Their design targets the mobile phone and uses hovering to see recent/missed calls in a calling app and to see the calendar items for the current day.

7. One-handed gestures for navigating ui using touch-screen hover events (Motorola) US 20140362119 A1
https://patents.google.com/patent/US20140362119A1/en
This patent claims a zoom function driven by proximity: detecting the presence of a user digit in proximity to the screen, entering a hover-zoom mode, using the distance between the digit and the screen to determine a zoom factor for the display, and using the location of the user digit to determine a direction to pan on the display, including panning the display/viewport by the amount of the digit's movement (a sketch of this family of mappings follows after the patent list below).
It is almost the same as Sticky Zoom, but they start the zoom not with a touch but with persistent hovering for a predetermined period of time.

8. Multiple hover point gestures (MS) WO 2015100146 A1
https://patents.google.com/patent/WO2015100146A1/en
In the description they mention: the gather gesture may be used to reduce screen brightness, to limit a social circle with which a user interacts, to make an object smaller, to zoom in on a picture, to gather an object to be lifted, to crush a virtual grape, to control device volume, or for other reasons.
Claims: detecting a plurality of up to 10 hover points, without using a camera or a touch sensor, and producing independent categorization of data and tracking data. They claim multiple hover point gestures including gathering, spreading, cranking, rolling, ratcheting, poof, and slingshot gestures.

B. References of Potential Future Relevance
We found several references that are of lower relevance to our current project but will probably be useful in the future.

9. Method and apparatus for hover-based spatial searches on mobile maps (Nokia) WO 2013124534 A1
https://patents.google.com/patent/WO2013124534A1/en
This one is about maps and is not related to our current design, but it should be useful if we implement map apps in the future.

10. Hover-over gesturing on mobile devices (Google) US 8255836 B1
https://patents.google.com/patent/US8255836B1/en
This patent claims two-handed touch+hover gestures on a mobile device. It does not cover Gelly, since we are only using one-handed gestures at the moment.

11. System and method for interacting with a touch screen interface utilizing a hover gesture controller (Honeywell) US 20140240242 A1
https://patents.google.com/patent/US20140240242A1/en
A system and method for recognizing whether an interaction is intentional, in order to reduce inadvertent interactions. We will probably draw on it for the future development of Gelly.
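Several of the claims above tie the zoom factor to finger distance and the pan target to finger position (the Microsoft claim in A.1 magnifies as the finger approaches; the Motorola claim in A.7 zooms by distance and pans by digit location). A minimal sketch of that family of mappings, written in the Sticky Zoom direction (lift to zoom in); the ranges, maximum scale and pan rule are assumptions for illustration, not text from any patent:

interface ZoomState { scale: number; panXPx: number; panYPx: number; }

// Map height above the screen (mm) to a zoom factor between 1x and maxScale.
function heightToScale(heightMm: number, maxHeightMm = 100, maxScale = 4): number {
  const t = Math.min(Math.max(heightMm / maxHeightMm, 0), 1);
  return 1 + t * (maxScale - 1);
}

// Finger x/y (screen px) picks the zoom centre; the offset grows with scale so the
// touched point stays roughly under the finger (sign depends on how pan is applied).
function hoverZoom(fingerXPx: number, fingerYPx: number, heightMm: number,
                   viewportW: number, viewportH: number): ZoomState {
  const scale = heightToScale(heightMm);
  return {
    scale,
    panXPx: (fingerXPx - viewportW / 2) * (scale - 1),
    panYPx: (fingerYPx - viewportH / 2) * (scale - 1),
  };
}

Inverting the mapping in heightToScale (zoom in as the finger approaches) would give the direction claimed in the Microsoft patent instead.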
III. Comparisons of Gelly with Existing Products

1. Comparison between Samsung Air View and Gelly
The comparison covers five columns: App, Video, Photo, Dial, and Text.

Air View:
● App: Hover over a day inside a month-view calendar to preview that day's events. Hover over an album inside the photo gallery to preview several photos. Hover over a message to preview the whole message.
● Video: Hover over a video file to preview its content. Slide-hover over the control bar to preview a given point of the video.
● Photo: N/A
● Dial: Speed dial preview: hover over a key on the keypad to see which contact it is assigned to.
● Text: Hover over a part of the text in a web page to magnify it.

Gelly:
● App: Magnify: hovering over apps makes them bigger and easier to touch (necessary for the small screen). Hoverpeek: hover over any app icon to preview notifications, and click to open directly to certain content. Folderpeek: hover over a folder to preview the apps contained in the folder, and click to directly open an app.
● Video: N/A
● Photo: Sticky Zoom: touch the photo to start, lift the finger to zoom in; get closer to the screen to zoom out, move further from the screen to zoom in.
● Dial: N/A
● Text: QuickRead: hover on the left or right side to scroll a sentence from left to right, while hover height determines the scrolling speed.

Summarization: In short, the Samsung features are based on previewing and magnification, but on a smartphone they are nice-to-have features rather than things that are absolutely necessary; some hover functions are slow and unnecessary enough that users would rather click. Previewing is a trend common to Air View and our interactions, but the comparison above shows why we claim ours are different and useful. Our interaction techniques focus on replacing clicking with hovering to make controlling apps and folders more efficient, and on improving the performance of tasks that are difficult on small displays, such as clicking on a small icon, zooming, and reading.

Appendix B
Study Proposal
This appendix contains the study proposal used in the experiment discussed in Chapter 6.

Zoom Study
Contents: Researchers; Purpose; Contributions; Research Questions; Expected Outcomes; Related Work; Data sketch; Protocol (Illusion; Interaction Usability: Objective Performance and Qualitative Feedback; Performance; Interview); Study Design Description (Participants; Independent Variables; Dependent Variables; Session Time Budget; Apparatus); Analysis; Overview and Rationale for Study Approach; Target Publications; Ethics (Ethics Form Number; Video Recording?; Declarations; Amendments Required); Bibliography.

Researchers
Lead: Dilan Ustek, Karon MacLean
Additional Team Members: Kevin Chow, Haihua Zhang
Document prepared by: Dilan Ustek, Haihua Zhang (with input from MacLean)

Purpose
In this user study, we compare the StickyZoom interaction techniques with the standard pinch-to-zoom technique on a smartwatch in terms of effectiveness and preference. We also want to find out how the visual cues for zooming and the audio influence the feeling of connectedness/elasticity between the finger and the image.

Contributions
Our contributions will be:
1. An exploration of the design space of proximity zooming interactions and the design of the StickyZoom interaction technique, with variations on how it could be implemented: a proximity-based zooming technique designed to facilitate zooming on direct-touch displays that (a) works on displays that are small compared to the user's finger reach; (b) works with just one finger; and (c) minimizes occlusion.
2. A usability evaluation (performance and qualitative feedback) of multiple StickyZoom implementations, compared with the current standard touch-display zoom technique (pinch-to-zoom).
3. Insight into the conditions under which a pseudo-haptic illusion can be induced for the StickyZoom in-air (non-contact) interaction, and data on its impact on performance when it is in effect.

Research Questions
RQ1: What are the usability and user experience of the StickyZoom interaction techniques, relative to the standard pinch-to-zoom technique? Which techniques, out of the regular zooming technique and all conditions of the sticky zooming technique:
- have the highest performance in a zooming task? (timing)
- have any apparent learnability issues? (observation)
- are preferred by users? (ranking)
- are considered easy to use? (ranking)
- are considered pleasant? (ranking)

RQ2: How do audio cues and changes in the C/D ratio for zooming influence the pseudo-haptic illusion of elasticity (connectedness) of the finger to the image on a screen? (subjective rating)

Expected Outcomes
We expect to learn:
With respect to the "sticky" pseudo-haptic illusion:
A. Whether there is still a feeling of elasticity without the image of a spherical ball.
B. Whether there is a feeling of elasticity with or without the rubber-band-snapping audio when the image shows a person rather than a spherical ball.
With respect to StickyZoom (and variations) performance relative to the standard:
C. On average, which technique is the most efficient in terms of how long it takes to perform one task.
D. Which technique is the most pleasant for users.
E. Whether there are any learnability issues to be concerned about.
F. Which technique users prefer.
G. Which technique users consider easy to use.
H. How users describe the experience of each technique.

Related Work

Pinch-to-zoom on Small Screens
On nearly all smart devices with a touch screen, the most popular way of zooming is pinch-to-zoom. However, as some researchers [PTZ, 1] have pointed out, "precision-pointing and occlusion problems" are two of its obvious problems. For zooming on smartwatches, precision matters less, because typical zooming situations such as browsing photos do not require a high degree of precision; the occlusion problem, however, becomes even more serious on small-screen devices, and StickyZoom is mainly aimed at fixing it. When people zoom on screens, they tend to zoom in and out many times, switching between details and high-level context [PTZ, 2, 3, 4]. This requires users to repeat the same gesture continuously, which can be tiring and time-consuming; when the screen is small, there is also a high rate of false-positive operations. Thus, another important goal of StickyZoom is to let users zoom in and out efficiently, without the constraint of the screen size.
In [PTZ], the results of the experiment showed that zoom acceleration reduced effort, i.e., the number of pans and clutches, but did not reduce task time; the reduction in task time appeared only later, in the longitudinal study, after users had gradually become used to the new technique. We believe StickyZoom can help reduce task time during the experiment as well.
To address the occlusion problem, researchers have been exploring the space around smartwatches. [SkinTrack 14] proposed interactions that allow users to "zoom in and out by scrolling on the hand". The technology in [SideSight 13] allows users to move fingers on the surface around the screen, enlarging the space available for pinch-to-zoom. However, neither solves the clutching problem: users still need to make repeated zooming gestures to reach the desired zoom ratio.
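For reference, the pinch-to-zoom baseline we compare against can be described in a few lines: the zoom factor is the ratio between the current and the initial distance of the two touch points. This is the standard formulation, sketched here rather than taken from any particular toolkit:

interface Touch { x: number; y: number; }

function distance(a: Touch, b: Touch): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Scale relative to the moment the second finger went down.
function pinchScale(start: [Touch, Touch], current: [Touch, Touch], startScale = 1): number {
  return startScale * (distance(current[0], current[1]) / distance(start[0], start[1]));
}

On a smartwatch the two fingers have very little room to move apart and occlude most of the content, which is exactly the limitation motivating StickyZoom.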
In-air Technologies and Interactions
In 2011, [Continuous Interaction Space 11] pointed out that there is a "rich interaction space between 'on the surface' and 'above the surface'", and named it the continuous interaction space. The authors believed that in this space, three-state interactions could be designed to avoid occlusion and improve precision, such as lifting fingers to adjust scale precision when controlling a video timeline. Since then, many hover technologies, interaction techniques and concepts have been developed, such as [Air+Touch 6] and [PreTouch 7]. [6] used a depth camera mounted behind a smartphone to enable in-air gestures. They proposed a gesture vocabulary called "Air+Touch", dividing interaction gestures into three phases: before, between, and after touch. Based on this concept, they designed a series of interactions including non-clutching scrolling and zooming. The [Pre-Touch Sensing 7] paper mainly contributed background-sensing interactions: their self-capacitance touchscreen was able to sense hovering and gripping above and around the mobile phone, and to support "graceful degradation to a one-handed version of the technique". [Transture 8] focused more on hover interactions on small touchscreens and designed in-air interaction techniques such as panning, zooming and activating a menu on a smartwatch with a depth camera, in order to remove the constraint of the screen size. They also note that pinch-to-zoom is hard to operate on small screens because there is insufficient space for a multi-finger gesture.

Circular vs. Linear Zooming
Among the hover interactions designed so far, there are two existing zooming techniques.
Transture zooming starts with drawing a circle on the screen and divides the interaction space into three circular regions: panning, zooming, and dead. In the zooming region, the distance from the zooming centre (also the circle's centre) determines the zooming ratio, and the direction of the circling gesture determines the zooming direction. If users move their finger to the panning region, they can pan while zooming [8]. This interaction solves the problem of the limited interaction space for zooming, but according to the results of their experiment, "participants wanted to disable panning function in the zooming zone", which implies that panning and zooming are not well connected in this design, possibly because the technique is complicated and it is hard for users to handle two modes at the same time.
[Air+Touch 6] proposed a zooming technique based on the after-touch concept: users start zooming by touching the screen, then lift up high and draw circles in the air to zoom continuously. The zooming direction is determined by the circling direction. Users can quit zoom mode by tapping the screen again or by making a non-cyclical motion for a short period.
In contrast to the linear gesture of StickyZoom, both of the interaction techniques above are based on circular finger movement. However, research has demonstrated a possible weakness of circular gestures: in [Mid-air pan and zoom 10], researchers found that when participants performed zooming tasks, linear gestures were generally faster than circular ones, especially in the 2D-surface and 3D free-hand conditions. They concluded that "the lack of a surface to guide the gestures significantly degrades the technique's usability".
What's more, [Lean and Zoom 12] showed that "the notion of leaning forward for visual enlargement is natural", and we believe that lifting the finger up to see objects more clearly is similarly natural and intuitive, which should give StickyZoom higher learnability and memorability than the circular gestures above.
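Concretely, the proximity zooming conditions compared in this proposal differ in the shape of the height-to-scale mapping. The sketch below contrasts a linear mapping with an "elastic" one whose gain tapers near the top of the range and snaps back past a release threshold; the function shapes and constants are illustrative assumptions, not the exact curves used in the prototype:

// Linear condition: scale grows proportionally with finger height.
function linearScale(heightMm: number, maxHeightMm = 100, maxScale = 4): number {
  const t = Math.min(Math.max(heightMm / maxHeightMm, 0), 1);
  return 1 + t * (maxScale - 1);
}

// Elastic condition: same range, but the gain tapers as the finger nears the top
// (ease-out), suggesting a stretching rubber band; past releaseMm the "band"
// breaks and the image snaps back to 1x.
function elasticScale(heightMm: number, maxHeightMm = 100, maxScale = 4,
                      releaseMm = 110): number {
  if (heightMm > releaseMm) return 1;                    // snap back to initial size
  const t = Math.min(Math.max(heightMm / maxHeightMm, 0), 1);
  const eased = 1 - Math.pow(1 - t, 2);                  // ease-out: slower change near the top
  return 1 + eased * (maxScale - 1);
}

Because the elastic mapping deliberately mismatches finger motion and image motion near the top of the range (a change in the C/D ratio), it supplies the visual ingredient of the pseudo-haptic stretchiness that the next subsection tries to reinforce with audio.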
Illusion and User Experience Despite solving the problem of occlusion and clutching, we also want to create a more pleasant experience of zooming by trying to create an illusion during the zooming process. In [3D Press 15], the author used kinaesthetic cues and cutaneous cues to create a perceptual illusion that involves “pressing on a rigid surface and perceiving that the surface is compliant”. According to previous research, perceptual illusions are defined as “systematically-originated errors in the perception of figures or scenes, which are observed in almost all people”.[3D Press, Haptic Perceptual Illusions 16]. In their results, 80.3% of the descriptions showed that the user perceived the illusion of compliance, and proved the multimodal illusion was robust under various conditions.  In our design, we want participants to feel a certain extent of connectedness between their finger and the pictures in the screen through the synchronization between the visual cues on the screen and the kinaesthetic cues of their fingers.  Data sketch ● Transcripts of the think-aloud process of the initial familiarization process and during the tasks: note the specific words implying illusion and feeling. (Qualitative+Subjective) ● Ranks of elasticity for the ball image. photo, and with/without audio. (Quantitative + Subjective) ● Ranking results for all interaction techniques (Quantitative + Subjective) ● Time spent on task through sticky and regular zooming techniques (Quantitative+Objective) ● Interview: opinions of the user experience, rankings of preference, ease of use, and pleasantness, questions and places need to be improved. (Qualitative+Subjective)  Protocol [10 min intro- including signing] Initial explanation:  In this experiment, we’re studying ways that you can use proximity (your height above the surface) to zoom the screen contents in and out.  In front of you is a smartwatch and a smartphone; we’ve created a prototype setup that allows us to measure your finger’s height above their screens, to mimic some new technology that we think will be coming to mobile devices  before long.  We are asking you to focus on the experience of the interaction technique rather than the polished quality of this early prototype - which I must warn you, does not always work perfectly. In fact, you may experience some glitches, and I’ll ask you to do your best to ignore these. If at any time you’re not sure whether what you see is a glitch or an intended behavior, please ask.  4 77 - Leap doesn’t work great. try not to move your hands around too much while interacting. If at any point you see a hand image on the screen it means you need to remove your hand and bring it back, show it [LIKE THIS] in front of it, then point your index finger to interact. - We ask that you do not put your fingers too close to the leap as it will not see your hands if they are too close, so moving upwards on the device only will prevent this problem.  1. Illusion 15 min (Smartphone: Samsung galaxy S7)   Audio No Audio Ball Ball with audio Ball without audio Photo Photo with audio Photo without audio  Conditions will be fully randomized for each participant. Each participant will encounter and rate each condition one at a time, number of repetitions being once.   Users will be given smartphones with an image of a ball, a group photo, or some text and will be asked to zoom in by touching the image, and then pulling their finger up as their wrist continues to be set on the same platform as the phone. 
They will be asked to point one  finger as to focus their attention to the illusion and feeling rather than the gestures (learned from the pilot 0 that we have done). For both of the image conditions, there will be two conditions: with and without audio feedback (that is, the sound of an elastic balloon stretching to a max point, and then going back to the initial size when it hits the top limit) as the image is pulled up and as it snaps back after a certain threshold. After about 20 seconds of exploration, participant will be asked:  - Can you describe what you feel as you move your finger up and down above the image? Is there anything it reminds you of? - If first one:​ “When you scroll with a mouse and the cursor moves either slower or faster relative to your hand movement, it can make the mouse feel “heavier” or “lighter” - or when you use a scrollbar to scroll up and down and if it is slower to react  when you hit the top and bottom, it feels magnetic or sticky. he have you ever felt that or can you imagine what that would be like?  This is what we call a “pseudo-haptic illusion” - when you think you feel something even you don’t really. It’s the kind of thing we’d like to know if you experience any effects here.” - On a scale of 0-10, 0 being no effect, 10 being a very strong effect, how strong was this effect? ( ON PAPER) … - Are there places in the space where your movement seems easier or harder, or does it 5 78always feel about the same?  [if yes]: Where in the space is it more difficult/easy to move through? - If they say it “breaks/falls/drops” etc, show me, roughly, at what point [height above the surface]  it is going to break/fall? If they didn’t say anything reset. -  ... - If user is changing speeds around the slow region, ask:​ Why did you slow down/increase your speed/stop at that point? ... - If there was auditory feedback:​ “What does the auditory feedback remind you of?”  After they are done with all 4, they will be shown all of them again and given the opportunity to revise as they would like:    Metrics: - Rating from 0-4 for each condition. - User’s description of what they feel as they move their fingers up and down, and if it reminds them of anything, unconstrained think aloud, capturing all the words. - User’s reactions captured from the unconstrained think aloud. - Whether or not user can identify the slow region (before the break of the elastic rubber band) - Leap measurements of speed of hand in different regions - Measurement of distance where the user predicts the image to be at the limit range  2. Interaction Usability: Objective Performance and Qualitative Feedback (30 mins) Performance    Metrics: All tasks will be automatically timed from the start of the image-touch. The number of clutches, overshoots, and ​dropping​ of the image while zooming in the elastic condition will be recorded.   Task: A square will appear on the screen as well as a larger square frame. Your task is to zoom into the small square such 6 79that it fits into the frame’s borders. As long as it is somewhere in the range of the red border, it will turn yellow. You need to keep it in the yellow range until it turns green which is when it will restart to the beginning (frame turns red again). If you pass the border, you can always go back down.   The task will be repeated for two other blocks of different sizes such that in total there will be a small, medium, and a large block (to have different zooming amount). 
There will be 5 trials per size which means 15 trials, in a randomized way.  3 sizes * 5 = 15 trials for a given square of a certain“border”. Then, they will be asked to repeat these 15 trials for a different border such that there is a larger range of accepted “success”.  This will be repeated for 4 conditions:  - Pinch-to-zoom [ ​baseline​] - Linear proximity zooming without Audio (no illusion expected condition) - Elastic proximity zooming with Audio: the sound of an elastic balloon stretching to a max point, and then going back to the initial size when it hits the top limit - Elastic proximity zooming without Audio  This whole thing will be repeated for smartwatch again.  Estimated time: 2-3 minutes per condition which means about 12 minutes for device. X2 devices = ~24 minutes. + 10 minutes for questions which equals to about ~30 minutes.  Interview  (10 mins)  -> Goal: To compare user experience of zooming techniques  Users will be asked verbally and will be allowed to retry the above 4 conditions.  - Rank all 4 zooms in order of ease of use. Why?  - Rank all 4 zooms in order of pleasantness. Why? - Rank all 4 zooms in order of preference. Why?  - In which case do you think are they useful for? You don’t have to give an answer to each condition.   - What do you think of auditory feedback? Did you find it helpful / annoying?  How would you change the auditory feedback if you could?  - Any questions?  7 80  Study Design Description   Participants  Number: ​ 10 Description ​: Adults aged 18-50 who have had min 2 years of smartphone usage experience. Incentive​: $15/hr Rationale:​ Participants should have familiarity with the smartphone and how to interact with a touch screen device.  Independent Variables  Condition (all 4 stated above - including pinch-to-zoom) Dependent Variables  Time it takes to accomplish each task. All rankings, ratings, and descriptions stated by participant as described above.   Session Time Budget  1 hour per participant. Apparatus  Analysis RQ1: How do visual and audio modalities influence the virtual ​illusion of connectedness​ on a screen? (subjective rating)  2x2 factorial ANOVA with a Bonferonni correction  RQ2: How is the usability of the StickyZoom interaction techniques?  - Which technique, out of the regular zooming technique and all 4 conditions of sticky zooming techniques have the highest ​performance ​ in a zooming task? (timing)  The tasks will be timed and then normalized according to how many circles that task required in order to reach the end. Then they will be averaged within the conditions and each condition will be compared between users to find any outliers and statistical significance. - Do any of the techniques, out of all 5, have any apparent ​learnability​ issues? (observation)  8 81 Familiarization and other observations’ notes will be analyzed for anything that sticks out as a problem during the initial learning phase.  - Which interaction techniques do users have a ​preference​ towards? (ranking) ranking analysis  - Which interaction techniques do users consider ​easy to use ​? (ranking) ranking analysis - Which techniques do users consider ​pleasant ​? (ranking) ranking analysis  - How do users define the ​experience ​ of all 5 techniques? (interview)  Analyze notes for any patterns  RQ3: How is the prototype's performance relative to what is needed to zoom?  - Is it interfering with the experiment?  Analyze notes during observations.  - Is the prototype's performance adequate to support fluid proximity based zooming? 
Analyze notes during observations.     Overview and Rationale for Study Approach  1. We decided to get rid of panning for the first part of the experience, that is, the participant will only be allowed to zoom in and out without panning when they are performing the task. The first reason is that we want to test the fluidity and efficiency of StickyZoom when users quickly zoom in and out, thus panning is not a crucial part of the process. Secondly, adding panning to the process might influence the illusion of connectedness, which is an important factor we want to test in the user study. Thus the panning in both StickyZoom and regular zoom will be excluded in this study. 2. In paper [6] the researcher calculated the overshoot frequency of each technique, but we will not be counting the overshoot. First, it is because precise control is not a focus of StickyZoom: Usually on the smartwatch and smartphone, users don't need very precise zooming, especially when zooming a picture--what StickyZoom is designed for. Also, from the study's perspective, if participants are asked to try their best to be precise, they might be too cautious on the zooming process to experience the enjoyable feeling of stickiness we want to test. 3. There will be a familiarization process at the beginning of the study, for the users to play with different techniques freely and get used to them. We added this part because most participants possibly don't have experience with using a smart watches, the unfamiliar feeling might cause them to fail the first several tasks. Thus adding the practice period can both help them feel smooth when perform the tasks and reduce the training effect. 9 82Also, this can help us examine the learnability of both types of zoom: if participants feel they can conduct zooming smoothly after playing with it for several minutes, it shows the interactions are learnable. Most importantly, we are comparing different conditions of StickyZoom, and the practice part will allow them to calibrate to different conditions, giving them a comprehensive understanding of each condition. 4. In each task, we will randomly assign numbers of trials it will take to reach the correct circle that will be the end to their task. The numbers assigned for each condition will be the same, but the orders will be randomized. This is to control the number of times participants zoom in and out are the same in every condition, so that we can better compare the consumed time and test efficiency. 5. As we noticed that it is hard to create a “stickiness” illusion, we changed the point of the experiment from looking at how to create the stickiness illusion, to looking at if people are feeling any physical sensation, if that is due to the zooming effects, and whether that is accentuating the zooming experience. 6. Users are not asked to rate the level of stickiness, as this would be a leading question. They are therefore asked to describe the experience and then rate the level of the given feeling.   Target Publications UIST 2017  Ethics  Ethics Form Number Video Recording? Declarations Amendments Required   Bibliography  1. Sears, A. and Shneiderman, B. High precision touchscreens: design strategies and comparisons with a mouse. International Journal of Man-Machine Studies 34, 4 (1991), 593. 2. Käser, D.P., Agrawala, M., and Pauly, M. Finger- Glass: Efficient multiscale interaction on multitouch screens. CHI '11 (2011), 1601. 3. Lank, E. and Phan, S. Focus+Context sketching on a pocket PC. CHI EA '04 (2004). 4. 
Negulescu, M., Ruiz, J., and Lank, E. ZoomPointing revisited: supporting mixed-resolution gesturing on interactive surfaces. ITS '11 (2011), 150. 5. pinch-to-zoom-plus 10 836. Xiang 'Anthony' Chen, Julia Schwarz, Chris Harrison, Jennifer Mankoff, and Scott E. Hudson. 2014. Air+touch: interweaving touch & in-air gestures. In Proceedings of the 27th annual ACM symposium on User interface software and technology (UIST '14). ACM, New York, NY, USA, 519-525. DOI=10.1145/2642918.2647392 http://doi.acm.org/10.1145/2642918.2647392 7. Ken Hinckley, Seongkook Heo, Michel Pahud, Christian Holz, Hrvoje Benko, Abigail Sellen, Richard Banks, Kenton O'Hara, Gavin Smyth, and William Buxton. 2016. Pre-Touch Sensing for Mobile Interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 2869-2881. DOI: http://dx.doi.org/10.1145/2858036.2858095 8. Jaehyun Han, Sunggeun Ahn, and Geehyuk Lee. 2015. Transture: Continuing a Touch Gesture on a Small Screen into the Air. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '15). ACM, New York, NY, USA, 1295-1300. DOI=http://dx.doi.org/10.1145/2702613.2732849 9. Sylvain Malacria, Eric Lecolinet, and Yves Guiard. 2010. Clutch-free panning and integrated pan-zoom control on touch-sensitive surfaces: the cyclostar approach. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). ACM, New York, NY, USA, 2615-2624. DOI=http://dx.doi.org/10.1145/1753326.1753724 10. Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay. 2011. Mid-air pan-and-zoom on wall-sized displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 177-186. DOI=http://dx.doi.org/10.1145/1978942.1978969 11. Marquardt N, Jota R, Greenberg S, et al. The continuous interaction space: interaction techniques unifying touch and gesture on and above a digital surface[C]//IFIP Conference on Human-Computer Interaction. Springer Berlin Heidelberg, 2011: 461-476. 12. Chris Harrison and Anind K. Dey. 2008. Lean and zoom: proximity-aware user interface and content magnification. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '08). ACM, New York, NY, USA, 507-510. DOI=http://dx.doi.org/10.1145/1357054.1357135 13. Alex Butler, Shahram Izadi, and Steve Hodges. 2008. SideSight: multi-"touch" interaction around small devices. In Proceedings of the 21st annual ACM symposium on User interface software and technology (UIST '08). ACM, New York, NY, USA, 201-204. DOI=​http://dx.doi.org/10.1145/1449715.1449746 14. SkinTrack 15. Johan Kildal. 2010. 3D-press: haptic illusion of compliance when pressing on a rigid surface. In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI '10). ACM, New York, NY, USA, , Article 21 , 8 pages. DOI: http://dx.doi.org/10.1145/1891903.1891931 16. Gentaz, E. and Hatwell, Y. 2008. Haptic Perceptual Illusions. In Human Haptic Perception, M. Grunwald Ed. Birkhäuser, 223-233.     11 84Appendix CParticipant ChecklistThis sheet was used to keep track of things to do before and after each participantduring the study.85Participant Checklist  TODO before each participant:  ❏ Connect computer to UBCVisitor, click on random webpage and click accept. ❏ Computer volume set to (full -7) ❏ Turn off phone’s autolock. 
❏ Run: node leapnode.js ❏ Connect watch to UBCVisitor, click on random webpage, and click accept. ❏ Turn off bluetooth on phone, put it on airplane mode. ❏ Connect phone to UBCVisitor, click on ramdom webpage, and click accept. ❏ Connect leap and test setup using visualizer.  ❏ Set up speakers and test audio level. (max vol - 6) ❏ Open procedure: https://docs.google.com/document/d/1ATczB5LJWWZ9I8lnzhIEwmTP8Kgg4MdRWD7uIhaTPaQ/edit#  ❏ Print out Participant Answer Sheet: https://docs.google.com/document/d/1itIXoTUvSVe4_LTQpVtbvARYzNXg-ruS9VjiuWgkIFA/edit  ❏ Prepare Coding Sheet ❏ Prepare Consent Form. ❏ Prepare $15. ❏ Prepare payment confirmation signature form. ❏ Prepare NDA to be signed. ❏ Set up voice recorder.  TODO after each participant: ❏ Turn off recorder and check. ❏ Upload recording on the drive. ❏ Check PID is on all forms.  ❏ Put Coding sheets on drive  ❏ NEVER delete any data. ❏ Put all data under their folder with their PID. Put unnecessary data in another folder under their folder. ❏ Upload data to Drive and then to SPIN server. ❏ Check all forms. 86Appendix DStudy Script and Coding SheetThis sheet is the detailed study script and coding sheet that was used for eachparticipant throughout the study.87 Participant Number: ____________________  Zoom Study Steps and Coding Sheet Introduction to Study  Hello! Our names are… We are in Prof. Karon MacLean’s SPIN lab Signing the consent form Signing the NDA. You are allowed to leave the experiment at any point you’d like. Let us know if you need a break. The study has two main sections and we will give a short break in between anyway.  In this experiment, we’re studying ways that you can use proximity (your height above the surface) to zoom the screen contents in and out.  In front of you is a smartwatch and a smartphone; we’ve created a prototype setup that allows us to measure your finger’s height above their screens, to mimic some new technology that we think will be coming to mobile devices before long.  To set up, I would like to know if you are right or left handed?  ❏ Set participant up next to the leap according to which hand is their dominant hand.  ❏ Check visualizer. ❏ Ask if comfortable.  ❏ Arrange wrist on a surface. ❏ Start voice recorder.  We are asking you to focus on the experience of the interaction technique rather than the polished quality of this early prototype - which I must warn you, does not always work perfectly. In fact, you may experience some glitches, and I’ll ask you to do your best to ignore these. If at any time you’re not sure whether what you see is a glitch or an intended behavior, please ask.   - Leap doesn’t work great. try not to move your hands around too much while interacting. If at any point you see a hand image on the screen it means you need to remove your hand and bring it back, show it [LIKE THIS] in front of it, then point your index finger to interact. - We ask that you do not put your fingers too close to the leap as it will not see your hands if they are too close, so moving upwards on the device only will prevent this problem. - If you see an image of a hand, it means that the leap has not recognized your 88hand. You can take your hand behind the leap, and put it back into the frame so as to calibrate it.  Part1 - Illusion Introduction CHECK THAT PARTICIPANT ID IS NOT GENERATED.  In the first part of the study, we will test your reactions to 4 conditions of zooming only on a smartphone. You will see and hear things relevant to the interaction. 
You should pretend that the speakers are from the device you are working on.   ❏ EXPLAIN HOW TO ZOOM ! starts with a touch and might end with a touch.  We will allow you to explore each condition for about 20 seconds. You are encouraged to think out loud; no constraints while you explore them.  After each condition, you will be asked what kind of object each of the interactions remind you of interacting with, and how strong that effect is.  Conditions are randomized on the settings page. Play the conditions and ask:  Condition 1: ________________  1. Can you describe what you feel as you move your finger up and down above the image?     2. Is there anything it reminds you of?    3. “When you scroll with a mouse and the cursor moves either slower or faster relative to your hand movement, it can make the mouse feel “heavier” or “lighter” - or when you use a scrollbar to scroll up and down and if it is slower to react when you hit the top and bottom, it feels magnetic or sticky. he have you ever felt that or can you imagine what that would be like?  This is what we call a “pseudo-haptic illusion” - when you think you feel something even you don’t 89really. It’s the kind of thing we’d like to know if you experience any effects here.”     4. On the sheet next to you, could you rate the strength of the illusion you’re reminded of?   5. Are there places in the space where your movement seems easier or harder (due to visual and auditory cues -- might need to ask about speed or control--), or does it always feel about the same?   [if yes]: Where in the space is it more difficult/easy to move through?    6. Was there anything in the movement of the images or the audio feedback that warned you that the image might drop soon, BEFORE it happened?     7. Can you show me, very roughly, a point [height above the surface]  in the range where it might break/fall/drop? Just zoom into the image until you get there and tell me “NOW” while keeping your finger there and I will tell the program to note that.     Condition 2: ________________  1. Can you describe what you feel as you move your finger up and down above the image?      2. Is there anything it reminds you of?    3. Any illusion effects that remind you of something?    904. On the sheet next to you, could you rate the strength of the illusion you’re reminded of?   5. Are there places in the space where your movement seems easier or harder​ (due to visual and auditory cues -- might need to ask about speed or control--),​ or does it always feel about the same?   [if yes]: Where in the space is it more difficult/easy to move through?     6. Was there anything in the movement of the images or the audio feedback that warned you that the image might drop soon, BEFORE it happened?    7. Can you show me, very roughly, a point [height above the surface]  in the range where it might break/fall/drop? Just zoom into the image until you get there and tell me “NOW” while keeping your finger there and I will tell the program to note that.     Condition 3: ________________  1. Can you describe what you feel as you move your finger up and down above the image?     2. Is there anything it reminds you of?    3. Any illusion effects that remind you of something?    4. On the sheet next to you, could you rate the strength of the illusion you’re reminded of?   5. Are there places in the space where your movement seems easier or harder (due to visual and auditory cues -- might need to ask about speed or control--), or does it always feel about the same?   
[if yes]: Where in the space is it more difficult/easy to move through? 91   6. Was there anything in the movement of the images or the audio feedback that warned you that the image might drop soon, BEFORE it happened?    7. Can you show me, very roughly, a point [height above the surface]  in the range where it might break/fall/drop? Just zoom into the image until you get there and tell me “NOW” while keeping your finger there and I will tell the program to note that.     Condition 4: ________________  1. Can you describe what you feel as you move your finger up and down above the image?     2. Is there anything it reminds you of?    3. Any illusion effects that remind you of something?    4. On the sheet next to you, could you rate the strength of the illusion you’re reminded of?   5. Are there places in the space where your movement seems easier or harder (due to visual and auditory cues -- might need to ask about speed or control--), or does it always feel about the same?   [if yes]: Where in the space is it more difficult/easy to move through?    6. Was there anything in the movement of the images or the audio feedback that warned you that the image might drop soon, BEFORE it happened?    927. Can you show me, very roughly, a point [height above the surface]  in the range where it might break/fall/drop? Just zoom into the image until you get there and tell me “NOW” while keeping your finger there and I will tell the program to note that.    --- REVISION-- Allow users to revise the above ratings.  Part2: Usability and User Experience Introduction  ❏ CHECK THAT PARTICIPANT ID IS GENERATED.  In the second, and final, part of the experiment we will ask you to perform a task on both of the devices (smartwatch, and smartphone). We are looking for performance here so we ask that once you start a task, you finish it as quickly as you would, in a natural manner (no need to rush). The timing starts once you click to start the zooming so you do not have to finish all trials, and can stop between the trials.   Your task: A square will appear on the screen as well as a larger square frame. Your task is to zoom into the small square such that it fits into the frame’s borders. As long as it is somewhere in the range of the red border, it will turn yellow. You need to keep it in the yellow range until it turns green which is when it will restart to the beginning (frame turns red again). If you pass the border, you can always go back down.   There will be regular pinch-to-zoom, and proximity zooming versions of this task. You are allowed to pinch outside of the box, because it is so small.  One thing to note is that the watch goes back when trying to ptz sometimes and so try to make sure your fingers are staying on the image and you can just scroll it back..  Let’s do one example with the watch.  Show both a prox zooming, and a pinch-to-zoom example.  ❏ [Delete trials] TRIALS ARE DELETED  You will be prompted to do this trial a couple of times for 4 different conditions of zooming. After you’ve done all 4, you will be asked for your preferences. 93 Ok, we will now start the experiment  [start experiment]  (Check as it is completed) Completion Pinch-to-Zoom Linear Elastic w. Audio Elastic W/out Audio Phone     Watch       ❏ [DONE] -> CHECK ALL DATA LOGS ARE COMPLETE  Interview After PHONE:  For these questions, we would like you to pretend like there are no glitches and imagine the proximity technology has reached a point where it works really well.    1. 
USEFULNESS: Between the pinch-to-zoom and your favorite proximity zooming technique, which of the two do you find more useful? WHY?   2. USABILITY: One the sheet next to you, could you rank: among the 3 proximity zooming techniques, which one you’d be happier to use in the long term for this device? WHY?    3. In which cases do you think they might be useful for? It can be proximity vs pinch-to-zoom or more detailed proximity comparisons if you have any in mind; you don’t have to give an answer to each condition.   Maps vs text? Moment in day/life, certain context?   Interview After WATCH: For these questions, we would like you to pretend like there are no glitches and imagine the proximity technology has reached a point where it works really well.  1. USEFULNESS: For the watch, between the pinch-to-zoom and your favorite proximity 94zooming technique, which of the two do you find more useful? WHY?    2. USABILITY: One the sheet next to you, could you rank: among the 3 proximity zooming techniques, which one you’d be happier to use in the long term for this device? WHY? What do you think of them? (Can try them again)    3. In which cases do you think they might be useful for? It can be proximity vs pinch-to-zoom or more detailed proximity comparisons if you have any in mind; you don’t have to give an answer to each condition.   Maps vs text? Moment in day/life, certain context?   4. (If they haven’t said anything about elasticity) Ask about elasticity….   ----  Auditory:  1. Overall, what do you think of auditory feedback? Did you find it helpful / annoying? How would you change the auditory feedback if you could?   ---   Exit - Any questions or comments for us? - Thank you! - Signing payment form  - Payment 95Appendix ECall For ParticipationThe appendix is the call for participation we used in our study.9697Appendix FConsent FormThis appendix is the consent form we used in our study.9899 Version 2.0 / March 1, 2017 / Page 2 of 2  STUDY RESULTS: We plan to publish the analyzed, anonymized results of this study in peer-reviewed articles where we hope they will positively impact the development of this class of technology in both academia and industry. CONFIDENTIALITY: You will not be identified by name in any study reports. Any identifiable data gathered from this experiment will be stored in a secure Computer Science account accessible only to the experimenters. Video excerpts will be edited to remove identifying information (including but not limited to obscuring face and/or voice) and will not be used in publication unless permission is explicitly given below. VIDEO RELEASE: You may be asked for video to be recorded during this session. You are free to say no without affecting your reimbursement. I agree to have VIDEO recorded:  ☐  Yes                ☐  No I agree to have ANONYMIZED VIDEO EXCERPTS presented in publications:     ☐  Yes                ☐  No  You understand that the experimenter will ANSWER ANY QUESTIONS you have about the instructions or the procedures of this study. After participating, the experimenter will answer any other questions you have about this study. Your participation in this study is entirely voluntary and you may refuse to participate or withdraw from the study at any time without jeopardy. Your signature below indicates that you have received a copy of this consent form for your own records, and consent to participate in this study. Any questions about the study can be directed to Dilan Ustek, ustekd@cs.ubc.ca. 
If you have any concerns or complaints about your rights as a research participant and/or your experiences while participating in this study, contact the Research Participant Complaint Line in the UBC Office of Research Ethics at 604-822-8598, or if long distance, e-mail RSIL@ors.ubc.ca or call toll free 1-877-822-8598.

You hereby CONSENT to participate and acknowledge RECEIPT of a copy of the consent form:

PRINTED NAME ________________________________  DATE ____________________________

SIGNATURE ____________________________________

Appendix G
Non-Disclosure Agreement

This is the Non-Disclosure Agreement form signed by all participants at the beginning of the study.

Appendix H
Participant Rating Sheet

This appendix is the rating sheet used by participants in our study.

Participant ID: ___________________

On a scale of 0 to 10, with 0 being no effect and 10 being a strong effect, how strong was each condition?

Condition 1:   0   1   2   3   4   5   6   7   8   9   10
               no effect        moderate effect        strong effect

Condition 2:   0   1   2   3   4   5   6   7   8   9   10
               no effect        moderate effect        strong effect

Condition 3:   0   1   2   3   4   5   6   7   8   9   10
               no effect        moderate effect        strong effect

Condition 4:   0   1   2   3   4   5   6   7   8   9   10
               no effect        moderate effect        strong effect

Please rate the following by labeling each cell from 1 to 4, where 1 is the most and 4 is the least.

I'd be happy to use for the long term:

PHONE:   Pinch-to-Zoom ___   Proximity Zoom 1 ___   Proximity Zoom 2 ___   Proximity Zoom 3 ___

WATCH:   Pinch-to-Zoom ___   Proximity Zoom 1 ___   Proximity Zoom 2 ___   Proximity Zoom 3 ___

Appendix I
Thematic Analysis Coding Sheet

This is the coding sheet used to analyze qualitative data from the study discussed in Chapter 6.
Columns in the original sheet: Participant #, then a (descriptor, 0-10 rating) pair for each of Elasticity, Connectedness, and Sticky, followed by per-participant Elastic / Conn. / Sticky summary scores. Categories not listed for a participant below were blank and coded 0.

Condition: Ball
P1: n/a (0) for all three categories
P2: Elasticity "Bounciness" (9)
P3: Elasticity "Harder to make it move at the top" (4)
P4: Elasticity "String, yo-yo" (6); Connectedness "String, yo-yo" (6)
P5: Elasticity "Stretchy string, tension, force, spring, yo-yo" (10); Connectedness "Stretchy string, heavier, tension, force, spring, yo-yo" (10)
P6: Connectedness "Sticky, glue, not separated" (8); Sticky "Sticky, glue, not separated" (8)
P7: Connectedness "String" (6)
P8: Elasticity "Elastic, sticky" (3); Sticky "Elastic, sticky" (3)
P9: Elasticity "String illusion is stronger, feels light for a ball" (7); Connectedness (same) (7)
P10: Elasticity "String, elastic" (7); Connectedness "String, elastic" (7)
P11: n/a (0) for all three categories
P12: Elasticity "Heavier, something falling from the sky (gravity), explode" (6)
Condition averages: Elasticity 4.33, Connectedness 3.67, Sticky 0.92 (elasticity ratings sum 52)

Condition: Ball w/ Audio
P1: Elasticity "Rubber, slimy, stretchy" (8)
P2: Elasticity "Feels like I'm bouncing a ball" (10)
P3: Elasticity "Harder to make it move at the top" (5)
P4: Elasticity "Balloon that will explode" (8)
P5: Elasticity "Spring, stretchy" (9)
P6: Sticky "Sticky" (5)
P7: Elasticity "Rubber-bandy string" (5); Connectedness "Rubber-bandy string" (5)
P8: Elasticity "Elastic" (8)
P9: Elasticity "String is kind of stretchy (sometimes less stretchy), feels light" (5); Connectedness (same) (5)
P10: Elasticity "Elastic" (8)
P11: n/a (0) for all three categories
P12: Elasticity "Balloon" (8)
Condition averages: Elasticity 6.17, Connectedness 0.83, Sticky 0.42 (elasticity ratings sum 74)

Condition: Image
P1: n/a (0) for all three categories
P2: Elasticity "Effort" (9)
P3: n/a (0) for all three categories
P4: n/a (0) for all three categories
P5: Elasticity "Spring" (9)
P6: Connectedness "Sticky, connected" (8); Sticky "Sticky, connected" (8)
P7: Connectedness "String" (6)
P8: Elasticity "Stiffer" (5)
P9: Elasticity "Audio makes the string more stretchy" (8); Connectedness (same) (8)
P10: Elasticity "Elastic, tighter as you go up" (6)
P11: n/a (0) for all three categories
P12: n/a (0) for all three categories
Condition averages: Elasticity 3.08, Connectedness 1.83, Sticky 0.67 (elasticity ratings sum 37)

Condition: Image w/ Audio
P1: Elasticity "Effortful as it goes up" (7)
P2: Elasticity "Stretching" (8)
P3: n/a (0) for all three categories
P4: Elasticity "Spring, hairtie, hairband, stretching" (9)
P5: Connectedness "String" (7)
P6: Connectedness "Sounds make it less connected" (6)
P7: Elasticity "Chewing gum, rubber band" (5); Sticky "Chewing gum, rubber band" (5)
P8: Elasticity "Balloon, elastic stretching" (8)
P9: Elasticity "Sounds like stretching, spring, stiff, string, bar/pole" (8); Connectedness (same) (8)
P10: Elasticity "Elastic" (8)
P11: n/a (0) for all three categories
P12: Elasticity "Balloon" (7)
Condition averages: Elasticity 5.00, Connectedness 1.75, Sticky 0.42 (elasticity ratings sum 60)

Per-participant means of non-zero ratings across the four conditions (Elastic / Conn. / Sticky):
P1 7.5 / 0 / 0; P2 9 / 0 / 0; P3 4.5 / 0 / 0; P4 7.7 / 6 / 0; P5 9.3 / 8.5 / 0; P6 0 / 7.3 / 7; P7 5 / 5.7 / 5; P8 6 / 0 / 3; P9 7 / 7 / 0; P10 7.3 / 7 / 0; P11 0 / 0 / 0; P12 7 / 0 / 0 (Elastic column sum 70.3)

Theme counts: 10 elastic, 6 conn., 3 sticky.
Connectedness descriptors (8): yo-yo, rubber-bandy string, stretchy string, connected, string, not separated, bar, pole.
Elastic descriptors (20): hair band, slimy, elastic, rubber, rubber band, hair tie, stretchy string, balloon, chewing gum, spring, stretchy, bouncy, harder to move at the top then drop, stiffness, tension, tightness, yo-yo, spring, force, gravity.
Sticky descriptors (3): chewing gum, glue, sticky.

Note: one problem with the overall elasticity average (4.7) computed across all conditions is that it pools in many 0s from cells where participants reported other illusions rather than elasticity; because all four conditions are pooled together (and the image-without-audio condition is weak), the resulting number comes out small.
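The note above is about the difference between a mean pooled over every cell (zeros included) and a mean taken only over cells where the illusion was actually reported. A minimal sketch of the two calculations; the ratings below are hypothetical and not taken from the coding sheet:

```python
# Illustrative only: how zero-coded "no mention" cells deflate a pooled mean.
ratings = [8, 0, 9, 0, 0, 7, 6, 0]   # 0 = participant reported a different illusion

pooled_mean = sum(ratings) / len(ratings)                      # 3.75 -- pulled down by the 0s
nonzero = [r for r in ratings if r > 0]
mention_mean = sum(nonzero) / len(nonzero) if nonzero else 0   # 7.5 -- strength when the illusion was felt

print(pooled_mean, mention_mean)
```

The per-participant summary columns in the sheet correspond to the second calculation (means over non-zero ratings), whereas the overall figure flagged in the note corresponds to the first.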
