UBC Theses and Dissertations


Supporting focused work on window-based desktops Pilzer, Jan 2019


Full Text


Supporting Focused Work on Window-Based Desktops

by

Jan Pilzer

B.Sc., DHBW Mannheim, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Master of Science
in
THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Computer Science)

The University of British Columbia
(Vancouver)

October 2019

© Jan Pilzer, 2019

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the thesis entitled:

Supporting Focused Work on Window-Based Desktops

submitted by Jan Pilzer in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.

Examining Committee:

Reid Holmes, Computer Science (Co-Supervisor)
Thomas Fritz, Computer Science (Co-Supervisor)
Gail Murphy, Computer Science (Supervisory Committee Member)

Abstract

When working with a computer, information workers continuously switch tasks and applications to complete their work. Given the high fragmentation and complexity of their work, staying focused on the relevant pieces of information can become quite challenging in today's window-based environments, especially with the ever-increasing size of display technology. To support workers in staying focused, we conducted a formative study with 18 professional information workers in which we examined their computer-based and eye gaze interaction with the window environment and devised a relevance model of open windows. Based on the results, we developed a prototype to dim irrelevant windows and reduce distractions, and evaluated it in a user study. Our results show that participants keep an average of 12 windows open at all times, switch windows every 17 seconds, and that our prototype was able to predict and highlight relevant open windows with high accuracy and was considered helpful by the users.

Lay Summary

When working with a computer, information workers continuously switch tasks and applications to complete their work.
Given the high fragmentation and complexity of their work, staying focused on the relevant pieces of information can become quite challenging in today's computer environments, especially with the ever-increasing size of screens. To support workers in staying focused, we conducted a formative study with 18 professional software developers in which we examined their computer and eye gaze interaction with their computer windows and devised a relevance model of open windows. Based on the results, we developed a prototype to dim irrelevant windows and reduce distractions, and evaluated it in a user study. Our results show that participants keep an average of 12 windows open at all times, switch windows every 17 seconds, and that our prototype was able to determine and highlight relevant open windows with high accuracy and was considered helpful by the users.

Preface

All of the work presented henceforth was conducted in the Software Practices Laboratory at the University of British Columbia and the Software Evolution & Architecture Lab of the University of Zurich. All projects and associated methods were approved by the University of British Columbia's Research Ethics Board [certificates #H17-02682 and #H19-01449].

A version of this material has been submitted to be published [Jan Pilzer, Raphael Rosenast, André N. Meyer, Thomas Fritz, and Elaine M. Huang. Supporting Focused Work on Window-Based Desktops].

The study and some initial analysis in Chapter 3 were conducted by Raphael Rosenast in Zurich and included in his Master's thesis. I received his anonymized data and performed additional detailed analysis tailored to my research goals. I was the lead investigator for the projects located in Chapters 4 and 5, responsible for all major areas of concept formation, data collection and analysis, as well as manuscript composition and participant recruitment.
Thomas Fritz was the supervisory author on this project and was involved throughout the project in concept formation and manuscript composition. André N. Meyer and Elaine M. Huang contributed to manuscript edits.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Figures
1 Introduction
2 Related Work
  2.1 Window Interaction Studies
  2.2 Detecting Relevance
  2.3 Supporting Focused Work
3 Study 1: Developers' Window Interactions
  3.1 Monitoring Study
    3.1.1 Participants
    3.1.2 Study Method
    3.1.3 Monitoring Application
    3.1.4 Collected Data
  3.2 Results
    3.2.1 Open Windows Behavior
    3.2.2 Window Switching Behavior
    3.2.3 Multi-Monitor Usage
    3.2.4 Usage of Screen Real Estate
    3.2.5 Visual Attention and Focus on Windows
    3.2.6 Behavior by Activity
  3.3 Discussion
4 Predicting Relevant Windows
  4.1 Model
    4.1.1 Temporal, Recency
    4.1.2 Temporal, Duration
    4.1.3 Temporal, Frequency
    4.1.4 Semantic, Window Title
  4.2 Empirical Analysis based on Study 1
5 Study 2: Highlighting Relevant Windows to Support Focus
  5.1 WindowDimmer Approach
  5.2 Intervention Study
    5.2.1 Procedure
    5.2.2 Participants
    5.2.3 Collected Data
  5.3 Results
    5.3.1 Window Interaction Behavior
    5.3.2 Predicting Relevant Windows
    5.3.3 Evaluation of WindowDimmer
  5.4 Participant Feedback
6 Threats to Validity
  6.1 Construct Validity
  6.2 Internal Validity
  6.3 External Validity
7 Discussion
8 Conclusion
Bibliography

List of Figures

Figure 3.1: Average number of open and visible windows by participant.
Figure 3.2: Weighted mean of open windows by participant and hour during a typical work day where we could gather enough data from all participants.
Figure 3.3: Time between window switches. The bump at 300 seconds is caused by our 5-minute idle timeout. The last bar at 600 represents all longer times between window switches.
Figure 4.1: Probability of predicting the correct next window within the top X most relevant windows.
Figure 5.1: WindowDimmer dims all less relevant windows and the desktop to emphasize the top 3 most relevant windows.
Figure 5.2: WindowDimmer settings that allow enabling, switching the mode for the study, and setting the number of windows that don't get dimmed.
Figure 5.3: WindowDimmer pop-up asking participants to select the relevant windows out of a list of all open windows.
Figure 5.4: Number of windows reported (by participants) and predicted (by our model) relevant, averaged over all reports. The three top-scoring windows in our model are considered relevant.
Figure 5.5: Distribution of the duration a window is active before and after the dimming is activated, by type of application. The distribution is binned into 3 buckets (0-1s, 1-5s, 5+s) to highlight the changes.
Figure 5.6: Percentage of desktop area dimmed by desktop size. Both are calculated across all screens.

Chapter 1: Introduction

Multi-tasking and fragmentation of work are known challenges of modern knowledge work [9, 15, 32]. While multi-tasking is necessary and beneficial in enabling information workers to make progress on more than one task, it can come at a significant cost: reduced quality and more errors [43], more time overall [42, 43], increased stress [2, 26], and lower productivity [28, 31].
Although these difficulties are to some extent inherent and inevitable side effects of engaging in multiple work activities simultaneously, we believe that these effects can also be decreased if the information worker can focus only on the most relevant tasks at any given time, and is subjected to limited distraction from less relevant tasks.

Modern window-based computer desktops provide support for multi-tasking by allowing information workers access and visibility to many applications and digital artifacts simultaneously. However, with increasing screen size, multi-monitor support and the ability to run several applications and open many windows at the same time, computer desktops can become more cluttered, with many windows open and visible, and thereby provide ample opportunity for distractions and switches to less relevant tasks. As studies have shown, these many open windows and possible distractions can derail focus [1]. Additionally, the more virtually cluttered the computer is by having more applications, windows, or tabs open, the higher the cognitive costs are [30] and the more time workers need to spend to find what they are looking for [20, 37]. Switching between windows and tasks further diminishes concentration by leaving an "attention residue", which makes it more difficult to resume a task once a worker is distracted [23].

In this research we aim to identify ways in which the desktop environment can be leveraged to support strategic multitasking, by providing lightweight guidance that helps information workers maintain focus on relevant tasks, and minimize multi-tasking activities that are distracting and unproductive.
To this end, we designed and developed the WindowDimmer application, which predicts the most relevant windows currently in use and dims the other ones to reduce their likelihood of drawing the information worker's attention.

To test the value of this approach, it was necessary to first establish an understanding of information workers' practices in interacting with conventional window-based desktops in the context of knowledge work, and specifically to understand their patterns of opening, closing, and switching between windows and tasks. We conducted a formative study to monitor computer-based and eye gaze interaction with the window environment of information workers. In addition to yielding surprising findings about the frequency and brevity of task switches, such as the average time of 15.7 seconds between two window switches, this study also provided a baseline against which we could compare participants' task and window switching practices using WindowDimmer. Our subsequent study of WindowDimmer in use indicated a substantial reduction in brief window switches and in visits to windows not relevant to the current task. Additionally, participant feedback on the application was generally positive, and indicated that WindowDimmer could help reduce distraction and improve focus.

In our work, we focused on one community of information workers: software developers.
We studied software developers because of their extensive use of computers at work, their openness towards improving their work and productivity [24], and their overall comparable work, while still working on a broad variety of activities [32]. However, we believe that although some aspects of the practices we present in this work may be different for other areas of knowledge work, the overall approaches for predicting window relevance and providing desktop-based support for focus may be valuable for supporting information workers in other professions as well.

This thesis makes these primary contributions:

• A monitoring study capturing information on window interaction practices of professional software developers.
• The WindowDimmer application, capable of predicting the relevance of an open window and applying a dimming effect to the remaining desktop.
• An evaluation of the dimming approach with a second group of professional software developers and software engineering students.

Chapter 2: Related Work

Related work can be roughly categorized into the following areas: studies on understanding information workers' interactions with their window environment, approaches to detect currently relevant windows, and approaches to support task-focused work.

2.1 Window Interaction Studies

Modern window-based desktops allow information workers to access and see many applications and windows simultaneously, with many applications capable of opening multiple windows. With increasing screen size, multi-monitor support, and the ability to use several applications at the same time, they also provide sources for distractions, such as multi-tasking between the main work task and less relevant tasks, thus reducing focus [1]. A high number of open and visible windows creates distracting visual clutter, but windows can introduce a cost even when not visible. Many open windows make it harder to locate the ones needed for a task, increasing the cost of window switches [20].
Each window switch can potentially act as a trigger that lures the worker into a task switch, such as a new unread email, an interesting Twitter conversation, or a not yet finished task. Work by McMains and Kastner [30] and Niemelä and Saariluoma [37] has shown that having more windows or tabs open on the computer at the same time increases cognitive costs and requires more time for workers to find the relevant window. Little is known yet about information workers' practices in interacting with conventional window-based desktops at work, such as opening, closing and switching between windows and tasks. Existing studies vary in the type of information that was tracked. Mostly, studies either focused on active observations [47] or on specific computer interaction events based on keyboard and mouse interaction with specific applications [13, 16]. The interactions of software developers in particular have been studied within their IDEs, for the Pharo IDE [34] and Eclipse [36]. Our work is not restricted to an individual application, but considers interactions with all programs and windows on a developer's computer.

In 2004, Hutchings et al. conducted a more generic monitoring study across all windows to investigate the effect of larger screen real estate on window switching behavior [17]. They found that users with smaller screens had a median of 4 visible windows, while those with larger or more screens had a median of 6 visible windows. With ever increasing screen sizes and resolutions, we expect an even larger number of open and visible windows. Our studies monitor similar features and provide an update to these numbers.

Further studies that observed information workers used the interaction data to determine tasks [32] or higher-level working spheres [15, 25]. These studies were conducted over multiple days and reported a very fragmented working style with many task switches and interruptions.
For instance, González and Mark [15] defined units of work as working spheres between which information workers switch frequently. In their study with 24 information workers, they found that work is highly fragmented and that switches between working spheres occur on average every 12 minutes.

Our first study uses eye-tracking to investigate visual attention on the desktop environment. Eye-tracking has traditionally been used to study reading and comprehension. Early experiments studied the differences in code comprehension between novice and expert developers [5, 8]. Other research used eye-tracking technology to investigate the comprehension of specific software artifacts, namely class diagrams [55], design patterns [45], and identifier styles [46]. In more general contexts, eye-tracking was used to measure task difficulty [4, 14], mental workload [18], and cognitive load [22]. We aim to leverage eye-tracking technology to gather task-independent insights into how software developers visually interact with the windows on their desktop.

2.2 Detecting Relevance

There is a range of approaches that have tried to detect the relevance of windows [39], artifacts [21], and groups of applications [40, 48]. Applications for using the relevance of related resources include task resumption, task switching, and self-monitoring. Most of these approaches build on two types of features: temporal and semantic. Temporal features relate to the order in which resources are accessed. For instance, Bernstein et al. used switches between two windows to calculate how closely related they are [6]. They used their WindowRank to arrange windows for easier task-related window switching.

Semantic features are usually shared words in the title and content, for example used to group open applications and documents into tasks and provide users easier access to task-related resources [11]. Oliver et al.
used both types of features to analyse window switches [38] and to provide an alternative window switching application [39]. In SWISH [38], they calculated temporal and semantic features to group windows into tasks and achieved an accuracy of 70% on pre-recorded data. RelAltTab [39] also uses both temporal and semantic features to measure window relatedness and to reorder or highlight the windows in the window switching application. Their user study showed that users switched to related windows in over 80% of the instances and that reordering windows had a negative effect.

2.3 Supporting Focused Work

The high fragmentation of a software developer's typical workday is well reported [15, 25, 44]. On average, there are switches between different activities every few minutes, even when not all of them are task switches [32]. Times with fewer task switches and higher engagement with tasks are reliably rated as more productive [27, 31]. Successful approaches for increasing perceived productivity have, for instance, reduced distractions from non-work-related websites completely by blocking them [29] or by reducing their availability [52], which gave participants a higher feeling of being in control.

Generally, multiple windows are required to complete a task. By reducing the time and effort to find the right window when switching, users can keep their focus and resume tasks faster. There are several approaches to assist users with window switching [19, 50, 51, 54], but their work focuses solely on providing a better overview and speeding up window switching. Others, more closely related to our work, have tried to reduce the overload that people experience by grouping tasks and documents [11], grouping windows [48], improving access to occluded window content [53], or reducing the visibility of secondary screens [10]. The latter approach, by Dostal et al., tested several techniques for reducing distraction from a separate, non-focused screen. Dostal et al.
examined the proposed techniques in a qualitative study with one participant, who perceived a technique that dims the whole screen but visualises display changes at the pixel level as most useful [10]. Different from our work, they do not consider individual windows but only dim the whole screen.

Specifically for software developers, approaches have been proposed to reduce the "window plague" in the window-based Pharo IDE, using temporal features to highlight the most important windows and automatically close the least important ones [35, 41]. Both of these approaches are restricted to the IDE (a single application) and were only validated based on recorded data, but never tested with users.

Chapter 3: Study 1: Developers' Window Interactions

To support information workers' focus at work by emphasizing relevant windows, it was first necessary to establish an understanding of their practices in interacting with conventional window-based computer desktops. Specifically, we were interested in better understanding developers' patterns of opening, closing, arranging and switching between windows on their monitor(s). In addition, we collected this data to use it as a baseline for WindowDimmer's model in Study 2.

3.1 Monitoring Study

In this first formative study, Raphael Rosenast, an MSc student at the University of Zurich, monitored 18 professional software developers for an average of 17.9 days (±12.9) per participant in their usual work environment. For the period of the study, he ran a monitoring application on participants' computers in the background, which collected window interaction data, user input data and eye gazes.

3.1.1 Participants

Raphael recruited 18 professional software developers from three companies of varying size in the software and computer hardware industry through personal contacts.
He focused his study on professional software developers—as one community of information workers—to make sure that participants largely use the computer for their work and have similar work habits [32, 33]. Also, he limited his study to users of Microsoft Windows as their main operating system due to the compatibility with the eye-tracker and his monitoring application. Of his participants, 17 were male and 1 was female. They had an average of 17.5 years of software development experience, ranging from 2 to 35, and described their main responsibility as development with accompanying tasks of project management, system engineering, and testing. They all resided in Switzerland.

3.1.2 Study Method

Raphael started the study by handing out consent forms and explaining the purpose and process of the study to participants, as well as informing them about the monitoring application and the data it collects. He also informed participants that their participation was completely voluntary and that they were allowed to withdraw at any point in time. He then provided instructions and supported them with installing the monitoring application on their computer and with installing and correctly positioning the eye-tracker in front of their primary monitor. He chose the Tobii 4C¹ as the eye-tracker for his study due to its portability and affordability. Note that since it was technically impossible to hook several Tobii 4Cs to a participant's computer, he could only capture eye gazes on the participants' primary monitor. Once the setup was complete, he asked participants to continue their regular work as usual while the monitoring application and eye-tracker collected data nonintrusively in the background.
At the end of the study, he revisited each participant, collected the monitoring data from the participant's device, uninstalled the software and the eye-tracker, and conducted a short follow-up interview to ask for feedback on the study.

3.1.3 Monitoring Application

To collect computer interaction and eye-tracking data, Raphael developed his own monitoring application that runs in the background, based on the PersonalAnalytics project². For compatibility reasons with the eye-tracker, he developed his application for the Windows 10 operating system. His application collects data on all mouse movements and keyboard events without tracking the specific keys being pressed, for privacy reasons. Mouse movements are recorded as a single event with the time and the end position of the cursor. In addition, the application logs all window events, including focus, create, destroy, move, and resize events, together with the size and location of the window, the title of the window and its state (active, minimized or maximized), and information on all other open windows and their position. Finally, the application also collects information on all connected screens and the dimension of each screen. In terms of eye-tracking data, his application captures eye fixations—the location on the screen the user visually pays attention to—including fixation position and duration as provided by the Tobii eye-tracker.

¹ https://gaming.tobii.com/product/tobii-eye-tracker-4c

3.1.4 Collected Data

Raphael collected a total of 322 days of data for this study, ranging from one to four weeks worth of data per individual participant.
All participants reported the days they were monitored on as regular work days and did not report any distraction or changes in their work due to his monitoring application and the eye-tracker. Overall, he collected a total of 9,522,956 window interactions and 63,292,622 eye fixations from all 18 participants.

For each window interaction, we calculated how many pixels of each open window were visible based on the order in which windows appear on the monitor. We further determined how long windows were open or active based on the duration between window events. From the mouse movement data, we derived periods of time when participants were not active on their computer and excluded these from our analysis. In particular, we chose a 5-minute threshold and considered every period without mouse movement that was longer as not active and excluded it.

To identify the windows a participant looked at, we mapped each captured eye fixation to a window using our processed window interaction data, which includes the visibility of all windows, their location and size. Overall, we were able to distinctly map 86.6% of the recorded fixation points to a window. In cases when a participant looked at the taskbar, a monitor with no windows, or a newly opened window that was not yet registered with the windowing system (which can happen in the first second of opening a new application), we were not able to map the eye fixation to a window.

² https://github.com/sealuzh/PersonalAnalytics

By comparing and validating the data of all participants, we noticed one outlier, participant P6, who had more than twice the number of open windows compared to all other participants. We therefore decided to exclude the data of P6 from the further analysis.

3.2 Results

This section presents the results of our quantitative analysis of the logged computer interaction and eye fixation data. While Raphael provided an initial analysis of the collected data, all the results reported in this thesis were newly calculated from the data.
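Two of the processing steps described in Section 3.1.4—excluding idle periods and mapping fixations to windows—can be sketched as follows. This is a minimal illustration, not the study's actual pipeline; the record formats (`WindowState`, timestamp lists, screen-coordinate rectangles) are hypothetical:

```python
from dataclasses import dataclass

IDLE_THRESHOLD_S = 5 * 60  # gaps without mouse movement longer than this are excluded

@dataclass
class WindowState:
    title: str
    rect: tuple   # (left, top, right, bottom) in screen coordinates
    z_order: int  # 0 = topmost window

def active_periods(mouse_event_times):
    """Split a sorted list of mouse-event timestamps (seconds) into active
    periods, dropping any gap longer than the idle threshold."""
    periods = []
    start = prev = mouse_event_times[0]
    for t in mouse_event_times[1:]:
        if t - prev > IDLE_THRESHOLD_S:
            periods.append((start, prev))
            start = t
        prev = t
    periods.append((start, prev))
    return periods

def map_fixation(fixation_xy, open_windows):
    """Map a fixation point to the topmost open window containing it, or
    None if it falls on the taskbar or an empty monitor area."""
    x, y = fixation_xy
    for w in sorted(open_windows, key=lambda w: w.z_order):
        left, top, right, bottom = w.rect
        if left <= x < right and top <= y < bottom:
            return w
    return None
```

Fixations that fall outside every window rectangle are the unmappable cases mentioned above (taskbar, empty monitor, not-yet-registered window).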
We start with an analysis based on developers' computer interaction (Sections 3.2.1 to 3.2.4) and then present the results from an analysis of the eye-tracking data (Section 3.2.5).

3.2.1 Open Windows Behavior

To better understand the potential for distracting developers when switching windows, we analyzed how many windows and applications developers have open at any given time. We calculated the number of open applications and windows, weighted by the duration they were open. Figure 3.1 presents the number of open windows across participants.

Figure 3.1: Average number of open and visible windows by participant.

Overall, developers had an average of 12.1 (±6.6) windows open at all times, from an average of 9.5 (±3.0) applications. Also considering their multi-monitor setup, we observed that single-monitor users tend to keep an average of 6.9 (1 participant only), dual-monitor users keep 12.6 (±4.0), and triple-monitor users keep 14.9 (±11.4) open windows at the same time, which is comparable to previous work [40]. Surprisingly, the overall screen size did not significantly impact the number of open windows (Pearson correlation coefficient of 0.21 with p = 0.42).

We found that the number of open windows varies over the course of a day. In particular, developers opened more windows than they closed, leading to a growing number of open windows over the course of the workday, from 9.9 (±5.5) in the morning to 14.4 (±7.3) in the evening.
While this suggests that users might have turned their computers off in the evening, we do not have this information. Despite the variation in the number of windows each developer has open, this increase over the course of a day is consistent across all participants, as illustrated in Figure 3.2, and similar to the increase in the number of windows open inside the Integrated Development Environment (IDE) that Roethlisberger et al. found [41]. Developers generally do not close windows frequently, but rather leave them open in the background, although some developers tend to reorganize and clean up their desktop a bit more after returning from lunch break or before leaving work.

Figure 3.2: Weighted mean of open windows by participant and hour during a typical work day where we could gather enough data from all participants.

We also found that developers very rarely move or resize windows, compared to the frequency of other window interactions, such as switching, opening and closing. Less than 3% of all window interaction events captured over the course of the study were changes to the size or location of a window.

3.2.2 Window Switching Behavior

Based on the computer interaction data, on average, each window was open for a total of 79.2 (±60.1) minutes, yet this value varied a lot, ranging from 19 minutes to 4 hours across participants. Nonetheless, we found that developers switch between windows very frequently, keeping a window in focus (active) for only very short amounts of time. On average, developers focused on a specific window for only 15.7 (±77.3, median=2.2) seconds before switching to the next one.
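The switch statistics above can be derived from the log of focus events. A minimal sketch under assumptions: the `(timestamp, window_id)` event format and the example window ids are invented for illustration, and the buckets match the 0-1s / 1-5s / 5+s split used in the analysis:

```python
from statistics import mean, median

def switch_durations(focus_events):
    """Given a time-sorted list of (timestamp_s, window_id) focus events,
    return how long each window stayed active before the next switch."""
    return [t1 - t0 for (t0, _), (t1, _) in zip(focus_events, focus_events[1:])]

def bin_durations(durations):
    """Bin active durations into three buckets: under 1 s, 1-5 s, over 5 s."""
    bins = {"0-1s": 0, "1-5s": 0, "5+s": 0}
    for d in durations:
        if d < 1:
            bins["0-1s"] += 1
        elif d <= 5:
            bins["1-5s"] += 1
        else:
            bins["5+s"] += 1
    return bins

# Hypothetical five-event log; window ids are invented for illustration.
events = [(0.0, "ide"), (0.5, "browser"), (2.5, "ide"), (60.0, "email"), (61.0, "ide")]
durs = switch_durations(events)  # [0.5, 2.0, 57.5, 1.0]
summary = (mean(durs), median(durs), bin_durations(durs))
```

As in our data, the mean in such a log is pulled far above the median by a few long stays among many brief ones.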
The high number of switches is comparable to previous work [15, 32, 33], where researchers looked into activity and task switching, which are related to window switching.

As illustrated in Figure 3.3, the short time developers spend in a window is right-skewed by a high number of very short window switches. Overall, 32.9% of all window switches lasted less than 1 second, and only 30.7% of the windows remained active for longer than 5 seconds before the developer switched away again. The large number of short window switches might occur for several reasons, including the developer navigating through the open windows to find the relevant one, the developer closing irrelevant windows or selecting the wrong one, or the developer being briefly distracted. Most window switches are switches to previously open windows. In total, developers revisited already open windows in 76.6% of the window switches, and only in the remaining 23.4% did they open a new window.

3.2.3 Multi-Monitor Usage

Even though 94.1% (16 out of 17 participants) were using a multi-monitor setup, our data shows that developers use their primary monitor during the majority of the time they spend on the computer and position the most windows on it. On the Windows 10 operating system, one monitor is defined as the primary one, for which
Of the 94.1% (16 out of 17) of participants who used a multi-monitor setup, 14 (82.4%) worked with two monitors and the remaining 2 (11.8%) worked with three. Several developers used a laptop and connected it to an external monitor most of the time, but sometimes also used just the laptop and therefore only a single-monitor configuration. Over the course of the study, we found that 64.7% (11 participants) switched their monitor configuration from time to time, while 35.3% (6 participants) consistently used the same monitor setup throughout the study.

3.2.4 Usage of Screen Real Estate

We found that developers have several windows open and visible at the same time. Over all monitors, there were always an average of 1.5 (±0.7) windows fully and 3.3 (±1.7) windows partially⁴ visible. The currently active window, which on Windows 10 is always fully visible, took up 84.7% (±9.7%) of the monitors' screen real estate and was maximized⁵ 60.6% (±48.9%) of the time.

³ An "active window" means it is currently focused and/or receives keyboard input.
⁴ In a partially visible window, another window is partly overlapping, and thus covering, it.
⁵ A maximized window takes up the entire screen real estate of the monitor, other than the taskbar.

3.2.5 Visual Attention and Focus on Windows

The results on developers' window switching behavior so far are based on developers' computer interaction. Hence, it is still unclear whether developers are actually looking at the active window, or at another open window that is currently not in focus. Since a better understanding of visual attention also allows us to better understand which windows are relevant, we equipped participants with eye-trackers on their primary monitor.

The results of our analysis show that visual attention shifts similarly frequently as the active window, with an average of 31.2 (±87.8) seconds per window and a median of 4.7 (±20.4) seconds.
The distribution of shifts in visual attention between windows is again heavily skewed, with 19.3% (±8.4%) of the shifts being less than a second long. The longest time we observed a developer looking at the same window was 64 minutes, significantly lower than the 4.7 hours for the active window. This is, however, not surprising, since developers might just look away for short periods in between to relax their eyes.

Using the captured eye fixations, we further found that the visual attention matches the actively selected window in 83.1% of the cases on the primary monitor. The other 16.9% of the eye fixations were directed at windows that were open and visible, yet not the active one. It is not surprising that developers do not always look at the active window; in fact, developers looked away from the active window at least once for 67.7% of all active windows. When they did so, they looked at an average of 2.2 (±3.2) distinct non-active windows. There were, however, also 32.3% of the active windows that participants never looked away from until they switched the active window using mouse or keyboard. Overall, this analysis provides evidence that the active window is a relatively good indicator of a developer's attention.

3.2.6 Behavior by Activity

While we cannot know exactly what a developer was working on without observing them, the application and title of the active window can be used to estimate the activity. We observed different window interaction behavior depending on the type of activity a window belongs to. Our participants tended to keep some types of windows open in the background even when not used, while others were only opened for a specific task and then closed again.
File Explorer (211.2 ±110.9 minutes), email client (167.4 ±164.0), and chat application (165.4 ±120.2) windows were open for significantly longer than windows of applications for programming (51.9 ±40.8) or code review (9.7 ±15.0).

We also observed differences in the time between window switches depending on the activity. Email client windows (18.7 ±28.6) and web browser windows with work-related pages (13.3 ±9.5) are active for significantly longer than all other types of activities, which are only active for 3.6 (±4.8) seconds at a time.

Whether a window of an activity is often maximized varies depending on the activity's interaction with other windows. At the extremes, windows for code review, a rather independent activity, are almost always maximized, with 94.2% (±23.4%), while File Explorer windows almost never are, with 0.7% (±4.9%). The File Explorer is mostly used to select and open a file or document to start or continue an ongoing task and usually does not require a large window. In between, email client windows and programming windows are maximized 52.5% (±46.4%) and 44.7% (±38.4%) of the time, respectively. These activities can require large windows and undivided attention, but often require external resources and references as well.

3.3 Discussion

The results of our first study showed that most developers keep a large number of windows open at once, providing ample opportunity for distractions and switching to a non-relevant window [1, 30, 37]. The high frequency of window switches and the short duration a selected window stays active further show the potential for distraction and might be due to accidental switches that could lead to getting distracted from the primary task.

According to discussions with the participants of study 1 and our own experience, very few tasks require a high number of open windows.
Yet, developers are not taking any measures to potentially reduce distractions, which points to the value of a lightweight automated approach to reduce distractions by open windows and support developers in their focused work.

Finally, our results show that computer interaction can be a good indicator for determining a developer's attention. Therefore, we will base our relevance model in the following solely on computer interaction, due to its applicability and lesser invasiveness when deployed with information workers.

Chapter 4

Predicting Relevant Windows

To develop a lightweight approach that helps information workers maintain focus on the relevant windows and minimizes unintentional switches, we explored the prediction of relevant windows.

4.1 Model

We leveraged the results from study 1 to develop a model for predicting the most relevant windows. We used the following temporal and semantic features, which have been applied successfully in previous work [11, 38, 39]. Other features that we identified from related work or our own experience would have either made the model too complex or not allowed the real-time application in a real-world setting that we implemented in study 2.

4.1.1 Temporal, Recency

Recency is one of the simplest and most commonly used temporal features. It scores a window based on how recently it has been active, so that the last window before the current one receives the highest score. To calculate a recency score we focus on the window activation events in the computer interaction event log. Specifically, we start from the current window and go back in time through the ten previous windows, remove duplicates, and assign scores in reverse chronological order. From an order of A→B→C→B→D→C with C being the active window, we get a ranking of D, B, A, as the active window is not scored and duplicates are removed.

4.1.2 Temporal, Duration

Our duration feature scores a window by how long it has been active during the last 10 minutes.
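As an illustration, the recency ranking of Section 4.1.1 can be sketched in a few lines. This is a minimal Python sketch, not the implementation used in our tool, and the list-based event format is an assumption:

```python
def recency_ranking(activations, max_lookback=10):
    """Rank windows by recency. `activations` is the chronological list of
    window activation events; the last entry is the currently active window."""
    active = activations[-1]
    ranked = []
    # Walk backwards over the (up to ten) activations preceding the active
    # window, skipping the active window itself and any duplicates.
    for window in reversed(activations[-1 - max_lookback:-1]):
        if window != active and window not in ranked:
            ranked.append(window)
    return ranked

# The example from the text: A→B→C→B→D→C with C active yields D, B, A.
print(recency_ranking(["A", "B", "C", "B", "D", "C"]))  # ['D', 'B', 'A']
```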
The score is calculated by dividing the summed active duration by the observation interval, in our case 10 minutes. We picked this interval to provide a good balance between keeping the measure stable against short recent switches and only including windows that are related to the current task. Task switches are reported to occur every 3 to 12 minutes, so the current task should in most cases be entirely covered by our interval [27, 32].

4.1.3 Temporal, Frequency

The frequency feature shares the same observation interval with the duration feature. Windows are scored by counting the number of switches to that window: the more switches to a window, the higher its rank. Many tasks involve switching frequently between the same few windows, and this feature is intended to capture that behavior.

4.1.4 Semantic, Window Title

We assume that windows whose title and textual content are similar are themselves related and therefore relevant to each other. We use semantic features to predict how related two windows are. To calculate a score for predicting window switches, we are interested in how related a window is to the currently active window. We use the window titles to determine relatedness. While the contents of a window could provide more information, they are not as easily available as the window titles, which are exposed by operating system interfaces, and reading the contents of all windows is very invasive, introducing justified privacy concerns for users. We processed the window titles by removing all punctuation, filtering out stop words, and stemming the remaining words. The weights of the stemmed words in the processed window titles of all open windows are calculated by term frequency–inverse document frequency (TF-IDF). All processing operations are performed with tm, a library for text mining in R [12].
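A minimal sketch of this title processing and weighting, in illustrative Python with a toy stop-word list and without stemming (the actual analysis used the tm package in R, and the window titles below are made up):

```python
import math
import string

STOP_WORDS = {"the", "a", "an", "and", "of", "in", "to"}  # toy list for illustration

def tokenize(title):
    """Lowercase a window title, strip punctuation, and drop stop words."""
    cleaned = title.lower().translate(str.maketrans("", "", string.punctuation))
    return [w for w in cleaned.split() if w not in STOP_WORDS]

def tfidf_vectors(titles):
    """TF-IDF weight vectors (term -> weight dicts) for a list of window titles."""
    docs = [tokenize(t) for t in titles]
    n = len(docs)
    doc_freq = {}
    for doc in docs:
        for term in set(doc):
            doc_freq[term] = doc_freq.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)
            idf = math.log(n / doc_freq[term])
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

def cosine_similarity(a, b):
    """Cosine similarity between two sparse weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

titles = ["Thesis chapter 4 - Word", "Thesis chapter 5 - Word", "Spotify - Music"]
vecs = tfidf_vectors(titles)
# The two thesis windows score higher with each other than with the music player.
print(cosine_similarity(vecs[0], vecs[1]) > cosine_similarity(vecs[0], vecs[2]))  # True
```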
With these weights we calculate the cosine similarity of the window title of each open window to the window title of the currently active window. The open windows are ranked by the similarity value.

Figure 4.1: Probability of predicting the correct next window within the top X most relevant windows.

4.2 Empirical Analysis based on Study 1

After calculating the scores for the temporal and semantic features described above, we used the data collected during our monitoring study to determine the weight of each feature in a combined score. As a starting point, we looked into a linear equation due to its simplicity and speed of calculation. More complex equations and models are discussed as future work in Chapter 7. For each window switch we calculated a score for each open window equal to

α ∗ recency + β ∗ duration + γ ∗ frequency + δ ∗ semantic.

We investigated the features individually and created a ranking based on their predictive power. Figure 4.1 shows that the order is not easy to determine and depends on the target number of windows that are predicted. When we consider only our highest scoring window, the feature with the highest predictive power is recency, followed by semantic, duration, and then frequency. When considering the top 2 or top 3 results, the order changes as the semantic feature becomes less predictive.

As a first step, we tested all features with equal weights as well as all combinations of 3 out of 4 and 2 out of 4 features. Figure 4.1 shows the performance of the four individual features, a combination using equal weights (mean), and our final weighted combination (weighted). Interestingly, using equal weights performs significantly worse than either of our best two features. The best combination we found consisted of only three of our features: recency, semantic, and duration.
Adding frequency slightly reduced the result. We then proceeded to examine various weighting schemes of the three features, taking their order into account. We used a binary-search-like approach with intervals of 0.2, 0.1, and finally 0.05 to gradually optimize the weights. The optimal combination we found that way is 0.5 ∗ recency + 0.45 ∗ semantic + 0.05 ∗ duration. While its weight ended up being rather small, the duration feature improves the score by filtering out switches to windows that become active and relevant for a very short time, but should not change the whole context of the task.

Chapter 5

Study 2: Highlighting Relevant Windows to Support Focus

We explored how to visualize the most relevant windows to the user and de-emphasize the remaining ones. To that purpose, we extended the monitoring application with a component, called WindowDimmer, which fades away all windows that are not considered relevant.

5.1 WindowDimmer Approach

For our approach, called WindowDimmer, we extended the monitoring application that we developed for study 1 (Section 3.1.3) with the model to predict relevant windows that we determined in the empirical analysis (Section 4.2). For WindowDimmer, we chose to show the top 3 most relevant windows and reduce the visibility of all others. We chose 3 as the threshold because developers tend to fixate not only the active window but an average of 3.2 distinct windows, including the active one (Section 3.2.5). Hence, for WindowDimmer, we predict a ranked list of the 3 most relevant windows using our combined and weighted predictor (weighted in Figure 4.1), which reaches a 72.7% probability of selecting the correct three most relevant windows.
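The per-window scoring this predictor performs can be sketched as a plain linear combination with the weights found above. This is an illustrative Python sketch; the feature values are assumed to be normalized scores per window, and the window names are made up:

```python
# Final weights from the empirical analysis; frequency was dropped.
WEIGHTS = {"recency": 0.5, "semantic": 0.45, "duration": 0.05}

def relevance_score(features):
    """`features` maps feature name to this window's normalized score."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

def top_k(windows, k=3):
    """Rank open windows (name -> feature dict) and keep the top k undimmed."""
    ranked = sorted(windows, key=lambda w: relevance_score(windows[w]), reverse=True)
    return ranked[:k]

open_windows = {
    "IDE":     {"recency": 1.0, "semantic": 0.8, "duration": 0.6},
    "Browser": {"recency": 0.8, "semantic": 0.9, "duration": 0.3},
    "Email":   {"recency": 0.2, "semantic": 0.1, "duration": 0.1},
    "Chat":    {"recency": 0.4, "semantic": 0.0, "duration": 0.2},
}
print(top_k(open_windows))  # ['IDE', 'Browser', 'Chat']
```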
Whenever WindowDimmer receives an event that the user switched between windows, we gather a list of the remaining open windows and their window titles, extract their features, and calculate the relevance scores. Whenever a window title changes without a window switch, such as when the user selects a new tab inside the web browser, we recalculate the relevance scores.

Figure 5.1: WindowDimmer dims all less relevant windows and the desktop to emphasize the top 3 most relevant windows.

As visualized in the example in Figure 5.1, WindowDimmer reduces the visibility of all windows that are not the top three most relevant ones by dimming them. Concretely, the active window and the two windows with the highest relevance scores remain untouched, while all other windows and the desktop background, in case it is still visible, get slightly dimmed by applying a gray overlay with 25% opacity. The Windows 10 taskbar stays visible and un-dimmed to allow easy navigation between windows. Note that the most relevant windows are not necessarily all in the foreground, since the order in which Windows 10 stacks the windows is based on recency only, while our model includes duration and semantic similarity as well.

As mentioned above, by default WindowDimmer highlights the top 3 most relevant windows (including the currently active one). WindowDimmer provides features to adjust the number of windows that don't get dimmed and to pause or disable the dimming (see Figure 5.2). As with the monitoring application, WindowDimmer is based on and integrated into PersonalAnalytics¹ and as such is restricted to Microsoft Windows 10. The calculation of scores and application of the dimming is based on the background monitoring and occurs instantaneously after receiving operating system events. The calculation of feature scores was reimplemented from the reactive approach in the analysis to run in real time, using the Accord.net framework [49] for text processing of the window titles and scoring of the semantic feature.

¹ https://github.com/sealuzh/PersonalAnalytics

Figure 5.2: WindowDimmer settings that allow enabling, switching the mode for the study, and setting the number of windows that don't get dimmed.

5.2 Intervention Study

To evaluate the potential of increasing focus by dimming irrelevant windows, we conducted a second, two-part field study. In the first part, we investigated whether our model can identify the top three relevant windows as reported by participants. In the second part, participants used WindowDimmer in situ during their real-world work and provided us with qualitative feedback.

Figure 5.3: WindowDimmer pop-up asking participants to select the relevant windows out of a list of all open windows.

5.2.1 Procedure

At the beginning of the study, we asked participants to read through the consent form, ask any questions they might have, and sign it. After that, participants were asked to install the monitoring tool with the WindowDimmer extension on their main work computer. Finally, we asked participants to continue their work as usual for the next 5 workdays.

During the first part of study 2, which lasted three of the five days, we logged participants' interactions with their windows and asked them to answer a pop-up survey in intervals of 40-50 minutes (varied to introduce some randomization). The pop-ups, as seen in Figure 5.3, listed all currently open windows and asked participants to tick a check box for each window that was relevant to their current work. No definition of relevance was given, as we did not want to bias participants' responses by, for example, restricting relevance to work-related windows.
With this part of the study, we wanted to examine whether our predictive model based on user interaction is also able to accurately predict user-defined relevance.

During the second part of study 2, which lasted the remaining two days, we continued logging participants' computer interaction and enabled WindowDimmer. We disabled the self-reporting pop-up to avoid frustrating participants in case their selection of relevant windows differed from the windows that our approach dimmed. At the end of the second part of study 2, we conducted semi-structured interviews to receive feedback on participants' experience with WindowDimmer. We also asked them to answer a survey with demographic questions as well as the System Usability Scale [7], a standardized survey for evaluating the usability of our approach. Finally, we collected the logged data from participants' machines, after giving them the opportunity to obfuscate the data to alleviate potential privacy concerns. We then uninstalled the monitoring application and gave participants $30 US to compensate them for their efforts.

5.2.2 Participants

We recruited a total of 12 software developers: 6 professional software developers who worked for 4 different software companies in Canada, the US, Germany, and Switzerland, and 6 computer science students (1 undergraduate, 4 graduate, 1 postdoc) in Canada and Switzerland. Contact with the participants was established through personal contacts, and participation was entirely voluntary. 5 of our participants identified as female, the remaining 7 as male. The average age was 26, with an average of 2.6 years of experience for the working participants.

5.2.3 Collected Data

We collected all data required to assess our approach: data on connected screens, open windows, and window interaction, comparable to the monitoring study as described in Section 3.1.4. We additionally recorded the calculated relevance scores for each feature as well as the summed score and rank of the open windows.
For each window in a submitted pop-up, we recorded the participant's response and whether our model predicted the window as relevant or would have dimmed it. In total, we collected 266 pop-up responses, with an average of 22.2 (±9.5) responses per participant over three days. The number of responses varied between 11 and 36, depending on how much time the participant spent on their computer. Each pop-up contained a list of 14.5 (±11.8) open windows on average.

5.3 Results

The intervention study provides similar data to the monitoring study, but extends it with information related to our model and the WindowDimmer approach. We compare data on computer desktops, windows, and window interaction with the previous results, investigate participants' reports on relevant windows, and compare window interaction behavior between the two parts of the study.

5.3.1 Window Interaction Behavior

The participants of the intervention study had screen setups similar to those of the participants of the previous monitoring study. All participants used a laptop, with six participants adding one and three participants adding two external screens. We calculated the number of open windows with the same methodology as before and observed 13.5 (±11.1) open windows across participants, ranging from 4 to 31, a slightly higher average than the 12.3 open windows in the monitoring study. The growth in open windows over the course of the day, from 10.0 (±8.0) in the morning to 14.3 (±8.9) in the evening, is also comparable, though less stable due to fewer participants and a shorter observation time.

Unlike the over 30% before, only 17.0% of the captured window switches lasted less than one second, although the distribution is similar to the one in Figure 3.3. This could be due to the different applications used by the participants, influenced by their different job roles.
The proportion of window switches to already open windows is also lower, by about 8 percentage points, at 67.8%.

5.3.2 Predicting Relevant Windows

As a first step, we wanted to evaluate whether the predictions of our relevance model overlap with the perceptions of the participants. Therefore, we compared the reported relevance of open windows in the first part of the study with the top 3 relevant windows predicted based on user interaction. In the 266 self-reports that we collected, participants reported 1.8 (±0.9) windows to be relevant out of the 14.5 (±11.8) windows that were open and listed in our pop-up. This number includes the currently active window for 85.3% of all pop-ups.

When we compare the top 3 windows that our relevance model predicted with the windows reported as relevant by the participants, we find that 88.3% of the predictions matched the self-reports (10.4% true positives and 77.9% true negatives). 9.8% of all windows were predicted relevant by our model, but not by the participants (false positives). Only 1.8% should have been considered relevant according to our participants, but were not predicted as relevant by our model (false negatives).

Figure 5.4: Number of windows reported (by participants) and predicted (by our model) as relevant, averaged over all reports. The three top-scoring windows in our model are considered relevant.

Figure 5.4 shows the relevance results per participant. To account for the varying number of responses per participant, the values are averaged per pop-up.
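The aggregate figures above (matches, true/false positives and negatives) follow directly from per-window pairs of reported and predicted relevance. A small illustrative sketch, with made-up example data:

```python
def prediction_breakdown(pairs):
    """`pairs` is a list of (reported_relevant, predicted_relevant) booleans,
    one entry per window listed in a pop-up."""
    counts = {"tp": 0, "tn": 0, "fp": 0, "fn": 0}
    for reported, predicted in pairs:
        if reported and predicted:
            counts["tp"] += 1      # reported and predicted relevant
        elif not reported and not predicted:
            counts["tn"] += 1      # neither reported nor predicted relevant
        elif predicted:
            counts["fp"] += 1      # predicted relevant, not reported
        else:
            counts["fn"] += 1      # reported relevant, not predicted
    counts["match_rate"] = (counts["tp"] + counts["tn"]) / len(pairs)
    return counts

# Made-up example: 10 windows from one pop-up.
pairs = [(True, True)] * 2 + [(False, False)] * 6 + [(False, True)] + [(True, False)]
print(prediction_breakdown(pairs)["match_rate"])  # 0.8
```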
The number of windows incorrectly predicted as not relevant, incorrectly predicted as relevant, and correctly predicted as relevant are very similar across participants. The value that varies the most is the number of windows that were both predicted and reported as not relevant. This is most likely due to the varying number of open windows per participant.

5.3.3 Evaluation of WindowDimmer

During the study period in which participants used WindowDimmer, an average of 30.4% of the computer desktop area (all screens) was dimmed. The results also show that the dimmed area increases with the size of the computer desktop, with a Pearson correlation of 0.59 (p=0.00011). Figure 5.6 illustrates this effect. During the study, all of our participants used a laptop, with 9 participants usually connecting at least one additional screen (see Section 5.3.1). Their screen resolutions ranged from 1280 * 680 to 3840 * 2160.

Figure 5.5: Distribution of the duration a window is active before and after the dimming is activated, by type of application. The distribution is binned into 3 buckets (0-1s, 1-5s, 5+s) to highlight the changes.

During the intervention phase with WindowDimmer activated, participants had fewer window switches and spent more time in relevant windows. The number of window switches lasting less than one second decreased from 18.4% to 15.3%; however, a paired t-test showed that this difference is not significant (p=0.1372). Figure 5.5 displays a breakdown by application type of the length of window activations. While there is a decrease in very short window activations across all types of applications, the ratio of very short window interactions varies.
Applications that require a higher level of focus and concentration, like IDEs and web browsers, stay active for longer times, while email clients have more very short activations.

Having WindowDimmer reduce the visibility of open windows might encourage participants to open more windows, as they are no longer as distracting. However, we do not see a consistent change in the number of open windows across participants: while the number decreased for 6 participants, it increased for the other 6. The number of windows seems to be much more related to the type of work and tasks performed during the day.

For each window switch by a participant, we recorded whether our model predicted the target window to be relevant, as well as the rank of the target window. With the dimming active, window switches to windows not predicted to be relevant, and therefore dimmed during the second part of the study, decreased by 10.2%, from 48.2% to 43.3%, while switches to the window WindowDimmer predicted as most relevant increased from 29.8% to 33.1%. The remaining switches target the second most relevant window, which is the third window not to be dimmed, where we also saw an increase of one percentage point.

Figure 5.6: Percentage of desktop area dimmed by desktop size. Both are calculated across all screens.

5.4 Participant Feedback

Our post-study survey included the System Usability Scale (SUS). We measured a mean SUS score of 72.7, which is considered to represent "good usability" based on a large survey of SUS scores from previous studies [3].

We further interviewed participants after they completed the study to collect qualitative feedback and ask about specific situations where they perceived our approach to be helpful or hindering to their work. Generally, the WindowDimmer approach was perceived as useful by 8 of the 12 participants.
They mentioned the window dimming was "helpful" or "worked well". The other 4 participants reported it having little or no effect, but also not hindering their work.

P9 and P11 reported an interest in dimming everything but the currently active window "to really focus on the current task". Leaving only one window undimmed instead of three might help them stay better focused. Additionally, they liked that the dimming applied not only to other windows but also to the desktop background itself. These two participants found their desktop background "normally very cluttered and that can be pretty distracting". With one participant calling himself "not a great organizer of my desktop", dimming the desktop background reduces the focus on the clutter.

Three participants (P6, P9, and P10) found themselves distracted by sudden changes in the dimming. When they had too many open windows, they wanted to make sure WindowDimmer was not dimming anything important, which caused them to look at windows that had just had the dimming applied. P9 and P10 would prefer a smoother transition leading to the full dimming over a few seconds.

All participants generally agreed on the problem of decreased focus when working on their window-based computer desktops. Although outside the scope of WindowDimmer, many participants reported having trouble finding the relevant content within tabbed interfaces, especially the web browser. Two participants (P2 and P10) suggested applying a similar dimming approach to tabs they hadn't used in a while and that were no longer relevant.

Chapter 6

Threats to Validity

The biggest threats to the validity of our results are external validity threats, due to the limited number of participants and the type of information workers who participated. We elaborate on those and on the threats to construct and internal validity in this section.

6.1 Construct Validity

Both of our studies were conducted in the real world to gather data that is as realistic as possible.
Using a monitoring application in this unsupervised scenario bears the risk of inaccurate data being included due to bugs in the implementation or unexpected restrictions of the participants' devices. To mitigate this risk, we built our monitoring application based on an existing application that had been used in previous studies and conducted test-runs on various machines prior to running both of our studies.

The logged data is limited in that the monitoring application can only capture the participants' interactions with the computer, and not away from it. Also, the eye-tracker did not allow tracking multiple monitors at the same time, which is why our eye-gaze data is limited to the primary screen only.

6.2 Internal Validity

Monitoring might implicitly influence participants' behaviors, since they feel observed. To mitigate this risk, we constructed our monitoring application in such a way that data is collected in the background only. Many participants reported after the study that they forgot about the monitoring application shortly after installing it. In study 1, the eye-tracker could have reminded participants of the ongoing study, but no other interaction was required. In the first three days of study 2, the periodic self-reports (a pop-up to collect window relevancy data) might have been considered more intrusive, but participants did not mention anything in that regard in the post-study interview. To reduce the intrusiveness of the pop-up, we minimized the amount of time required to select the most relevant windows by adding application icons for quick identification and requiring only a single click to select each relevant window. We further allowed participants to skip individual pop-ups to prevent them from submitting inaccurate information in situations where they were unable to spend enough time selecting the actually relevant windows.
In the second part of study 2, where we applied WindowDimmer, we actively influenced participants' window switching behavior, but this was the intention of the intervention.

6.3 External Validity

Our selection of participants and the total number of participants in either study could limit the generalizability of our findings. While all participants were information workers, participants in study 1 were professional software developers, and 6 (of the 12) participants in study 2 were computer science students. We believe that recruiting software developers as one type of information worker for the first study is a good starting point, also because it allows better comparison between individuals. Overall, we further tried to mitigate the threat to generalizability by recruiting participants from different companies, countries, and contexts. However, further research is needed to extend this to a broader set of information workers. The daily activities of a developer vary greatly depending on their team, project, and possibly the day of the week. We tried to mitigate this by recruiting participants from different companies and staggering the start of the study.

The architecture of our monitoring application required us to restrict the study to participants using Microsoft Windows 10 as their operating system. Further studies with extended tooling are required to assess the effect the window managers of different operating systems have on the window interactions and focus of their users.

Chapter 7

Discussion

The objective of our research is to reduce the visual clutter and distractions caused by open windows and thereby foster focused work. To this end, we studied information workers' window interactions and developed the WindowDimmer approach to actively dim possibly distracting windows. In a pilot study, we collected valuable feedback from users to smoothen the study process and eliminate bugs in the approach.
In the actual study, we then found that participants thought that Win-dowDimmer was usable and helpful.We addressed the open and visible windows, but there are obviously manyother forms of distractions on the desktop. For example, minimized or occludedwindows can be distracting as well, but are not addressed by our dimming ap-proach. Not all forms of distraction effect users the same way. Influenced by thenumber of screens available and their screen setup, our participants used differentlayouts of windows which has an impact on how easy it is to block out distractionsand focus on the active window.Addressing Visual Clutter Similar to previous studies [17], we have seen the sizeof monitors and number of windows increase. This trend is most likely to con-tinue, increasing the potential for visual clutter from open windows. Predicting thewindows that are relevant at any given point in time can be helpful for multipleapproaches. We have focused on increasing the visibility of relevant windows as asimple yet effective first step, but can imagine approaches with a larger impact like35minimizing or hiding windows that are not relevant and grouping related windowsthat are related to a task to have faster access.Relevance of Tabs Within Windows At this point, we used a granularity level ofapplication windows, yet given the high usage of tabs within windows—especiallythe ever more important web browser—predicting relevance on a finer granular-ity would help to provide better support and could have been a useful feature inWindowDimmer. Two participants specifically mentioned the support for tabs intheir interviews as they often have up to 100 tabs open in the Chrome browser.The current WindowDimmer approach is not directly transferable to tabs, as thereis by definition only one tab visible (per window). Approaches like the PlagueDoctor [35] that use color to show relevance could be applied to the tab titles. 
Al-ternatively showing a list of the top 3 most relevant tabs on the side could provideeasier access for users with lots of open tabs.Models to Predict Relevance Our model for predicting relevance uses three tem-poral and semantic features and a linear combination with heuristic weighting ofthese features. Building on a larger dataset, adding more features, and using ma-chine learning techniques could improve our model significantly. Our studies re-vealed large differences in the window interaction behavior between our partici-pants. To achieve the best relevance prediction, a personalized model might be re-quired. Some participants wished for the functionality of overriding the relevanceby manually setting the dimming of a window. A more sophisticated, personalmodel could utilize these inputs.Interactive Approaches Our approach features a model that runs in the back-ground and passively collects user data automatically. To adapt to the variousactivities a developer engages in over the course of the workday, a more inter-active approach could take user settings into account. A very specialized activitylike programming could feature a lower number of not dimmed windows and amore change-resistant model, whereas a broader research activity could dim fewerwindows and value the semantic relatedness of windows higher. Even more direct36control over dimmed windows could give users the ability to toggle whether a spe-cific window is dimmed or manually mark windows relevant or not relevant for thecurrent task. The individual labels could further be used to dynamically improvethe relevance model.Accuracy of Eye Tracking The eye tracker used in our monitoring study is limitedto a single screen. We instructed our participants to configure it for their primaryscreen which is used most often. 
Since eye tracking is becoming less invasive—the model used in our study is a small bar mounted on the bottom bezel of the screen—we will most likely be able to capture more accurate data from multiple screens in the near future, which will provide new means of supporting window interactions. In most cases the attention is on the active window; however, our participants often looked away from the active window at least once, showing us that more than just the active window is relevant.

Supporting Different Operating Systems
The current computer market is almost entirely divided between Microsoft Windows, Apple macOS, and the various flavors of Linux (with many different desktop managers). While almost all of these are window-based, the features for managing windows are not the same. Software developers who use macOS were excluded from the studies but expressed great interest in the dimming approach. Some reported working with mostly non-maximized windows and struggling with the number of open windows. While these studies were conducted exclusively on the Microsoft Windows 10 operating system, it might be worth testing the effect on other operating systems that provide different window management functionality.

Chapter 8
Conclusion

With the constant improvements in hardware and software technologies, screen sizes and resolutions are increasing and usage behavior on window-based desktops is changing. Our monitoring study provides recent insights into how software developers set up their desktop environment and how they interact with windows on the desktop. We observed their work day to be very flexible, with a changing number of screens, many open windows, and very frequent window switches. This mirrors the assessments of a fragmented work day found in previous work.

Based on the collected data, we devised a model to predict the relevance of open windows that showed an accuracy of 72.7% when predicting the 3 most relevant windows.
Evaluating the model in a second study with a different set of information workers showed that the model can predict the relevance self-reported by participants for 88.3% of the windows.

Using the model, we developed an approach called WindowDimmer that reduces the visibility of windows that we predict to be less relevant. The results of our second study, in which participants were asked to report which windows they consider relevant and to test the dimming approach, showed a reduction in window switches and an increase in switches to relevant windows when the dimming is active. 75% of our participants felt that WindowDimmer was useful in supporting their focus on relevant windows. We plan to achieve a higher impact by improving our model and testing different visualization techniques.

Bibliography

[1] B. P. Bailey and J. A. Konstan. On the need for attention-aware systems: Measuring effects of interruption on task performance, error rate, and affective state. Computers in Human Behavior, 22(4):685–708, July 2006. doi: 10.1016/j.chb.2005.12.009. → pages 1, 4, 16

[2] B. P. Bailey, J. A. Konstan, and J. V. Carlis. The Effects of Interruptions on Task Performance, Annoyance, and Anxiety in the User Interface. In Interact, volume 1, pages 593–601, Tokyo, Japan, 2001. IOS Press. → page 1

[3] A. Bangor, P. T. Kortum, and J. T. Miller. An Empirical Evaluation of the System Usability Scale. International Journal of Human-Computer Interaction, 24(6):574–594, July 2008. doi: 10.1080/10447310802205776. → page 30

[4] J. Beatty. Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychological Bulletin, 91(2):276–292, 1982. doi: 10.1037/0033-2909.91.2.276. → page 5

[5] R. Bednarik. Expertise-dependent visual attention strategies develop over time during debugging with multiple code representations. International Journal of Human-Computer Studies, 70(2):143–155, Feb. 2012. doi: 10.1016/j.ijhcs.2011.09.003. → page 5

[6] M. S. Bernstein, J. Shrager, and T. Winograd.
Taskposé: Exploring Fluid Boundaries in an Associative Window Visualization. In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology - UIST '08, page 231, Monterey, CA, USA, 2008. ACM Press. doi: 10.1145/1449715.1449753. → page 6

[7] J. Brooke. SUS: A 'quick and dirty' usability scale. In Usability Evaluation in Industry, pages 189–194. CRC Press, London, 1996. doi: 10.1201/9781498710411. → page 26

[8] M. Crosby and J. Stelovsky. How Do We Read Algorithms? A Case Study. Computer, 23(1):25–35, Jan. 1990. doi: 10.1109/2.48797. → page 5

[9] M. Czerwinski, E. Horvitz, and S. Wilhite. A Diary Study of Task Switching and Interruptions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '04, pages 175–182, New York, NY, USA, 2004. ACM. doi: 10.1145/985692.985715. → page 1

[10] J. Dostal, P. O. Kristensson, and A. Quigley. Subtle Gaze-Dependent Techniques for Visualising Display Changes in Multi-Display Environments. In Proceedings of the 2013 International Conference on Intelligent User Interfaces - IUI '13, page 137, Santa Monica, California, USA, 2013. ACM Press. doi: 10.1145/2449396.2449416. → page 7

[11] A. N. Dragunov, T. G. Dietterich, K. Johnsrude, M. McLaughlin, L. Li, and J. L. Herlocker. TaskTracer: A Desktop Environment to Support Multi-tasking Knowledge Workers. In Proceedings of the 10th International Conference on Intelligent User Interfaces - IUI '05, page 75, San Diego, California, USA, 2005. ACM Press. doi: 10.1145/1040830.1040855. → pages 6, 7, 18

[12] I. Feinerer, K. Hornik, and D. Meyer. Text Mining Infrastructure in R. Journal of Statistical Software, 25(5), 2008. doi: 10.18637/jss.v025.i05. → page 19

[13] K. Fenstermacher and M. Ginsburg. A Lightweight Framework for Cross-Application User Monitoring. Computer, 35(3):51–59, Mar. 2002. doi: 10.1109/2.989930. → page 5

[14] T. Fritz, A. Begel, S. C. Müller, S. Yigit-Elliott, and M. Züger.
Using Psycho-Physiological Measures to Assess Task Difficulty in Software Development. In Proceedings of the 36th International Conference on Software Engineering - ICSE 2014, pages 402–413, Hyderabad, India, 2014. ACM Press. doi: 10.1145/2568225.2568266. → page 5

[15] V. M. González and G. Mark. "Constant, Constant, Multi-tasking Craziness": Managing Multiple Working Spheres. In Proceedings of the 2004 Conference on Human Factors in Computing Systems - CHI '04, pages 113–120, Vienna, Austria, 2004. ACM Press. doi: 10.1145/985692.985707. → pages 1, 5, 6, 13

[16] D. M. Hilbert and D. F. Redmiles. Extracting Usability Information from User Interface Events. ACM Computing Surveys, 32(4):384–421, Dec. 2000. doi: 10.1145/371578.371593. → page 5

[17] D. R. Hutchings, G. Smith, B. Meyers, M. Czerwinski, and G. Robertson. Display Space Usage and Window Management Operation Comparisons between Single Monitor and Multiple Monitor Users. In Proceedings of the Working Conference on Advanced Visual Interfaces - AVI '04, page 32, Gallipoli, Italy, 2004. ACM Press. doi: 10.1145/989863.989867. → pages 5, 35

[18] S. T. Iqbal, X. S. Zheng, and B. P. Bailey. Task-evoked pupillary response to mental workload in human-computer interaction. In Extended Abstracts of the 2004 Conference on Human Factors and Computing Systems - CHI '04, page 1477, Vienna, Austria, 2004. ACM Press. doi: 10.1145/985921.986094. → page 5

[19] M. R. Jakobsen and K. Hornbæk. Piles, tabs and overlaps in navigation among documents. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries - NordiCHI '10, page 246, Reykjavik, Iceland, 2010. ACM Press. doi: 10.1145/1868914.1868945. → page 7

[20] S. Jeuris, P. Tell, S. Houben, and J. E. Bardram. The Hidden Cost of Window Management. CoRR, abs/1810.04673, 2018. → pages 1, 4

[21] M. Kersten and G. C. Murphy.
Mylar: A degree-of-interest model for IDEs. In Proceedings of the 4th International Conference on Aspect-Oriented Software Development - AOSD '05, pages 159–168, Chicago, Illinois, 2005. ACM Press. doi: 10.1145/1052898.1052912. → page 6

[22] J. Klingner. Fixation-aligned pupillary response averaging. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications - ETRA '10, page 275, Austin, Texas, 2010. ACM Press. doi: 10.1145/1743666.1743732. → page 5

[23] S. Leroy. Why is it so hard to do my work? The challenge of attention residue when switching between work tasks. Organizational Behavior and Human Decision Processes, 109(2):168–181, 2009. doi: 10.1016/j.obhdp.2009.04.002. → page 2

[24] P. L. Li, A. J. Ko, and J. Zhu. What Makes a Great Software Engineer? In 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering - ICSE '15, pages 700–710, Florence, Italy, May 2015. IEEE Press. doi: 10.1109/ICSE.2015.335. → page 2

[25] G. Mark, V. M. Gonzalez, and J. Harris. No Task Left Behind?: Examining the Nature of Fragmented Work. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '05, page 321, Portland, Oregon, USA, 2005. ACM Press. doi: 10.1145/1054972.1055017. → pages 5, 6

[26] G. Mark, D. Gudith, and U. Klocke. The Cost of Interrupted Work: More Speed and Stress. In Proceeding of the Twenty-Sixth Annual CHI Conference on Human Factors in Computing Systems - CHI '08, page 107, Florence, Italy, 2008. ACM Press. doi: 10.1145/1357054.1357072. → page 1

[27] G. Mark, S. Iqbal, M. Czerwinski, and P. Johns. Focused, Aroused, but so Distractible: Temporal Perspectives on Multitasking and Communications. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing - CSCW '15, pages 903–916, Vancouver, BC, Canada, 2015. ACM Press. doi: 10.1145/2675133.2675221. → pages 6, 19

[28] G. Mark, S. Iqbal, and M. Czerwinski. How blocking distractions affects workplace focus and productivity.
In Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers - UbiComp '17, pages 928–934, Maui, Hawaii, 2017. ACM Press. doi: 10.1145/3123024.3124558. → page 1

[29] G. Mark, M. Czerwinski, and S. T. Iqbal. Effects of Individual Differences in Blocking Workplace Distractions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18, pages 1–12, Montreal, QC, Canada, 2018. ACM Press. doi: 10.1145/3173574.3173666. → page 6

[30] S. McMains and S. Kastner. Interactions of top-down and bottom-up mechanisms in human visual cortex. Journal of Neuroscience, 31(2):587–597, 2011. doi: 10.1523/JNEUROSCI.3766-10.2011. → pages 1, 4, 16

[31] A. N. Meyer, T. Fritz, G. C. Murphy, and T. Zimmermann. Software Developers' Perceptions of Productivity. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering - FSE 2014, pages 19–29, Hong Kong, China, 2014. ACM Press. doi: 10.1145/2635868.2635892. → pages 1, 6

[32] A. N. Meyer, L. E. Barton, G. C. Murphy, T. Zimmermann, and T. Fritz. The Work Life of Developers: Activities, Switches and Perceived Productivity. IEEE Transactions on Software Engineering, 43(12):1178–1193, Dec. 2017. doi: 10.1109/TSE.2017.2656886. → pages 1, 2, 5, 6, 9, 13, 19

[33] A. N. Meyer, G. C. Murphy, T. Zimmermann, and T. Fritz. Design Recommendations for Self-Monitoring in the Workplace: Studies in Software Development. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW):1–24, Dec. 2017. doi: 10.1145/3134714. → pages 9, 13

[34] R. Minelli, A. Mocci, and M. Lanza. I Know What You Did Last Summer - An Investigation of How Developers Spend Their Time. In 2015 IEEE 23rd International Conference on Program Comprehension, pages 25–35, Florence, Italy, May 2015. IEEE. doi: 10.1109/ICPC.2015.12. → page 5

[35] R. Minelli, A. Mocci, and M. Lanza.
The Plague Doctor: A Promising Cure for the Window Plague. In Proceedings of the 2015 IEEE 23rd International Conference on Program Comprehension, ICPC '15, pages 182–185, Piscataway, NJ, USA, 2015. IEEE Press. doi: 10.1109/ICPC.2015.28. → pages 7, 36

[36] G. Murphy, M. Kersten, and L. Findlater. How are Java software developers using the Eclipse IDE? IEEE Software, 23(4):76–83, July 2006. doi: 10.1109/MS.2006.105. → page 5

[37] M. Niemelä and P. Saariluoma. Layout attributes and recall. Behaviour & Information Technology, 22(5):353–363, 2003. doi: 10.1080/0144929031000156924. → pages 1, 4, 16

[38] N. Oliver, G. Smith, C. Thakkar, and A. C. Surendran. SWISH: Semantic Analysis of Window Titles and Switching History. In Proceedings of the 11th International Conference on Intelligent User Interfaces - IUI '06, page 194, Sydney, Australia, 2006. ACM Press. doi: 10.1145/1111449.1111492. → pages 6, 18

[39] N. Oliver, M. Czerwinski, G. Smith, and K. Roomp. RelAltTab: Assisting Users in Switching Windows. In Proceedings of the 13th International Conference on Intelligent User Interfaces - IUI '08, page 385, Gran Canaria, Spain, 2008. ACM Press. doi: 10.1145/1378773.1378836. → pages 6, 18

[40] G. Robertson, E. Horvitz, M. Czerwinski, P. Baudisch, D. Hutchings, B. Meyers, D. Robbins, and G. Smith. Scalable Fabric: Flexible Task Management. In Proceedings of the Working Conference on Advanced Visual Interfaces - AVI '04, pages 85–89, Gallipoli, Italy, 2004. ACM Press. doi: 10.1145/989863.989874. → pages 6, 11

[41] D. Roethlisberger, O. Nierstrasz, and S. Ducasse. Autumn Leaves: Curing the Window Plague in IDEs. In 2009 16th Working Conference on Reverse Engineering, pages 237–246, Lille, France, 2009. IEEE. doi: 10.1109/WCRE.2009.18. → pages 7, 12

[42] R. D. Rogers and S. Monsell. Costs of a predictible switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124(2):207, 1995. doi: 10.1037/0096-3445.124.2.207. → page 1

[43] J. S. Rubinstein, D. E. Meyer, and J. E. Evans.
Executive control of cognitive processes in task switching. Journal of Experimental Psychology: Human Perception and Performance, 27(4):763, 2001. doi: 10.1037/0096-1523.27.4.763. → page 1

[44] H. Sanchez, R. Robbes, and V. M. Gonzalez. An Empirical Study of Work Fragmentation in Software Evolution Tasks. In 2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER), pages 251–260, Montreal, QC, Canada, Mar. 2015. IEEE. doi: 10.1109/SANER.2015.7081835. → page 6

[45] B. Sharif and J. I. Maletic. An eye tracking study on the effects of layout in understanding the role of design patterns. In 2010 IEEE International Conference on Software Maintenance, pages 1–10, Timișoara, Romania, Sept. 2010. IEEE. doi: 10.1109/ICSM.2010.5609582. → page 5

[46] B. Sharif and J. I. Maletic. An Eye Tracking Study on camelCase and under_score Identifier Styles. In 2010 IEEE 18th International Conference on Program Comprehension, pages 196–205, Braga, Portugal, June 2010. IEEE. doi: 10.1109/ICPC.2010.41. → page 5

[47] J. Singer, T. Lethbridge, N. Vinson, and N. Anquetil. An examination of software engineering work practices. In Proceedings of the 1997 Conference of the Centre for Advanced Studies on Collaborative Research, CASCON '97, pages 21–, Toronto, Ontario, Canada, 1997. IBM Press. → page 5

[48] G. Smith, P. Baudisch, G. Robertson, M. Czerwinski, B. Meyers, and D. Robbins. GroupBar: The TaskBar Evolved. In OZCHI 2003: Conference for the Computer-Human Interaction Special Interest Group of the Human Factors Society of Australia, Brisbane, Australia, Jan. 2003. University of Queensland. → pages 6, 7

[49] C. Souza, A. Cross, A. Gustafsson, M. D. Catalano, S. Voss, H. Zavvari, A. Messer, N. Markou, Fo40225, A. , A. Kirillov, W. Hongqi, R. Temer, D. Schroeder, K. Lukaschenko, I. Pizhenko, Milos Simic, B. Phelan, M. P. Rasmussen, M. , E. Hubbell, B. Collins, A. Zinenko, F. , D. Blank, D. Durrleman, B. Song, B. Staib, A. Pérez, and S. . The Accord.Net Framework, Oct.
2017. → page 24

[50] S. Tak and A. Cockburn. Improved window switching interfaces. In Proceedings of the 28th International Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA '10, page 2915, Atlanta, Georgia, USA, 2010. ACM Press. doi: 10.1145/1753846.1753884. → page 7

[51] S. Tak, A. Cockburn, K. Humm, D. Ahlström, C. Gutwin, and J. Scarr. Improving Window Switching Interfaces. In T. Gross, J. Gulliksen, P. Kotzé, L. Oestreicher, P. Palanque, R. O. Prates, and M. Winckler, editors, Human-Computer Interaction – INTERACT 2009, volume 5727, pages 187–200. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009. doi: 10.1007/978-3-642-03658-3_25. → page 7

[52] V. W.-S. Tseng, M. L. Lee, L. Denoue, and D. Avrahami. Overcoming Distractions during Transitions from Break to Work using a Conversational Website-Blocking System. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19, pages 1–13, Glasgow, Scotland, UK, 2019. ACM Press. doi: 10.1145/3290605.3300697. → page 6

[53] M. Waldner, M. Steinberger, R. Grasset, and D. Schmalstieg. Importance-driven compositing window management. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems - CHI '11, page 959, Vancouver, BC, Canada, 2011. ACM Press. doi: 10.1145/1978942.1979085. → page 7

[54] A. Warr, E. H. Chi, H. Harris, A. Kuscher, J. Chen, R. Flack, and N. Jitkoff. Window Shopping: A Study of Desktop Window Switching. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI '16, pages 3335–3338, Santa Clara, California, USA, 2016. ACM Press. doi: 10.1145/2858036.2858526. → page 7

[55] S. Yusuf, H. Kagdi, and J. Maletic. Assessing the Comprehension of UML Class Diagrams via Eye Tracking. In 15th IEEE International Conference on Program Comprehension (ICPC '07), pages 113–122, Banff, Alberta, Canada, June 2007. IEEE. doi: 10.1109/ICPC.2007.10. → page 5

