UBC Theses and Dissertations
Designing for authoring and sharing of advanced personalization Haraty, Mona 2016


DESIGNING FOR AUTHORING AND SHARING OF ADVANCED PERSONALIZATION

by

Mona Haraty

B.Sc., The University of Tehran, 2008
M.Sc., Simon Fraser University, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2016

© Mona Haraty, 2016

Abstract

Interactive technologies have become prevalent in all aspects of life, including managing our tasks, looking for information, and connecting with others. We often adapt our behaviors, consciously or unconsciously, to accommodate the technology. The unique nature of our needs and preferences, and how they change over time, are better supported by technologies that are designed to be personalizable. A lack of personalization facilities limits our range of behaviors.

In this dissertation, we focus on understanding and supporting differences in individuals' behaviors through forms of personalization that go beyond choosing among a set of predefined options, by allowing users to author new functionalities. We refer to such personalizations as "advanced personalizations." Authoring advanced personalizations, when supported, is often time-consuming and requires programming skills. Consequently, whether because of a lack of ability or of time, many users instead take advantage of personalizations created and shared by others. The overarching goal of this dissertation is to design for authoring and sharing of advanced personalizations. We explore this goal in the domain of personal task management (PTM), where rich individual differences deeply influence user behavior and tool use.
First, to gain insights into individual differences in PTM, as well as into changes in PTM behaviors over time, we conducted a series of studies: a focus group and contextual interviews in an academic setting, a large survey questionnaire with a broader population, and follow-up interviews with some of the survey respondents. These studies provide insights into the different types of advanced personalizations that a PTM tool needs to support.

Next, we designed a personalizable PTM tool with two key components for authoring advanced personalizations, building on ideas from end-user programming approaches and following theoretical guidelines on designing personalizable tools, such as meta-design guidelines. A controlled user study of our design revealed opportunities and challenges in supporting advanced personalization, and our detailed design process provides a practical starting point for designing personalizable tools. Finally, through studying personalization sharing practices, we characterized the multi-faceted nature of online personalization sharing ecosystems, which include multiple components for hosting, discussing, and managing personalizations. Our findings also highlight tradeoffs and design considerations in such ecosystems.

Preface

All research reported in Chapters 2, 3, 4, and 5 was conducted under the supervision of Dr. Joanna McGrenere (Department of Computer Science). Dr. Charlotte Tang (postdoctoral fellow, Department of Computer Science, UBC) helped with the design of the survey questionnaire (Chapters 2 and 3) and with the data analysis in Chapter 3 by acting as a second coder for a subset of the data. Dr. Andrea Bunt (Department of Computer Science, University of Manitoba) co-supervised the study of personalization sharing (Chapter 5). All research with human participants was reviewed and approved by the UBC Research Ethics Board.
The numbers and project titles for the associated Certificates of Approval are:

H12-01599 CS HCI Course Projects
H11-01976 Analyzing, Designing, and Building Technologies for Personal Task Management

I was the primary contributor to all aspects of this research. Chapter 2 includes a portion of a paper on individual differences in personal task management (the first publication listed below). That paper was written collaboratively in the context of a course project. Graduate students Diane Tam, Shathel Haddad, and I collaborated on all aspects of the study. I took a lead role in the research design and data analysis, and I was the lead author of the paper. The chapter also includes new material from two additional studies; I was the only student involved in the research for those two studies.

All chapters have been published in peer-reviewed journals or conference proceedings:

- Mona Haraty, Diane Tam, Shathel Haddad, Joanna McGrenere, and Charlotte Tang. 2012. Individual differences in personal task management: a field study in an academic setting. Proceedings of the 2012 Graphics Interface Conference, Canadian Information Processing Society, 35–44.
- Mona Haraty, Joanna McGrenere, and Charlotte Tang. 2015. How and Why Personal Task Management Behaviors Change over Time. Proceedings of the 41st Graphics Interface Conference, Canadian Information Processing Society, 147–154.
- Mona Haraty, Joanna McGrenere, and Charlotte Tang. 2016. How personal task management differs across individuals. International Journal of Human-Computer Studies 88: 13–37.
- Mona Haraty and Joanna McGrenere. 2016. Designing for Advanced Personalization in Personal Task Management. Proceedings of the 2016 ACM Conference on Designing Interactive Systems, ACM, 239–250.
- Mona Haraty, Joanna McGrenere, and Andrea Bunt. 2017. Online Customization Sharing Ecosystems: Components, Roles, and Motivations.
  To appear in Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgements

1 Introduction
1.1 Thesis goal and research questions
1.2 Research approach, thesis overview and contributions
1.2.1 Understanding individual differences in PTM (Chapter 2)
1.2.2 Understanding how PTM behaviors change over time (Chapter 3)
1.2.3 Designing and evaluating a personalization mechanism (Chapter 4)
1.2.4 Exploring mechanisms for sharing advanced personalizations (Chapter 5)
1.2.5 Summary of research contributions
1.3 Outline of the dissertation

2 Individual Differences in PTM
2.1 Related work
2.1.1 PTM studies
2.1.2 Individual differences in personal information management
2.2 Study One: focus group + contextual interviews
2.2.1 Methods
2.3 Findings: Study One
2.3.1 Three types of users
2.3.2 PTM behaviors
2.4 Study Two: survey questionnaire
2.4.1 Survey design
2.4.2 Respondents
2.4.3 Data analysis
2.5 Findings: Study Two
2.5.1 Tools used
2.5.2 Identifying adopters
2.5.3 Identifying make-doers and DIYers
2.5.4 Associations between the user types and individual characteristics
2.5.5 Behaviors of adopters
2.6 Discussion
2.6.1 How individual differences in PTM compared across Study One and Study Two
2.6.2 Factors associated with differences in PTM across individuals
2.6.3 Barriers to using dedicated PTM tools
2.6.4 Benefits of assessing generalizability
2.7 Limitations
2.8 Implications for design
2.9 Conclusion

3 Changes in PTM Behaviors Over Time
3.1 Related work
3.2 Methods
3.3 Findings
3.3.1 Changes in PTM behaviors
3.3.2 Contributing factors to PTM changes
3.4 Discussion and implications for design
3.5 Limitations
3.6 Conclusion

4 Design and Evaluation of a Mechanism for Advanced Personalization
4.1 Related work
4.1.1 Guidelines on designing personalizable tools
4.1.2 End user programming (EUP) approaches
4.2 Meta-designing a PTM tool
4.2.1 Establishing the building blocks of a PTM system
4.2.2 Creating new personalizations using the building blocks
4.3 Controlled user study
4.3.1 Participants
4.3.2 Tasks
4.3.3 Procedure
4.3.4 Data analysis
4.4 Findings
4.4.1 Task completion and mistakes made
4.4.2 Unexpected personalization behaviors
4.4.3 ScriPer: strengths and challenges
4.4.4 Self-disclosing mechanism: strengths and challenges
4.5 Discussion and conclusion

5 Understanding Online Personalization Sharing
5.1 Related work
5.1.1 Types of customizations
5.1.2 Motivations to customize
5.1.3 Customization sharing: benefit, roles, and medium
5.1.4 FOSS: background, roles and motivations
5.2 Methods
5.2.1 Systems investigated
5.2.2 Participants
5.2.3 Procedure
5.3 Findings
5.3.1 Customization sharing ecosystems
5.3.2 What drives the customization sharing ecosystems
5.4 Discussion and implications for design
5.5 Conclusion

6 Conclusion
6.1 Thesis contributions
6.1.1 Rich characterization of individual differences in PTM, informing the design of personalizable PTM tools
6.1.2 Design and controlled evaluation of a prototype of a personalizable PTM tool
6.1.3 Methodological approach to the creation of a personalizable tool
6.1.4 Identification and characterization of online personalization sharing ecosystems
6.2 Directions for future research
6.2.1 Further work on informing the design of personalizable tools
6.2.2 Further work on designing for authoring of advanced personalization
6.2.3 Further work on designing for sharing of advanced personalization
6.3 Concluding comments

Bibliography

Appendices
Appendix A  Interview Script (Study One in Chapter 2)
Appendix B  Survey Questions (Study Two in Chapter 2)
Appendix C  Interview Script for Survey Follow Up Study (Chapter 3)
Appendix D  Screenshots of ScriPer (Chapter 4)
Appendix E  Materials of the Controlled User Study (Chapter 4)
E.1 Pre-questionnaire
E.2 Interview script
E.3 Post-questionnaire
Appendix F  Interview Script for Sharing Personalization Study (Chapter 5)
Appendix G  Codebook for the Analysis of Changes and Reasons in Survey (Chapter 3)

List of Tables

Table 2.1 Study One (focus group and contextual interviews) participants
Table 2.2 Key features of remembering strategies used by the participants
Table 2.3 Participants' occupations in Study Two
Table 2.4 Summary of the survey results
Table 2.5 Dedicated PTM tools used by the adopters
Table 2.6 What adopters liked and disliked about their tools
Table 3.1 Participants' occupations in the survey and the follow-up interview studies
Table 3.2 Factors contributing to changes in PTM behaviors
Table 4.1 Examples of user needs from prior PTM studies
Table 4.2 Example of feature requests in RTM
Table 4.3 Categories of building blocks in a PTM system
Table 4.4 Tasks used in the user study
Table 4.5 Correct personalization for each personalization task
Table 5.1 Systems investigated in the study
Table 5.2 Summary of the ecosystems in terms of their components

List of Figures

Figure 2.1 Three types of users
Figure 2.2 Examples of how participants used and personalized general tools
Figure 2.3 Changes in PTM behaviors
Figure 2.4 The survey structure
Figure 2.5 Tools used by the survey participants
Figure 2.6 User types identified in manual coding with high, medium, and low confidence
Figure 2.7 Summary of user types identified among the survey respondents
Figure 2.8 Survey responses to six questions that were chosen based on Study One
Figure 2.9 Methods of becoming aware of tools' functionality among adopters
Figure 3.1 The survey question about changes in PTM
Figure 3.2 Types of changes in PTM
Figure 4.1 The prototype in personalization mode
Figure 4.2 An example of using ScriPer's guided scripting mechanism
Figure 4.3 Working with an attribute with the data type of time
Figure 4.4 A grammatically correct script
Figure 4.5 Number of participants who successfully completed each task with no mistake
Figure 4.6 Breakdown of mistakes across types and programming expertise
Figure 4.7 Steps involved in using the 'Ask me for' block to perform part of T5P

Glossary

Personalization: There is no consistent terminology for personalization/customization/adaptation/tailoring/appropriation in the literature.
We use the term personalization to refer to changes made to the UI or functionality of a system by the user, also known as user-controlled personalization (McGrenere, Baecker, & Booth, 2002). We use the terms personalization, customization, and adaptation interchangeably.

Personalizable tools: We use the term "personalizable tools" to refer to tools that put users in control of tailoring the system to their needs, by providing users with the flexibility to make their own changes to the UI or functionality of the system.

Advanced personalization: Personalization that goes beyond changing the look and feel and involves changing functionality, for example by adding or removing functions, or by changing the behavior of existing functions.

Customization sharers: Users of personalizable tools who create personalizations and share them with others.

Customization re-users: Users of personalizable tools who re-use personalizations created and shared by customization sharers.

Individual differences: We use the term individual differences to refer to the observable differences in individuals' behaviors.

Acknowledgements

I am very grateful that I had the opportunity to work on my thesis topic with an awesome advisor and great company, on the beautiful UBC campus. Many people contributed to this dissertation in their own special way. I would like to thank:

Joanna McGrenere, my advisor, first for agreeing to supervise me, and for many other things, including her great mentoring (sharing her similar experiences and being compassionate through all the hard times that I went through); her timely, helpful, and insightful feedback; her great care for the quality of work (data analyses, writing, and presentations); and for pushing me to write crisper and think more practically.
Gail Murphy, a member of my supervisory committee, first for her mentoring over a tri-mentoring season and her two great pieces of advice: never compare yourself with others regarding publications, and meet people who visit the department even if their area is different from yours; and for agreeing to be on my committee and for all her helpful feedback throughout these years.

Lisa Nathan, a member of my supervisory committee, for agreeing to be on my committee, bringing a different perspective to my thesis, her role in the framing of the thesis, and her encouraging and helpful comments throughout these years.

Carl Gutwin, Heather L. O'Brien, and Michiel van de Panne, my examining committee, for agreeing to be on my committee and for all their great questions and feedback at my defense.

Andrea Bunt, who helped with the research reported in Chapter 5, for being a great collaborator and for all her contributions to that research.

Diane Tam, Shathel Haddad, and Charlotte Tang, for their help with conducting the studies in Chapter 2 and with the data analysis in Chapters 2 and 3.

Study participants, who volunteered their time to participate in our studies, for their contributions to all my research.

Kelly Booth, for his feedback and questions on my work in the MUX meetings, as well as for the things that I learned from him indirectly through Syavash.

Cristina Conati, for the research opportunity she gave me in my first year of my PhD.

Hasti Seifi, my friend, for all her contributions to my work, all our discussions, our brainstorming sessions, and her feedback on paper drafts and practice talks.
Muxers—Kamyar Ardakani, Peter Beshai, Matthew Brehmer, Paul Bucci, Derek Cormier, Jessica Dawson, Francisco Escalona, Brian Gleeson, Izabelle Janzen, Idin Karoui, Sung-Hee Kim, Rock Leung, Vincent Levesque, Juliette Link, Karon MacLean, Narges Mahyar, Tamara Munzner, Matei Negulescu, Louise Oram, Antoine Ponsard, Oliver Schneider, Michael Sedlmair, Yasaman Sefidgar, Francesco Vitale, Kailun Zhang—for their feedback on my research, paper drafts, and practice talks.

Other friends in the department—Sarah Rastkar, Mikhail Bessmeltsev, Noushin Saeedi, Monir Hajiaghayi, Samad Kardan, Prashant Sachdeva—for our random research chats and for sharing our experiences as PhD students.

Other members of the department—Anne Condon, Laura Slender, Michele Ng, Holly Kwan—for making the department more friendly.

Vancouverite friends—Mojgan Akhgari, Saba Alimadadi, Parnian Alimi, Saeedeh Ebrahimi, Shima Gerani, Sarah Hormozi, Hamed Jahromi, Nahid Karimaghaloo, Aida Karimfazli, Saman Khani, Ario Madani, Amirhossein Mehrabian, Mehran Mir, Payam Mousavi, Mona Rahmani, Abtin Rasoulian, Maryam Saberi, Reza Shahidinejad, Fahimeh Sheikhzadeh, Ali Vakil, Behrooz Yousefzadeh, Mahshid Zeinaly—for making all these years more fun.

Other researchers outside UBC—Dan Cosley, Jennifer Turns, David Levy, Maria Håkansson, Gilly Leshed, Henry Lieberman, Tom Erickson, David Karger, Michael Terry, Loren Terveen—for our discussions on my research at conferences or during my visits to their offices.

Joe Konstan and Abi Sellen—the steering committee at the Doctoral Consortium at CHI 2013—for their helpful feedback on my research.

All the department visitors with whom I had a chance to chat about my research—Saul Greenberg, Gerhard Fischer, Patrick Baudisch, Tuomas Sandholm—for their feedback on my work.
Helen Wang, Jaime Teevan, and Shamsi Iqbal, my mentors at MSR, for giving me an internship opportunity, which made me realize that research can and should be done more efficiently, given that no research is perfect. Thanks also to Meredith Morris, whom I met at a CSCW conference, for introducing me to this MSR internship.

My parents, sister, and brother, for all their encouragement and support throughout my life.

My in-laws, for being so nice and supportive over the last 12 years.

Robin, for making me do the last bits of my research more efficiently with his arrival.

Syavash, for his support in every possible way: being so available for quick discussions and giving feedback on my work, and for all his great contributions to this thesis, including his help with the research methods, data analyses, writing, shortening papers, and presentations, and for taking care of Robin more than his fair share whenever I needed it, so that I could meet the numerous deadlines in the last year.

The Natural Sciences and Engineering Research Council of Canada (NSERC) and the Graphics, Animation and New Media Network of Centres of Excellence (GRAND NCE), for funding my research.

1 Introduction

In a nutshell, the problem addressed by this dissertation is that software tools are often unable to accommodate a range of user needs and preferences, which forces users to shape their behaviors according to the tool. This can be disempowering. Designing tools that can be personalized in a significant way to satisfy users' diverse needs and preferences is a possible solution. This dissertation takes the first steps toward assessing the viability of this solution approach by addressing the following key research question: How can we enable advanced personalization that accommodates individual differences in a given domain?
When using software tools to perform our daily activities, such as managing our information or tasks, searching the web, or getting in touch with our friends, we have needs and preferences regarding how to perform those activities. While some of our needs and preferences are similar to those of other users, some are unique and are rarely met by software tools that are designed for a mythical 'average' user. The unique nature of our needs and preferences calls for personalizable software systems. In addition, our needs and preferences change over time for various reasons, such as our desire to improve, providing another reason for designing personalizable tools: tools that allow individuals to evolve them to fit their unique needs and to support changes in their behaviors.

When technologies lack personalization facilities, users have to adapt their behaviors and thoughts, consciously or unconsciously, to accommodate the technology. The process of technology shaping human activities and thoughts—referred to as reverse-adaptation (Winner, 1978)—is considered to be disempowering (Nardi & O'Day, 1999), as it involves giving power to the technology to shape human behavior. Unlike reverse adaptation, though, personalizing a technology to fit one's specific needs is empowering, since it involves expressing one's identity (Mainwaring, Chang, and Anderson, 2004).

Fortunately, many applications provide personalization mechanisms through which users can adapt the system to better fit their unique needs and preferences. But there is a limit to the extent of personalization that is supported.
Current applications are often limited to basic personalizations such as making simple changes to the visual appearance of interface elements (e.g., changing icons or a background), customizing access to functionality (e.g., adding, removing, or re-arranging commands/buttons in a toolbar, or defining a shortcut), and modifying system behavior by choosing options from a list of predetermined alternative behaviors. More advanced personalizations, such as extending system functionality, are possible through mechanisms such as macros and add-ons, but these mechanisms have limitations. Recording a macro extends a system's functionality by encapsulating a sequence of repeated user actions that can be invoked later. But sophisticated macros that add new functionality require users to edit the code generated by the macro recorder, which requires programming skills. Tools such as web browsers enable users to extend system functionality by creating and installing add-ons. However, end users are often restricted to using pre-existing add-ons, unless they have the programming skills to develop new add-ons. One of the goals of this dissertation is to bridge the gap between simple and advanced personalization mechanisms by designing a personalization mechanism for authoring advanced personalizations without requiring the user to code (Chapter 4).

A prerequisite to designing personalizable solutions in any domain is knowledge of the types of personalizations needed in that domain. That knowledge can be gained through understanding what differences exist among individuals' behaviors and how those behaviors change over time. Little HCI research has focused on this issue. Thus, this dissertation starts with a characterization of individual differences and of changes in behaviors over time (Chapters 2 and 3) to inform the design of mechanisms for authoring advanced personalizations.
While it is critical to facilitate the authoring of advanced personalizations, many users might still prefer to take advantage of personalizations that have been created and shared by others, because of lack of ability or time (Mackay, 1990a). For example, many users of applications such as web browsers, developer tools, and text editors rely on personalizations that others have created by installing add-ons; for example, 85% of Firefox users have personalized their browser by installing add-ons (Scott, 2011). Even when playing games such as Minecraft, gamers rely heavily on personalizations, often called "mods," that other gamers have created to improve their experience. Although a variety of mechanisms are available for sharing personalizations, little is known about how these mechanisms are used, and how they either support or hinder sharing practices. We focus on this issue in Chapter 5.

1.1 Thesis goal and research questions

The overarching goal of this dissertation is to enable advanced personalizations targeted at supporting individual differences, as well as changes in individuals' behaviors over time. We chose personal task management (PTM) as the domain of this thesis because we had anecdotal evidence of rich individual differences in that domain, and because the numerous PTM apps in the marketplace suggest a diversity of PTM needs, calling for personalizable solutions¹. To achieve the goal of this dissertation, we used a multi-faceted approach by (1) understanding what types of advanced personalizations are needed in the domain of PTM through studying behavioral differences across individuals, as well as studying how individuals' behaviors change over time, (2) designing and evaluating a mechanism for authoring advanced personalizations, and (3) understanding how people share advanced personalizations to inform the design of future sharing mechanisms. The first and second steps were undertaken in the domain of personal task management.
For the third step, however, we broadened our enquiry because personalization sharing in PTM did not have the richness found in other domains. Several different mechanisms are used for sharing personalizations in tools from various domains. We chose to study those mechanisms to better understand how and why personalizations are shared online, which could inform the design of such mechanisms.

In this dissertation, we addressed the following research questions:

1. What individual differences exist in PTM?
2. How and why do individuals' PTM behaviors change over time?
3. How can we design mechanisms for authoring advanced personalizations in a PTM tool, without requiring users to code?
4. What mechanisms are used for sharing personalizations, and how do they either support or hinder sharing practices?

¹ The goal of this thesis was not to evaluate the effectiveness of individuals' PTM practices, but to understand the diversity of PTM practices across individuals to inform the design of personalizable tools.

1.2 Research approach, thesis overview and contributions

To address the above questions, we first conducted a focus group + contextual interviews² to investigate individual differences in PTM. Second, we conducted a survey with two goals: (1) to assess the generalizability of our first study to a broader population, since our first study was conducted exclusively with an academic population, and (2) to study changes in PTM behaviors by asking respondents about changes in their PTM and the reasons behind them. Third, we conducted follow-up interviews with some of the survey respondents to deepen our understanding of the PTM changes they had reported. Fourth, we used guidelines on designing personalizable tools to design a PTM system that supports advanced personalization.
Our personalizable PTM system has two key components for facilitating the authoring of advanced personalizations: a self-disclosing mechanism and a guided scripting mechanism. To investigate the strengths and challenges of these two components, we conducted a controlled user study with people with various programming backgrounds. Finally, to understand the mechanisms for sharing personalizations, we conducted interviews with personalization sharers of four diverse systems. Below, we briefly describe each of the above steps to provide an overview of the thesis.

² We use the unconventional notation of '+' to indicate that both the focus group and the contextual interviews are part of one study, which we refer to as Study One.

1.2.1 Understanding individual differences in PTM (Chapter 2)

There has been previous research on how people manage their tasks. For example, Bellotti et al. (2004) studied how busy professionals manage their tasks. Our work extends the prior PTM work through its focus on understanding the similarities and differences in individuals' PTM behaviors, with the longer-term goal of designing personalizable PTM systems. To investigate individual differences in PTM, we first conducted a focus group + contextual interviews with 19 participants in an academic setting. Then, we assessed the generalizability of our initial study by conducting a survey questionnaire with 178 respondents from a broader population.

Contributions: Based on the focus group + contextual interviews study, we summarized the differences and similarities across individuals by categorizing the participants into three categories: DIYers, make-doers, and adopters. In our broader survey study, we found that many of the respondents did not fit neatly into one of these categories; rather, they demonstrated tendencies of varying strength toward adopting, make-doing, and DIYing for their PTM.
This was reflected in how they recorded and remembered their tasks, and in if/how they maintained task lists. Based on this, we recommend that PTM tools have the capacity to accommodate the varying strengths of those tendencies: they should be personalizable, such that people with a DIY desire can personalize their tool when they need to, and they should be relatively easy to use and integrate well with other systems in use, to satisfy make-do tendencies. We also identified three groups of PTM behaviors (recording tasks, remembering tasks, and maintaining and organizing task lists) and described how they differed across individuals. Finally, we offer implications for the design of personalizable PTM tools, which can support differences in PTM behaviors across individuals and contexts.

1.2.2 Understanding how PTM behaviors change over time (Chapter 3)

A second goal of the survey study (introduced above) was to investigate changes that occur in an individual's PTM behavior over time, which, to our knowledge, have not been explored. Understanding how and why PTM behaviors change can inform the design of personalizable PTM tools that support such changes. In our survey study, we asked respondents about the changes they had made in their PTM behaviors and the reasons behind those changes. To deepen our understanding of the PTM changes reported in the survey, and to see whether the respondents had made any changes to their PTM since their participation in the survey, we conducted follow-up interviews with 12 of the survey respondents about a year later.

Contributions: We characterized three different types of changes that occurred in individuals' PTM behaviors over time: strategy changes (changes in how the user approaches PTM), within-tool changes made to a single tool (personalizing a tool), and tool-set changes (adding or removing a tool from the suite of tools used by the user).
We also characterized the factors that precipitated these changes: the user's changing needs, dissatisfaction caused by unmet needs, and opportunities revealing unnoticed needs. We suggest ways for the design of personalizable PTM tools to utilize these contributing factors to better support changes in PTM behaviors over time.

1.2.3 Designing and evaluating a personalization mechanism (Chapter 4)

Many applications provide personalization mechanisms through which users can make changes to adapt a system to better fit their needs or preferences. However, advanced personalizations are often available only to programmers. To achieve our goal of designing tools that support advanced personalization, we built on ideas from end-user programming (EUP) approaches such as controlled natural languages and sloppy programming (Little et al., 2010), and followed guidelines on designing personalizable tools, such as meta-design guidelines (Fischer & Scharff, 2000).

Contributions: We designed a prototype of a personalizable PTM tool with two key components for enabling the authoring of advanced personalizations: 1) a self-disclosing mechanism that reveals system functionality to users and thus makes it easier for them to understand what can be changed or added to the system, and 2) a guided scripting personalization mechanism (ScriPer) that enables users to construct new features by combining building blocks that are familiar to them. To investigate the strengths and challenges of these two components, we conducted a controlled user study with 24 participants. We showed that participants with varied programming backgrounds were all able to use ScriPer to perform advanced personalization tasks, except in two out of 96 trials. A secondary contribution is our design process, which provides additional insights into how to employ the theoretical guidelines on designing personalizable tools.
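To give a flavor of the building-block style of personalization that guided scripting enables, the sketch below shows how a user-authored rule could extend a task list's behavior without writing free-form code. All names here (Task, PersonalizableTaskList, add_rule) are purely illustrative assumptions and do not reflect ScriPer's actual interface.

```python
from dataclasses import dataclass

# Hypothetical building blocks; names are illustrative only,
# not ScriPer's actual design.

@dataclass
class Task:
    title: str
    priority: str = "normal"

class PersonalizableTaskList:
    def __init__(self):
        self.tasks = []
        self.rules = []  # user-authored (condition, action) pairs

    def add_rule(self, condition, action):
        """Author a personalization: run `action` on new tasks matching `condition`."""
        self.rules.append((condition, action))

    def add_task(self, task):
        for condition, action in self.rules:
            if condition(task):
                action(task)
        self.tasks.append(task)

todo = PersonalizableTaskList()
# A user-composed rule: tasks mentioning "deadline" become high priority.
todo.add_rule(lambda t: "deadline" in t.title.lower(),
              lambda t: setattr(t, "priority", "high"))
todo.add_task(Task("Paper deadline on Friday"))
todo.add_task(Task("Water the plants"))
print([t.priority for t in todo.tasks])  # → ['high', 'normal']
```

The point of the sketch is that the user selects and combines familiar pieces (a condition, an action) rather than editing generated code, which is what distinguishes this style from macro recorders.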
1.2.4 Exploring mechanisms for sharing advanced personalizations (Chapter 5)

Research on personalization sharing has focused predominantly on sharing within organizational boundaries (Draxler, Stevens, Stein, Boden, & Randall, 2012; Kahler, 2001; Murphy-Hill & Murphy, 2011). It remains unclear how personalization sharing practices translate from within-organization settings to online settings, where sharers come from diverse contexts and may have other motivations to share. Little empirical research has investigated online environments and mechanisms in terms of their conduciveness to personalization sharing; we are aware only of research that has examined online customization sharing in the context of remixing behaviors in maker communities (Oehlberg, Willett, & Mackay, 2015). To understand and inform the design of mechanisms for sharing personalizations, we conducted interviews with 20 personalization sharers of four diverse systems: Sublime Text, Minecraft, Alfred, and IFTTT; the first two are personalizable systems and the others are personalizing tools used for creating personalizations.

Contributions: Through documenting current personalization sharing practices in four diverse systems, we revealed the concept of sharing ecosystems. These ecosystems are often complex, consisting of various components that support the different aspects of sharing and re-using personalizations: hosting them, discussing them, finding them, installing them, and keeping them updated. We encapsulate the roles that bring these ecosystems to life and show that they have some, but not full, overlap with the roles in free and open source software (FOSS) projects. Our findings also shed light on motivations to create and share personalizations and the degree to which these motivations overlap with those that drive contributions in FOSS.
Collectively, our findings point to implications for the design of personalization sharing ecosystems and highlight important design tradeoffs.

1.2.5 Summary of research contributions

To summarize, this thesis provides the following contributions:

1. Rich characterization of individual differences in PTM, informing the design of personalizable PTM tools
   a) Characterization of PTM behaviors that differ across individuals, revealing the types of personalizations needed in a personalizable PTM tool
   b) Identification of three types of users (adopters, DIYers, and make-doers) that encapsulate the individual differences in PTM along two identified dimensions – personalization and type of tool used – with an academic population
   c) Assessment of the generalizability of the three user types to a broader population, revealing that users can exhibit tendencies towards all three types rather than matching one type only
   d) Characterization of changes in PTM behaviors over time and the factors that contribute to those changes
2. Design and controlled evaluation of a prototype of a personalizable PTM tool
   a) Design of ScriPer for authoring advanced personalizations
   b) Empirical evidence of the strengths and weaknesses of ScriPer and its accessibility to non-programmers
   c) Design process of a personalizable PTM tool
3. Methodological approach to the creation of a personalizable tool
4. Identification and characterization of personalization sharing ecosystems
   a) Identification of ecosystem components that support different aspects of personalization sharing and re-using
   b) Identification of roles that people play within these ecosystems and their motivations

We revisit and comment further on these contributions at the end of this thesis, in Chapter 6.

1.3 Outline of the dissertation

Chapter 2 describes our initial study of individual differences in PTM, as well as the survey study that was conducted to assess the generalizability of our first study.
Chapter 3 describes a secondary part of the survey, which we used to investigate changes in PTM, as well as a follow-up interview study. Chapters 2 and 3 together provide insights into the types of advanced personalizations that we enable in Chapter 4. In Chapter 4, we describe the design and evaluation of a mechanism for authoring advanced personalizations. Chapter 5 reports an interview study with people who share their personalizations in four personalizable systems. In Chapter 6, we elaborate on the thesis contributions and discuss areas of future research. Work related to each chapter is embedded in the individual chapters. Finally, we include the materials used in our research as appendices.

2 Individual Differences in PTM

Many people are managing an ever-increasing number of tasks—loosely defined as "to-dos." Based on our own casual observations, we noticed a large variety of ways in which people manage their tasks, and that many people still rely on general-purpose tools, such as a text file, for tracking their tasks. Such observations were formally reported by Blandford and Green (2001); they highlighted that the majority of people adopt general-purpose tools, such as paper scraps and mobile phones, for remembering their to-dos. A plethora of electronic personal task management (e-PTM) systems have been developed since then, such as OmniFocus³, Things⁴, Remember The Milk⁵, and Google Tasks⁶, and people seem to have very different opinions as to which one is the best PTM application. Four different variations of the question "what is the best task management application?" on Quora⁷ revealed 45 responses that collectively identified 33 different e-PTM applications⁸.

³ https://www.omnigroup.com/omnifocus/
⁴ https://culturedcode.com/things/
⁵ https://www.rememberthemilk.com/
⁶ https://www.gmail.com/mail/help/tasks/
⁷ Quora.com (an online question and answer system)
As one respondent put it: "The one thing that this thread illustrates is that there is no "best" task manager. There are hundreds if not thousands of options, but no clear market leader. All solutions have high abandonment rates, and ironically pen and paper is voted the best task manager on Lifehacker each year." (Franklin, 2011)

Taken together, the large number and diversity of e-PTM applications, the fragmented e-PTM market, and the fact that many people seem to use general tools to manage their tasks suggest a high diversity of PTM needs and behaviors across individuals.

Although there has been previous research on how people manage their tasks (Bellotti et al., 2004; Bellotti, Ducheneaut, Howard, & Smith, 2003), little to no research attention has been paid to differences in PTM behaviors across individuals. Understanding such individual differences should provide valuable insight into the types of advanced personalizations that we would like to enable in our design of a personalization mechanism in Chapter 4. Our goal with that design is to better support individual differences so that people can adapt the tool to their own way of managing tasks, instead of adapting themselves to the tool's approach, or abandoning a tool that was initially chosen to address some needs.

⁸ In February 2014, we searched for "task management" on Quora, picked the top four existing questions related to "what is the best task management tool?", and looked at their 45 responses combined.

To understand differences across individuals' PTM behaviors, we ran two studies: Study One was a focus group + contextual interviews, and Study Two was a survey. For Study One, we opted to focus on a relatively homogeneous population, namely students and faculty in an academic setting.
While this sample was chosen in part for convenience, it also helped us focus on sources of individual differences beyond the well-known sources, namely task type and occupation. We were initially concerned that "academics" might be too homogeneous in their PTM behaviors, so we first conducted a focus group with seven participants, which, to our surprise, revealed interesting variations in PTM behaviors. In close succession, we then conducted contextual interviews with 12 participants. About a year later, for Study Two, we broadened our sample by conducting an online survey with 178 people of diverse occupations to find out the extent to which the results of Study One generalize to a broader population.

2.1 Related work

We first review prior PTM studies related to tool use. We then discuss the relationship between PTM and personal information management (PIM) and review the PIM studies that have reported individual differences in PIM.

2.1.1 PTM studies

Task management has been studied from several perspectives. We categorize PTM studies into two groups: 1) studies of tool use and practices, i.e., what tools and practices people use to manage their tasks and how they use them; and 2) studies of multitasking, task switching, and interruptions, i.e., how people perform their tasks, including how they multitask, switch tasks, handle interruptions, and resume an interrupted task. Our work belongs to the first group and adds to it by characterizing individual differences in PTM. Thus, we review that group of work below.

PTM studies of tool use fall into two categories: studies investigating the use of a given tool, such as a calendar or email, for PTM, and studies investigating how people manage their tasks in general. Payne (1993) investigated the use of calendars and noted the mismatch between users' models of time management and the time management model imposed by calendars and diaries.
He offered some design guidelines for e-calendars, many of which have been adopted in existing e-calendars such as Google Calendar; an example is supporting user orientation by making today or the current week perceptually distinct. A large body of work has investigated the use of email for task management (Bellotti et al., 2003; Ducheneaut & Bellotti, 2001; Gwizdka & Chignell, 2004; Krämer, 2010; Mackay, 1988; Siu, Iverson, & Tang, 2006; Whittaker, Bellotti, & Gwizdka, 2006). These studies have identified a variety of problems with using email for PTM. As a result, several solutions, such as TaskMaster (Bellotti, Ducheneaut, Howard, & Smith, 2002), TeleNotes (Whittaker, Swanson, Kucan, & Sidner, 1997), and ContactMap (Whittaker et al., 2004), have been developed to enhance email support for managing tasks that involve other people (Whittaker, 2005). In a similar attempt, Google Inbox was designed as an email client centered on task management, to the extent that the action of archiving has been replaced with the action of marking an email as "done." Although these systems have been successful in addressing the problems that they were targeting—except for Inbox, for which there is no evidence yet on its success—individual differences do not, based on their design descriptions, seem to have been taken into account in their design.

PTM studies have, however, characterized different types of tasks that give rise to some differences in individuals' PTM. For example, studies of task management in email identified three types of tasks that people manage in their email (Bellotti, Ducheneaut, Howard, Smith, & Grinter, 2005): rapid-response tasks that take a few seconds to respond to, extended-response tasks that take longer to complete, and interdependent tasks that depend on the actions of others to be completed. People employ different strategies to manage tasks in each of these categories.
When discussing our findings, we reflect on how the different types of tasks identified in the literature explain some of the differences in PTM behaviors that we observed across individuals.

Compared to empirical studies of how people use a single tool, such as email, for PTM, relatively few studies have examined how individuals manage their tasks more generally. One example is Blandford and Green's (2001) study of how paper-based and electronic PTM tools are used together. They concluded that there is no perfect PTM tool and that, instead of designing e-PTM tools that replace paper-based tools, the weaknesses and strengths of different tools should be understood and seamless integration of the tools should be supported. Another example is Bellotti et al.'s (2004) study of how busy professionals and managers manage their tasks. The focus of their study was to discover the types of PTM activities that a PTM tool should support, with little emphasis on understanding how each PTM activity (e.g., recording tasks) might differ across individuals. Leshed and Sengers (2011) investigated the relationship between the experience of busyness and the use of PTM tools. They found that people use a single productivity tool, such as a calendar book, for different purposes, such as planning the upcoming week, logging activities, making to-do lists, and writing down anything that comes to mind. They suggest personalization for the design of productivity tools, for example, by keeping the system open to multiple interpretations of how it can be used. However, the forms of personalization that should be provided in order to support appropriation for various purposes remain unclear.
2.1.2 Individual differences in personal information management

Personal information management (PIM) refers to the practices of locating, creating, storing, organizing, maintaining, retrieving, using, and distributing information for various everyday purposes, such as later retrieval, reminding, and collecting, that support our needs and tasks (Jones, 2007). PIM and PTM are related to each other in two different ways: 1) they have been considered "the two sides of the same coin" (Jones, 2007), because people organize some of their information according to its anticipated use in their tasks/projects (Kwasnik, 1989); and 2) PIM can be considered a superset of PTM, because a to-do/task such as "review paper by next Monday" is a form of information that needs to be stored, organized, and retrieved, similar to other forms of information. Perhaps most relevant to PTM among PIM studies are studies of project management that have investigated how people organize information items related to their projects as part of their project management practices (e.g., Bergman, Beyth-Marom, & Nachmias, 2006; Jones, Bruce, Foxley, & Munat, 2006; Jones, Munat, Bruce, & Foxley, 2005). The focus of our studies differs from that of the project management studies in that we focus on the management part of PTM, such as making lists, as opposed to activities related to performing tasks, such as organizing the information items (e.g., documents) needed to execute a task/project. Below, we review the PIM studies that have reported differences in PIM across individuals and thus have a focus similar to that of our studies.

PIM studies have identified different groups of users with respect to their PIM behaviors. In a study of office workers, Malone (1983) identified two strategies, filing and piling, in office management.
This study was followed by Mackay's (1988) study of how office workers used email to manage their daily work, in which she found that email provided a mechanism for task management activities: some delegated tasks (requesters), and some received their tasks via email (performers); performers kept working information in their inbox as a reminder of the tasks that needed to be done. Whittaker and Sidner (1996) found three strategies for managing email: frequent filers, spring cleaners, and no-filers. Similarly, inspired by Malone's filers and pilers, Van Kleek et al. (2011) found individual differences in the use of a note-taking tool (List-it). By analyzing their participants' behaviors regarding note creation, editing, and deletion over time, they found four distinct usage patterns reflecting individual differences in using a note-taking tool. The four groups of users were minimalists, periodic sweepers, revisers, and packrats (a term used by some of the participants in Marshall and Bly's (2005) study when referring to their behaviors in handling encountered information while reading). Jones, Dumais, and Bruce (2002) studied how people keep and organize web information for re-use, and they found great diversity across individuals' keeping methods: sending email to self or others, printing out the web page, saving the web page as a file, pasting URLs into a document, adding a hyperlink to a personal web site, bookmarking, writing down notes on paper, copying to a "Links" toolbar, and creating a note in Outlook.
They explained the differences in keeping behavior between people by analyzing the functions that each keeping method provides: keeping methods differ in the functions they provide (portability of information, accessibility from different devices, persistence of information, preservation of information in its current state, currency of information, context, reminding, ease of integration, ease of maintenance, and communication and information sharing), and people differ in the functions they need according to their jobs and tasks. Thus, the differences in keeping methods between people were attributed to differences in people's jobs and tasks.

2.2 Study One: focus group + contextual interviews

2.2.1 Methods

We investigated differences in PTM behaviors across individuals in an academic setting with a focus group and contextual interviews. In both the focus group and the contextual interviews, we used convenience sampling.

Of special note, we referred to tasks as "to-dos" or "things that we need to do" in both written and verbal communications with participants. We intentionally did not impose any particular meaning of task—other than the above—because people vary in how they distinguish tasks from projects or even from goals. This approach is not uncommon in prior PTM work. For example, Bellotti et al. (2004) used the term "to-do" to refer to task/project without distinguishing between the two.

2.2.1.1 Focus group: participants and procedure

The purpose of the focus group was threefold: to ensure sufficient variation in PTM behaviors among individuals in our population, to broaden our understanding of PTM behaviors and practices, and to help refine our methods to be used in the contextual interviews. Five graduate students (one female) and two post-docs—all from the Computer Science Department at the University of British Columbia—attended the focus group.
The goal was to allow the participants to talk about their task management practices without requiring them to answer specific questions. To seed the discussion, at the beginning of the session, two broad questions were posed to the participants about their everyday task management: How do you manage your tasks? Do you consider yourself organized in regard to managing your everyday tasks? A few more specific questions were shown on a slide during the session to help the participants talk about their task management. These questions addressed the tools used for PTM and what was liked/disliked about those tools. Each participant took a turn talking about how s/he managed her/his tasks, the tools used, and the challenges faced. The session was audio-recorded and transcribed. The substantial variation found in the participants' PTM behaviors gave us confidence to proceed to the contextual interviews with participants from the academic population, namely grad students, post-docs, and professors.

2.2.1.2 Contextual interviews: participants and procedure

Twelve volunteers (six females), all from the University of British Columbia, participated in our contextual interviews: 10 from Computer Science, one from Mechanical Engineering, and one from Medicine. All were graduate students except for one professor and one post-doc. Data were collected through semi-structured contextual interviews. These interviews were conducted in the places where participants typically engage in their PTM activities, such as their offices, or, in most cases, in an undisturbed space on campus (given that they had their PTM tools readily available, e.g., on their laptops). One participant was interviewed at his residence in the same city. The contextual interviews took place over a period of two weeks. We first asked the participants about their education and work background, followed by more general questions about their organizational styles with regard to how they handled their day-to-day tasks.
The goal was to find out how people felt about their PTM. Next, we asked participants to show us their PTM tools, to talk about how they used them, and to describe what they liked and/or disliked about them, and why. A critical incident technique (Flanagan, 1954) was employed to solicit stories about the tasks that they had recorded in their tools. We also asked them about their previous practices so as to capture the evolution of their PTM behaviors. Appendix A includes the interview script that was used to guide the semi-structured interviews. Each interview lasted between 30 minutes and one hour, depending on the number of tools the participant showed us and his/her orientation to detail. All the interviews were audio-recorded and transcribed for data analysis.

2.2.1.3 Data analysis

We used a variant of grounded theory for data analysis (Corbin & Strauss, 2008). A central tenet of this approach is that "all is data," meaning that whatever the source of the data (e.g., informal interviews, conversations with friends), it should be included in the analysis. Therefore, all 19 participants from the focus group and the contextual interviews were included together in one comprehensive analysis. Three coders each independently coded two of the transcripts. The codes for the two transcripts were compared and discussed to establish a consolidated list of codes. Using this list, a third transcript was coded by two of the coders, who then proceeded to code the remaining transcripts. The inter-coder reliability was calculated for the third transcript using Cohen's Kappa index. With a minimum kappa of 0.79, these two coders continued coding and memoing (Corbin & Strauss, 2008) the rest of the transcripts, from which we proceeded through axial coding, the process of relating codes to each other, to establish themes and generalizations.
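Inter-coder agreement of the kind reported above can be computed with Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The sketch below is illustrative only: the code labels and ratings are hypothetical, not drawn from our transcripts.

```python
# Cohen's kappa for two coders' nominal labels: (p_o - p_e) / (1 - p_e),
# where p_o is observed agreement and p_e is chance agreement.
def cohen_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal label proportions,
    # summed over all categories either coder used.
    categories = set(coder_a) | set(coder_b)
    p_e = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
              for c in categories)
    if p_e == 1.0:  # degenerate case: both coders used a single identical label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned to four transcript excerpts by two coders.
a = ["reminder", "reminder", "list", "list"]
b = ["reminder", "reminder", "list", "reminder"]
print(cohen_kappa(a, b))  # 0.5
```

A kappa of 0.79, as obtained for our third transcript, indicates substantial agreement under commonly used interpretive scales.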
After several rounds of axial coding and identifying the concepts that best described the differences across participants, three types of users emerged, after which we went back to the data to check whether we could describe all our participants in terms of those user types. This process of reanalyzing the data using the concepts that emerged in the analysis is a variation of theoretical sampling in grounded theory as described by Corbin & Strauss (2008), although it does not involve interleaving data gathering and analysis. Given the similarities and overlaps between different qualitative research methods, one could also label our approach thematic analysis (Braun & Clarke, 2006).

2.3 Findings: Study One

Here, we report the findings of Study One: the three types of users and the PTM behaviors that differed across them.

2.3.1 Three types of users

We asked participants what they used for managing their tasks and how they used those tools. Participants often used a tool-set—multiple tools in combination to satisfy their PTM needs (Table 2.1). The tools used for PTM ranged from highly general tools, both traditional (e.g. paper & pen) and electronic (e.g. Word documents), to tools that provide some explicit PTM support (e.g. email, calendar), to tools that are dedicated to PTM (e.g. OmniFocus, RTM). Among these tools, email and calendar were commonly used for PTM by most of our participants. Further, some participants used one or two primary tools, in which they did most of their PTM, while other participants did not identify any primary PTM tool. On a different dimension, we found that participants who were using general tools for PTM (e.g. paper & pen) differed from one another with respect to their investment in personalizing those tools. Designing a PTM tool out of a general-purpose tool, such as a Word document or a text file, is what we refer to as personalization here. These tools were used in ways that were not specifically intended by their designers.
As an example, P6 and P18 both use a paper notepad/notebook for their PTM, but to make a weekly/monthly list, P6 divides her paper into four columns and puts her tasks in one of those columns depending on the type of task, whereas P18 uses paper to simply jot down her tasks in a haphazard manner.

Table 2.1 Study One (focus group and contextual interviews) participants, the tools they used for PTM, and their identified user type. Focus group participants are denoted by *; participants' primary tools are in bold (N=19).

Participant | Degree | Gender | Tools used for PTM | Identified user type
P1 | Ugrad | F | Paper planner | DIYer
P2 | Ugrad | M | Pieces of paper, Notepad, iCal, email | DIYer
P3 | Ugrad | M | Paper, email, alarm | DIYer
P4 | PhD | F | Word document, Notebook, Google Calendar, cellphone, alarm | DIYer
P5 | PhD | M | OneNote, Microsoft Outlook | DIYer
P6 | PhD | F | Paper | DIYer
P7 | Faculty | F | Word document, Google Calendar | DIYer
P8 | MD | M | Microsoft Excel, Word, Google Calendar and Tasks, iPhone calendar | DIYer
*P9 | PostDoc | M | Paper, calendars | DIYer
*P10 | MSc | M | Wiki, Paper notebook, Mendeley | DIYer
*P11 | MSc | F | Word document, Paper notebook, sticky notes | DIYer
P12 | MSc | M | AbstractSpoon, Email (Gmail), Google Calendar, Smartphone (Calendar) | Adopter
*P13 | PostDoc | M | Things (on Mac), Google Calendar | Adopter
*P14 | MSc | M | Google Tasks, Email, Google Calendar, Whiteboard, wiki | Adopter
*P15 | MSc | M | OmniFocus (on Mac & iPhone), Email for collaborative PTM | Adopter
P16 | Ugrad | M | Paper notepad, iPod Touch (Calendar, Notepad, ListPro) | Make-doer
P17 | MSc | F | Email, Google Calendar | Make-doer
P18 | PostDoc | F | Calendar (Google, iPhone), Post-it notes, notebook | Make-doer
*P19 | PhD | M | Google Calendar, Firefox Tabs, text files | Make-doer

Given the similarities and differences we found among the participants, three mutually exclusive types of users emerged based on two criteria: (1) whether or not their primary PTM tool was a dedicated e-PTM tool, and (2) whether or not they personalized their primary non-dedicated tool.
The three types of users, based on their primary tool, are:

- Adopters: who use a dedicated e-PTM tool.
- Do-it-yourselfers (DIYers): who use and personalize a general tool.
- Make-doers: who use a general tool, but without personalizing it.

Each participant fell cleanly into exactly one of these categories; thus we were confident that these three mutually exclusive categories explain the data of Study One well. The majority of the participants were DIYers (11/19), with the remainder divided evenly between adopters (four) and make-doers (four). Figure 2.1 illustrates these three groups of users based on the two criteria, and Table 2.1 shows the participants, their tools, and the types we identified.

Figure 2.1 Three types of users

2.3.1.1 Adopters

The primary tools of adopters were dedicated e-PTM tools (e.g., OmniFocus), which were limited in terms of supporting personalization. Adopters differed with respect to the level of their investment in choosing their tools. While P12 chose his PTM tool by trying a number of different PTM applications in a single session, P14, on the other hand, had tried approximately twenty PTM applications over the course of five years before finally deciding to use Google Tasks. When asked what he disliked about all these tools, he pointed out that they were not integrated with the other tools that he had been using for PTM (e.g. email, calendar), and he disliked their inflexibility, which had forced him to adapt his PTM behavior to what the tool required. This is a clear example of technology shaping the user's behaviour. Three adopters reported that they had tried e-PTM tools based on an approach to task management called GTD (Getting Things Done)9 (Allen, 2001); however, only one continued to use OmniFocus, a GTD-based tool.
9 A number of personal task/time management approaches, as described in Stephen Covey's "The seven habits of highly effective people" (Covey & Emmerling, 1991), David Allen's "Getting things done" (GTD) (2001), and Mark Forster's "Do it tomorrow and other secrets of time management" (2006), have provided people with strategies to manage their time and tasks. As mentioned at the outset of this chapter, a number of PTM tools are available on the market, some of which are designed based on the aforementioned methodologies. According to our participants, these tools often require their users to adapt their behaviors to the method they support, and as a result our participants—except for one—abandoned their tools instead of adopting the prescribed method.

2.3.1.2 Do-it-yourselfers (DIYers)

The primary tools of DIYers were general-purpose tools, either paper-based, such as traditional pen & paper and paper planners, or electronic, such as Word and Notepad documents. They designed their own PTM systems by personalizing these tools based on their own personal rules for recording and remembering their tasks as well as maintaining and organizing their task lists. The factors that had led them to design their own system instead of adopting an existing dedicated PTM tool included the lack of a clear market leader among PTM systems (and thus the time required to find a good one), the mismatch between their needs and the existing PTM systems known to them, and the steep learning curve of PTM systems. Five out of 11 DIYers settled as DIYers after trying to adopt a number of dedicated PTM applications.
For example, P7 said about her PTM system, which was a Word document illustrated in Figure 2.2-a: "this is the best system that I've had to date, after trying a number of different systems [including Palm Desktop, and something based on Stephen Covey's book] […] it works for me." Similarly, P1, who used a paper planner, said: "[…] on my phone, I tried a whole bunch of to-do list apps, there was like … Wunderlist: that one has a desktop app too so I tried both of them. But, I dunno …'cause there was a whole bunch of to-do list apps, and none of them is quite what I need. And it's kind of confusing to have to relearn stuff, so I was just like "forget it!" Paper is so easy! 'cause I can just configure it to however I want to do it." DIYers were more likely to cherry-pick strategies from methodologies such as GTD for their PTM instead of adopting them as a whole. P9 described his experience with GTD: "I am using some of the strategies in GTD. But I am not committed to this methodology, since it's too much overhead for me […] GTD was so cool and I tried to do the same and be so organized but it didn't work for me. It was over-organizing everything […]." Being aware of their characteristics and PTM needs, DIYers designed their own systems in such a way that they met their needs. P1, a DIYer, reflected: "I actually am not a very organized person by nature, so I need like all this massive complicated stuff [referring to her system] to remember." P1 designed her own PTM system using a paper planner and Post-it notes (Figure 2.2-c). She essentially personalized her paper planner. For example, due to the limited space in her paper planner for each day, she added Post-it notes to the relevant days for additional tasks that did not fit in the space provided by the planner. To overcome the added effort of manually entering recurring tasks every week or month, she put these tasks on a Post-it note so that they could be easily moved to another week or month.
Also, since paper planners naturally force every task to be associated with a date, she used Post-it notes for time-independent tasks, so that she could also easily move them around without having to rewrite them.

Because DIYers personalized their tools, they were also capable of altering those tools to accommodate changes when their PTM needs changed over time. We found that external factors such as a change in one's job or starting to use a second monitor could alter PTM needs. P7, for example, transitioned gradually from a manual weekly to-do list to creating and printing lists from a word processor, because her lists changed so frequently that manual edits became too time-consuming. Although she made a digital list, she kept on printing it until she got a second monitor: "so without the screen, I wanted my to-do list to sit here 'cause I wanted to be able to say: what should I be doing now? What am I supposed to be working on now?" Once she had her second monitor, she stopped printing the list because she could view it while working on other things on her primary monitor (Figure 2.3).

Figure 2.2 Examples of how participants used and personalized general tools. (a) P7's "Matrix To-do" list in a Word document, comprised of four columns: (I) personal tasks, high-priority ones highlighted in green, (V) work-related tasks, high-priority ones in yellow, (II+IV) low- and medium-priority work-related tasks; (b) P2's task list on paper; (c) P1's paper planner; (d) Google Calendar.

2.3.1.3 Make-doers

Make-doers did not use any dedicated PTM tools. The tools they did use were similar to DIYers'; they used email, calendar, and other general tools, such as paper & pen and text files. However, unlike DIYers, they used such tools without personalizing or making any changes to them, utilizing only the minimal support that general-purpose tools provide for PTM.
This explains the small variation among the make-doers' PTM behaviors we observed, as compared to the relatively large variation among those of DIYers. For example, when using electronic calendars, which provide a reminding mechanism, none of the make-doers had even changed the default settings of the reminders for any of their tasks. Despite this, two out of four complained that the default reminder was set to only ten minutes ahead of a scheduled task.

We acknowledge that the line between DIYers and make-doers is somewhat grey. By comparing P6 (DIYer) and P18 (make-doer), who both use a paper notepad/notebook for their PTM, we illustrate the key differences here. When P6, a DIYer, wants to make a weekly/monthly list, she divides her paper into four columns and puts her tasks in one of those columns depending on the type of task. Creating four columns on a blank paper is what we refer to as personalizing that piece of paper as her PTM tool. In the case of P6, the personalized piece of paper illustrates a systematic way of making lists. By contrast, P18, a make-doer, uses paper to simply jot down her tasks in a haphazard manner; she does not record her tasks systematically nor use any specific format. Thus, while both DIYers and make-doers adapt general-purpose tools (such as paper) for their PTM behaviors such as making to-do lists, only DIYers personalize those tools to the extent that they themselves consider the devised tool their PTM tool.

Figure 2.3 Changes in PTM behaviors

Two out of the four make-doers in our study settled as make-doers after trying Google Tasks, a dedicated PTM system, which they had both stopped using after a while. When asked for a reason, P19, who had tried to use Google Tasks only because it was integrated into his email, said: "part of it was that it wasn't easy to have a clean integration with calendar...
another part was that it was in my gmail and at some point I didn't want it to be always visible because of visual clutter...and then I totally forgot about the tasks that were there. I used Google Tasks for the tasks that did not have a specific time; most of my urgent tasks were in the calendar. But, ultimately, I wanted to have all tasks in both [Google Tasks and Calendar] in some form." As a meta note, we identified three types of users and grouped our participants by taking a snapshot of their behaviors at the time of the study. While not the focus of our study, we did collect some information about how their behaviors had changed over time, which showed that some people had transitioned from one type to another. As described earlier, several (7/19) participants made a transition from being an adopter to being a DIYer (5/7) or a make-doer (2/7).

2.3.2 PTM behaviors

We observed a set of common PTM behaviors among our participants, which we categorized into three groups: 1) recording tasks, 2) remembering tasks, and 3) maintaining and organizing task lists. These groups of PTM behaviors match well with the three groups of PIM activities suggested by Jones: keeping activities, (re)finding activities, and meta-level activities, which include the maintenance and organization of personal information collections. However, our categories of PTM behaviors provide a classification that can better address specific aspects of PTM. To gain insight into the differences and similarities in individuals' PTM behaviors, we examined the factors influencing their behaviors and categorized them into three types: environmental (e.g. job, PTM behaviors of friends), tool-related (e.g. features and affordances of a tool), and personal factors (e.g. being optimistic, reliance on prospective memory). We found that many of the similarities and differences in individuals' behaviors could be explained by the corresponding similarities and differences in the factors influencing those behaviors.
In addition to these primary factors, there are secondary factors derived from them. For example, we consider the availability of a tool a secondary factor that derives from both tool-related and environmental factors. The PTM behaviors, their variation among individuals, and the factors influencing the behaviors are described in the following sections, organized according to the three types of PTM behaviors.

2.3.2.1 Recording tasks

Participants reported a variety of task categories that they recorded in their tools: administrative tasks, project deliverables, scheduled events, things to read, shopping lists, "things, events, people to research at a later date," random notes to see when looking at the task list (not associated with any task), packing lists, agendas for meetings, and phone calls. We found a great variety of methods for recording tasks: making lists, keeping web pages or documents open, taking pictures, flagging email messages or marking them unread, and writing Post-it notes. These behaviors were influenced by the environment/tool in which the task was created. We summarize the behaviors relevant to recording tasks into three groups: making task lists, distributing tasks across multiple tools, and estimating task completion time. We discuss the variation of each among individuals.

Making task lists

Making task lists was a prevalent PTM behavior among adopters and DIYers. Dedicated PTM tools imposed the format of adopters' task lists, giving them limited formatting flexibility. However, whenever their tools (e.g., a piece of blank paper or a plain Word document) allowed, DIYers exhibited a variety of uses of space when making their task lists. Two common examples were dividing a list into multiple columns, each representing a different category of tasks, and placing high-priority items at the top and low-priority ones at the bottom.
Although making task lists was not a dominant behavior among make-doers, when they did make lists, they chose the most readily available tool, typically paper, a digital document, or email. There were also few or no rules as to where and in what order tasks were placed in their lists. When we asked the participants how often they made to-do lists, responses varied from daily, weekly, and monthly to "whenever an overwhelming amount of details exists to remember." We found that the frequency of making lists was highly influenced by the level of busyness in a particular period, and by the medium of the tool, whether it was digital or paper-based. In our analysis, we extracted several aspects pertinent to making lists, such as the level of task details, use of color, and use of space.

>>Task details (level, reason, layout): We found that two factors affected the level of task details recorded: first, a tendency to facilitate task execution by recording the information required for accomplishing the task, and second, the possibility of forgetting. The first factor led to the adoption of a low-level (detailed) approach, where participants recorded everything relevant to their tasks. Here, participants would perform part of the task upfront by recording task details, making it easier to accomplish the task when they eventually got to it. For example, for a task like "Call John," P1 recorded John's telephone number in her PTM tool to save her from searching for the number at the time of calling. The second factor, the possibility of forgetting, related to a person's reliance on their memory. In the high-level approach, only high-level descriptions were recorded, and any associated low-level information was left to memory, or to searching if the information was outside the PTM system.
For example, P16, who called himself "lazy" with respect to writing complete words for his tasks, avoided entering any detail for his tasks simply because "he can just remember the rest."

Unlike adopters, who entered their tasks' details in the respective fields provided by the software, make-doers and DIYers were less likely to follow the structure, if any, provided by their tools. For example, P18, a make-doer who used Google Calendar for most of her work-related tasks including meetings, added all the details of her meetings, including the address, attendees, and subject, to the 'title' of an event created in Google Calendar, even though Google Calendar provides a separate 'description' field: "I always put everything into the title. I don't use the description, detail [because then] I will have to open it in order to see the details." When recording tasks in a Word document, P4, a DIYer, used Word's comment feature to add details to her tasks, such as how to perform the task, the need to check out something before starting the task, or sending an email about the task.

>>Use of color: We observed different uses of color in making lists, the most common being to differentiate between types or importance levels of tasks. Four participants purposely chose colors to represent a task's category, importance, or urgency. Examples include using red for urgent or important tasks, and cool colors, like blue, for personal tasks. P12 and P7 used arbitrary colors to focus their attention on the most important tasks on their lists. The main reason for using color, whether for focusing attention or for differentiating between types of tasks, was to facilitate visual search in a task list. Some individual characteristics, such as small handwriting, increased the need to use color for facilitating visual search: "it's much easier to differentiate my tasks with color because my handwriting is small" (P1).
Others, like P5, used different colors simply for the sake of adding variety to their lists: "I just make them [to-do items] colored differently, I thought it was boring to just have one color. I usually try if they're really important then I make them red, but other than that I just color them differently because if I have everything blue then I wouldn't look at it at all. I tried that [meaningful colors] in the beginning but it didn't work out because I couldn't keep track of it." Similarly, P1 and P6 used colored paper because it was more attractive than plain white paper.

>>Use of space: Whenever a tool allowed, DIYers exhibited a variety of uses of space in making their task lists. For example, we found various uses of space on a piece of blank paper or in a plain Word document. One common use was differentiating tasks from notes, which we observed through two distinct examples: 1) adding some notes to a paper list by creating a box in the corner of the paper (P16), and 2) dividing a paper in half such that the left side includes the days of the week and their corresponding tasks, and the right includes any kind of notes, either relevant or irrelevant to the tasks on the left (Figure 2.2-b). Two other common patterns were 1) dividing a list into multiple columns, each representing a different category of tasks, and 2) placing high-priority items at the top and low-priority ones at the bottom. This division of tasks into different regions of a list with respect to various criteria, such as viewing frequency or priority, was an attempt to make optimal use of available space (Kirsh, 1995) and attention. However, participants' behavior with respect to use of space was not always persistent. Running out of space and the difficulty of placing every task legibly in one view were two reasons for non-persistent behavior in the use of space.
Distributing tasks across multiple tools

Participants were found to distribute some of their to-do items across tools such as email, calendars, and web browsers. This is similar to Bellotti et al.'s (2004) finding that to-dos are stored in different resources. However, while they found that people kept only a minority of their to-dos in their to-do lists, we found considerable diversity across our participants with respect to the proportion of their tasks in lists and the spread across other tools.

Estimating task completion time

Unlike the previous two behaviors (making task lists and distributing task items), which were explicit when recording tasks, estimating task completion time was an implicit behavior manifested in the number of tasks scheduled for a day. Four participants seemed to be more optimistic than others with respect to the number of tasks they believed they could accomplish in a day. When asked "Of your overall set of tasks in a day, what percentage of them are you likely to get done?", three of them mentioned 60-70% and, surprisingly, all three were satisfied with their task performance. Through further analysis, we found that these participants tended to overestimate the number of tasks they could accomplish because they wanted to accomplish more in a day, and they were fully aware of this self-enhancing bias. This is consistent with "wishful thinking" (Buehler, Griffin, & Ross, 1994), where people tend to think they will finish their tasks quickly because that is what they want. We also found that overestimating the number of tasks was not a persistent behavior; it could depend on a number of factors, including the level of busyness, the task constraints imposed (deadlines), state of mind, and the nature of the task, namely whether its completion time was difficult to estimate.
The following quote shows how individuals can vary on a day-to-day basis from being optimistic to realistic according to both external and internal factors: "What percentage of the ones that I expect to get through in the day really depends day-to-day…because sometimes I'm like 'ok push yourself! Be optimistic! See what you can do!' and it's like then I get half of them done, or whatever…and other days I'm more realistic, it's like 'ok, I have to get these three things today', because they're due or whatever, and then I'll get these three things done." (P7)

Underestimating the time it takes to complete a task can be caused by estimation difficulty and the planning fallacy (Buehler et al., 1994). The planning fallacy is a form of optimism in which people focus on the most optimistic scenario for their target task and do not consider their past experiences with similar tasks. When underestimation was due to the planning fallacy, not accomplishing all tasks by the end of a day did not lead to any frustration. All three satisfied optimistic participants (described above) appeared to exhibit the planning fallacy. However, when underestimation was caused by the difficulty of estimating, not accomplishing all tasks by the end of a day could lead to frustration. For example, P13, a post-doc who mostly referred to research-related tasks such as writing and reviewing, described his main problem with PTM: "Estimation is one problem and the kind of stuff we do, we never know exactly how much time they are gonna take. […] The stuff we do is too vague, we can't decide how much time they are gonna take […] it's a bit frustrating when you couldn't accomplish the things that you had planned."

2.3.2.2 Remembering tasks

Five categories of remembering strategies emerged from the data analysis.
They were either chosen by participants or imposed by their tool or situation: a notification-based strategy (setting reminders), a polling-based strategy (checking a task list frequently), an association-based strategy (associating an object or a time with a task), a social-distribution strategy (relying on another person to remind), and rehearsal or trying to remember.

Notification-based strategy. This strategy refers to setting reminders such that users can rely on their tools to remind them of their tasks at the right time. Although all the focus group and contextual interview participants who used a digital calendar adopted this strategy to some extent, it was the dominant remembering strategy of adopters in that study.

Polling-based strategy. DIYers and adopters checked their task lists frequently. We refer to this as a polling-based strategy; it did not involve the overhead of setting up reminders, but did require the due diligence of checking the list often. When employing this strategy, people devised tactics (such as putting high-priority items at the top) to draw their attention to particular task items when they quickly glanced at their lists. Setting up "to-do" folders for keeping track of tasks in email—a reminding strategy people have employed for managing their tasks in their email (Whittaker, 2005; Whittaker & Sidner, 1996)—belongs to the category of polling-based strategies because users have to explicitly check the folder to be reminded of their tasks. According to Whittaker and Sidner's study, though, most people abandoned the polling-based strategy in email because of its demanding nature: they still had to remember to do something, for example checking the "to-do" folder, in order to get reminders for their to-dos.

Association-based strategy. Depending on the type of task, our participants described associating an object or a time of day with a task in order to be reminded of the task.
An external task representation, such as a pile of papers on the desk, was associated with the task of reading. When associating an object with a task, the object was placed somewhere to ensure that it would be noticed. We refer to this method of remembering tasks as an association-based strategy, which includes some of the strategies reported by prior studies of task management and prospective memory: document piling and pile placement, which have been referred to as spatial cues aiding the remembering of tasks (Lansdale, 1988; Malone, 1983; Whittaker, 2005); the "in the way" property of to-dos, which refers to placing things in the physical or digital environment (e.g., placing an umbrella by the front door) (Bellotti et al., 2004); and event-based prospective remembering of tasks that are supported environmentally by an activity, a person, or a place (Einstein & McDaniel, 1990). Leaving a message unread in the inbox—the most frequently used reminding strategy among email users (Venolia, Dabbish, Cadiz, & Gupta, 2001)—is an association-based strategy because a message is associated with a task and is left in the inbox, where it gets attention every time the user checks her email. Associating a task with a web page and leaving the page open in a web browser is another example of an association-based strategy that we observed. Association-based remembering strategies often have an advantage over the other strategies: they provide access to the relevant information/object needed for executing the task while reminding one of the task (Whittaker, 2005). The act of recording a task is accompanied by identifying and accessing relevant information, which is otherwise part of the process of executing the task. We also found participants associating a time with a task—introducing habits into one's life as a remembering strategy—which corroborated Bellotti et al.'s (2004) finding that some tasks had temporal regularities, i.e. they were likely to be done at a certain time.
Social-distribution strategy. Some participants also reported relying on another person (e.g., a friend) to remind them of a task. We refer to this strategy as social distribution. For example, P8 in Study One used this strategy for remembering a task such as a meeting: "If it's [a meeting with] a friend, I probably wouldn't put it into my calendar, if it's like a friend that I see all the time. Because I would probably rely on the fact that we're gonna be in constant communication and that they'll remind me of it." This strategy was only reported for tasks that involved others, and only when those others were collocated. Previous work on workplace communication has shown that casual encounters with others in shared social environments help people remember their outstanding tasks (R. E. Kraut, Fish, Root, & Chalfonte, 1990; R. Kraut, Egido, & Galegher, 1988; Whittaker, Frohlich, & Daly-Jones, 1994).

Rehearsal and trying to remember. Lastly, rehearsal and trying to remember was a strategy that participants resorted to either because their tools were unavailable for recording a task at the moment the intention to perform it was formed, or because the interval between forming the intention and acting on it was short. In addition to the differences in remembering strategies across user types, we found that some of the differences were across task types (e.g., tasks that involve another person, tasks that can be associated with an object).

Similar to keeping strategies that have been coupled with retrieval strategies in PIM, the strategies for remembering tasks were coupled with task recording methods. The five remembering strategies differed in both their companion recording method and in how a participant would be reminded.
The companion recording methods included setting a reminder for a task (in notification-based), entering the task in a to-do list (in polling-based), creating associations (in association-based), telling someone of the task (in social distribution), or making a mental note (in rehearsal and trying to remember) (Table 2.2). Aside from the rehearsal strategy, where one would rely on one's memory, individuals needed to rely on an external entity for remembering in the first four strategies: a system in notification-based, a task list in polling-based, an object in association-based, and another person in social distribution. To summarize, most differences in remembering seemed to be related to differences in recording tasks across participants and differences across task types.

Table 2.2 Key features of remembering strategies used by the participants: the companion recording method and how one gets reminded.
  Notification-based. Recording method: setting reminders. Reminded by: pop-ups, email, ringing. Example: "I used to have pop-up reminders" [P18]
  Polling-based. Recording method: entering the task in the list. Reminded by: checking the to-do list. Example: "I get reminded only when I choose to look at the list" [P8]
  Association-based. Recording method: associating an object or a time with the task. Reminded by: encountering the object. Example: "The pile is a good signal" [P13]
  Social distribution. Recording method: telling someone of the task. Reminded by: being told by someone. Example: "sometimes, I ask my wife to remind me to call someone" [P19]
  Rehearsal and trying to remember. Recording method: making mental notes. Reminded by: [not explained]. Example: "I will make a mental note that I have to add this to my task list" [P7]

2.3.2.3 Organizing and maintaining task lists

Adopters and DIYers modified their task lists, and the frequency of their modifications depended on several factors: the time period that their list covered, how broad their planning scope was, how accurately they estimated their task completion time, and how accurate they wanted their list to be.

Regarding the planning scope, participants who planned very far ahead would often find themselves modifying more because these future tasks were not clearly defined at the time of recording. Similarly, underestimating task completion time led to rescheduling and therefore modification of the task. For instance, when studying for exams, P8 would always set unrealistic goals for himself by creating a large list of subjects to study. At the end of each day, he always had to modify this list because he was not able to finish them all. Participants who always wanted an accurate reflection of their tasks and their priorities would modify their lists quite often as well (P4, P7, P3, P12, P1, P8). These modifications typically involved adding tasks, changing details, or reorganizing tasks. Regrouping tasks and moving them up and down the list so that task locations on the list reflected priorities were common behaviors among our participants.

When done with the tasks on their task list, DIYers and adopters employed various post-completion strategies such as crossing off, checking off, archiving, or deleting the tasks. Similar to the adoption of remembering strategies, the adoption of each of these strategies was in part influenced by the affordances of the tool used to record tasks and by the type of tasks. For example, crossing off items was more common when using paper than digital lists, since not all digital lists supported this action and tasks written on paper cannot be easily deleted. Tasks received by or related to email would typically be archived, or simply just left alone, as were Google Calendar items.
Tasks on digital lists such as Google Tasks or documents were normally deleted to avoid cluttering the screen (P7, P5, P4). In addition to tool affordances, personal factors such as a sense of accomplishment and level of busyness influenced post-completion strategies. For example, in order to feel a sense of accomplishment, P5, who used OneNote, first moved his completed tasks to the top of his list before deleting them at the end of the day.

2.4 Study Two: survey questionnaire

To assess the viability of grouping people based on the two criteria described in Study One (type of the primary tool and personalization), we conducted an online survey one year later with a more heterogeneous population and asked the respondents about the tools they used as well as their personalization behaviors. The goal was to extend our understanding of the three types of users identified in Study One by assessing the extent to which they generalize to a broader population. Here, we describe the survey design, the respondents, and the data analysis methods.

2.4.1 Survey design

The results of Study One were used to guide the design of the survey, which comprised four sections (see Figure 2.4). Appendix B includes all the survey questions. The first and the last sections of the survey included generic questions that were answered by all respondents. The first section asked all respondents about individual characteristics (e.g., job and busyness) and the tools they used for PTM. Depending on their responses to the first section, respondents were directed to different survey sections: respondents who had reported using a dedicated tool in the first section were directed to the second section. Among the others, those who had indicated that they made a task list were routed to the third section (see the flowchart in Figure 2.4). In the last section of the survey, all the respondents were asked about their use of other tools, such as email and web browsers, for PTM.
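The branching just described amounts to a simple dispatch on two screening answers. The following is a rough sketch of that routing logic in Python; the function name and section labels are paraphrases for illustration, not the survey's actual wording:

```python
# Hypothetical encoding of the survey routing (Figure 2.4): everyone answers
# the first and last sections; the middle section depends on two screening
# answers from the first section.

def survey_sections(uses_dedicated_tool, makes_task_list):
    sections = ["Section 1: individual characteristics and tools used"]
    if uses_dedicated_tool:
        sections.append("Section 2: questions about the dedicated PTM tool")
    elif makes_task_list:
        sections.append("Section 3: questions about list-making")
    # Respondents with neither a dedicated tool nor a task list skip
    # straight from Section 1 to Section 4.
    sections.append("Section 4: use of other tools (email, web browser)")
    return sections

print(survey_sections(False, False))  # only the two generic sections
```

Note that a respondent who both uses a dedicated tool and makes lists is routed to the dedicated-tool section only, mirroring the precedence in the flowchart.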
2.4.2 Respondents

Survey respondents were recruited by a series of invitation emails to various departments at the University of British Columbia as well as to the authors' friends and colleagues. The goal was to distribute the survey to people with diverse occupations. A total of 182 people responded to the survey. To limit participation to people who had experience with task management, the first question of the survey asked respondents whether they had ever used any tool, such as a calendar, paper planner, or a piece of paper, to manage their tasks. Four respondents had never used any of these tools and were thanked for their participation after responding to this question; the remaining 178 respondents completed the survey. The majority of respondents (134/178, 75%) were female, 42 (24%) were male, and two participants did not disclose their gender. Despite our goal of broadening our sample beyond academics, we still attracted many professors and graduate students to our study (88/178, 49%). The non-academics (51%) included nurses, teachers, administrative staff, software developers, lawyers, and consultants, among others (see Table 2.3).

Figure 2.4 The survey structure. Participants were directed to different sections of the survey based on their responses to the two questions of whether they use a dedicated PTM tool and whether they make lists.

Table 2.3 Participants' occupations in Study Two. Others represent occupations held by one or two respondents only: editor, publisher, financial analyst, designer, accountant, engineer, church minister, community organizer, communication professional, medical doctor.
  Grad students: 68
  University professor/post-doc: 20
  Nurse: 20
  Teacher: 18
  Administrative staff: 8
  Manager: 7
  Lawyer: 5
  Software developer: 4
  Consultant: 3
  Others: 25
  Total: 178

2.4.3 Data analysis

We analyzed the data of 164/178 (92%) respondents for the purpose of identifying their types, namely DIYers, make-doers, and adopters. The remaining 14/178 (8%) respondents filled out the survey incorrectly and were excluded from the analysis.10 Below, we describe how we identified the three types of users among the survey respondents.

10 These participants were directed to the adopters-only section of the survey (Figure 2.4), even though they did not actually use any dedicated PTM tool. When asked for the name of their dedicated PTM tool (after indicating that they did use one), they provided the name of a non-dedicated PTM tool (e.g., wiki, calendar, etc.) instead.

2.4.3.1 Analysis method for identifying the three types of users

To identify adopters, we used a simple question of whether or not respondents use a dedicated PTM tool. However, to distinguish DIYers from make-doers, we used a combination of methods. First, we asked non-adopters if they maintained some form of a task list, which we defined in the survey as "a physical or digital page/note on which they write/type/enter their tasks." This was to distinguish those who did not make any task list (those categorized as make-doers in Study One) from those who made task lists and could be either make-doers or DIYers depending on their personalization behavior. Second, to distinguish DIYers from make-doers among the list-makers, we used two distinct methods, namely clustering and manual classification, both based on responses to six questions related to personalization when making lists.
We specifically focused on making lists because the most distinguishing characteristics of DIYers were that, unlike make-doers, who barely kept to-do lists and managed their tasks in an ad hoc way, they maintained task lists and personalized them by using color, symbols, and sketching, and they had a systematic approach to PTM. Moreover, DIYers were more likely to come up with their own layout for their task list rather than use a default layout, and they would use different parts of their task lists. Based on these observations, we used the following six questions related to personalization when making lists for the purposes of clustering and classification.

1. Use of color: whether or not the respondent uses color when making a list (Q23 or Q37 in Appendix B).
2. Use of symbols: whether or not the respondent uses any symbol (e.g., star, arrow, etc.) when making a list (Q23 or Q37 in Appendix B).
3. Use of sketching: whether or not the respondent uses any sketching in her list (Q23 or Q37 in Appendix B).
4. Use of space: the degree to which the respondent uses different parts of the task list (Q30 or Q44 in Appendix B).
5. Ad hoc management: the degree to which the respondent manages her tasks in an ad hoc way, i.e., with no systematic way of managing tasks (Q29 or Q43 in Appendix B).
6. Layout of tasks page: how the respondent chooses the layout of her tasks page (using the default/built-in layout vs. coming up with a layout by herself) (Q26 or Q40 in Appendix B).

For the manual classification method, one coder manually assigned respondents to the two groups (DIYers and make-doers) based on their responses to the above questions. Since the coder's confidence in this categorization varied across the respondents, she also rated her level of confidence in classifying each non-adopter on a scale of 1 (not confident at all) to 5 (very confident).
We posited that one coder would be sufficient if the results of the manual classification matched those of the automatic clustering. However, if the results of the two methods did not match, further investigation of the reliability of both methods would be deemed necessary.

To validate the manual classification method, we performed an automatic clustering analysis on the same respondents. We used two clustering algorithms for this purpose: hierarchical clustering and fuzzy clustering. The analysis was done in R, using the cluster package. To compute the dissimilarities between participants, we used the daisy method with Gower as its metric. Gower's dissimilarity coefficient was used because we had three types of variables: nominal, ordinal, and binary. With Gower, each variable is standardized by subtracting the minimum value and dividing each entry by the range of the corresponding variable, so that the rescaled variable has range [0, 1]. To select the best number of clusters, we used the average silhouette width, which measures how well each object belongs to its cluster. We ran the PAM algorithm for several numbers of clusters (k = 2, 3, ..., 9) and compared the resulting silhouette plots: the largest silhouette width for all the above groups was found for k = 2; in other words, the data was best described by two clusters. We compare the results of the two methods in Section 2.5.3.

2.4.3.2 Analysis method for assessing associations between user types and individual characteristics

To investigate how user type is related to the individual characteristics (gender, level of busyness, satisfaction with one's PTM, interest in PTM, reliance on memory for remembering tasks, and occupation), we used multinomial logistic regression analysis. These variables were chosen because Study One suggested they might vary with users' PTM behaviors. We discuss the results of this analysis in Section 2.5.4.
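To make the Gower computation described above concrete, the rescale-and-average idea can be sketched in a few lines of Python. This is an illustrative reimplementation under the assumptions stated in the comments, not the R daisy() code used in the analysis; the variable names and the two respondent records below are hypothetical:

```python
# Sketch of Gower dissimilarity for mixed variable types, assuming:
# binary/nominal variables contribute 0 if equal and 1 otherwise, and
# ordinal variables are treated numerically, contributing |a - b| / range
# (i.e., min-max rescaled to [0, 1]). This mirrors the description in the
# text, not the exact daisy() implementation.

def gower(a, b, kinds, ranges):
    """Average per-variable dissimilarity between records a and b.

    kinds[i]  : 'cat' (binary/nominal) or 'num' (ordinal treated as numeric)
    ranges[i] : max - min of variable i over the data set ('num' only)
    """
    total = 0.0
    for x, y, kind, rng in zip(a, b, kinds, ranges):
        if kind == 'cat':
            total += 0.0 if x == y else 1.0
        else:
            total += abs(x - y) / rng if rng else 0.0
    return total / len(a)

# Hypothetical respondents: (uses_color, uses_symbols, use_of_space 1-5, adhoc 1-5)
kinds = ['cat', 'cat', 'num', 'num']
ranges = [None, None, 4, 4]      # range of the two 5-point scales
diy = (1, 1, 5, 1)
makedo = (0, 0, 1, 5)

print(gower(diy, diy, kinds, ranges))     # → 0.0, identical records
print(gower(diy, makedo, kinds, ranges))  # → 1.0, maximally dissimilar
```

With such pairwise dissimilarities in hand, a k-medoids algorithm like PAM can partition the respondents, and comparing the average silhouette width across candidate values of k indicates the best-supported number of clusters.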
2.5 Findings: Study Two

We report on the following findings: the tools the survey respondents used; the results of assessing the generalizability of identifying adopters, DIYers, and make-doers using our manual classification method and automatic clustering method; associations between user types and individual characteristics; and behaviors of adopters.

2.5.1 Tools used

Similar to the participants in Study One, the survey respondents were found to use a tool-set, rather than a single tool, for their PTM. Each reported tool fell into one of the following categories: paper planner; other forms of paper (e.g., a piece of paper, sticky note); electronic notes (e.g., text file, Word document, spreadsheets, note-taking applications such as Evernote); electronic calendars (e.g., iCal, Google Calendar); and dedicated PTM tools (e.g., Wunderlist). We found 24 unique combinations of these tools such that each combination was used by at least three respondents (Figure 2.5). The five most frequent tool combinations (rows 20-24) include paper, which might either suggest the inadequacy of electronic tools on their own or, alternatively, suggest the interoperability and flexibility of paper. In addition, any individual tool was rarely used solely on its own, suggesting the inadequacy of any single PTM tool on its own: eight respondents relied solely on paper (row 19), six solely on a paper planner (row 17), and only three solely on their dedicated tools (row 9).

Figure 2.5 Tools used by the survey participants. The table on the left shows 24 different combinations of tools that were each used by at least three respondents; 32 other combinations (not shown) were used by fewer than three respondents. Each row represents a unique combination and each column represents a single tool. For example, row #24 shows the combination of paper, paper planner, e-note, and email that was used by 13 respondents.
In the last section of the survey, respondents were asked about their use of email and web browsers for their PTM, since these were two interesting behaviors we observed in Study One. The majority of the respondents who made task lists and kept email messages as to-dos in their inbox preferred to have a feature for transferring their email messages to their task list (69%, 97/140). Keeping web pages open as tasks was not as common among the survey respondents as keeping email messages in the inbox; 80/178 (45%) of the survey respondents kept some of their web pages open as reminders of their tasks, and the majority of them (71%, 57/80) would like to have the option of transferring their web pages as to-dos to their task lists.

2.5.2 Identifying adopters

In Study One, all the participants who used a dedicated tool used it as their primary PTM tool while using other tools only occasionally. None of the non-adopters used a dedicated PTM tool in any capacity (i.e., even as a non-primary PTM tool). We therefore used the question of whether a participant currently uses any dedicated PTM tool as a filter to identify adopters in our survey design. However, despite our observation in Study One, we found large variation in the way dedicated tools were combined with other tools, based on the comments made by some survey respondents (Figure 2.5): while some respondents reported using their dedicated tool only minimally compared to their other tools, others relied solely on their dedicated PTM tool to manage their tasks (Figure 2.5, row 9). For example, a university professor who reported using the Wunderlist application commented: "Though I admit it's probably been a month since I've logged in" [SR137], and another university professor, who reported using Google Tasks, said: "It does offer all kinds of advanced functionality but I don't use it usually. To be honest I don't use this tool much compared to a flat email todo list and my calendar app" [SR142].
Despite the variation in the extent to which the respondents used their dedicated tools as their primary PTM tool, and the limitation of using a survey instrument to capture the actual usage of dedicated tools, we tentatively continued to label those who reported using a dedicated PTM tool as "adopters." Seventy-five of the 164 respondents (46%) were labeled as adopters.

2.5.3 Identifying make-doers and DIYers

The next step was to identify make-doers and DIYers among the non-adopters, i.e., the respondents who reported not currently using any dedicated e-PTM tool (89/164, 54%). To do so, we performed the analyses described in Section 2.4.3.1. Thirty-five of the 89 non-adopters (39%) reported not having any task list and thus were identified as make-doers. Fifty-four of the 89 non-adopters (61%) made lists, and thus we used the two methods of manual classification and clustering, described earlier, to identify their types. Figure 2.6 shows the number of non-adopters manually classified as make-doers or DIYers with different levels of confidence. Respondents who were classified with only low confidence (scale of 1 or 2) had some similarities with both DIYers and make-doers (see Table 2.4, row 7).

Automatic clustering. When clustering was performed (based on the questions related to personalization behaviors) for the group of respondents who were manually classified with high confidence (48%, 26/54), its results completely matched those of the manual classification method. In other words, as mentioned in Section 2.4.3.1, the clustering algorithm showed that the data was described by two clusters, and the two clusters were the same as the two groups to which we had assigned the respondents in the manual classification. The same was true when automatic clustering was performed on the combined high- and medium-confidence group (78%, 42/54; Figure 2.6).
However, discrepancies were found between the results of the two methods when the low-confidence group (22%, 12/54) was included in the automatic clustering. This is not surprising given that these respondents were originally manually classified with low confidence. Therefore, since the results of the manual classification and automatic clustering largely matched, we saw little benefit in adding a second coder to the manual classification, as discussed earlier.

To summarize, of our 164 respondents, 75 (46%) were adopters, and 35 (21%) were immediately identified as make-doers because they reported neither using a dedicated PTM tool nor making task lists. Fifty-four participants (33%) made task lists and therefore required further disambiguation. Out of these 54 list-makers, we were able to identify the user type of 42 respondents, namely 31 DIYers (19%, 31/164) and 11 make-doers (7%, 11/164), because the outcomes of our two methods for assessing generalizability (manual classification and automatic clustering) were identical for these respondents. However, we would need more evidence to categorize the remaining 12/164 (7%) list-makers, whose types were only identified with low confidence (Figure 2.6 and Figure 2.7).

Figure 2.6 User types identified in manual coding with high, medium, and low confidence. The result of the clustering of the respondents who were manually labeled with high or medium confidence matched their manual classification. (N=54)

The survey study allowed us to reach a much larger population. Although the data lacked the richness of the data collected in Study One, it extended our understanding of PTM differences across individuals. Table 2.4 summarizes the survey results compared to those of Study One.
The survey showed that some individuals shared attributes with multiple user types, which caused us to rethink the distinct user types we identified in Study One: instead of belonging exclusively to one of the categories of DIYers, make-doers, or adopters, individuals demonstrated coexisting tendencies toward DIYing, make-doing, and adopting. What varied across individuals, though, was the relative strength of these tendencies. We further reflect on this in Section 2.6; from this point forward, we use DIYers, make-doers, and adopters to refer to those participants whose tendency was strongest toward DIYing, make-doing, or adopting, respectively.

Figure 2.7 Summary of user types identified among the survey respondents: 75 adopters, 46 make-doers, 31 DIYers, and 12 others. Others are the respondents whose types were identified only with low confidence. (N=164)

Table 2.4 Summary of the survey results confirming and extending the results of Study One.

What did the survey confirm?
1. Similar to the DIYers in Study One, the 31 survey respondents classified as DIYers with medium-high confidence reported three or all four of the following:
   - Had a systematic approach to their PTM
   - Used color/symbols/sketching when making a list
   - Used different parts of their task list
   - Came up with their own layout for their task list
2. Similar to the make-doers who made lists in Study One, the 11 survey respondents classified as make-doers with medium-high confidence reported three or all four of the following:
   - Had an ad hoc approach to their PTM
   - Did not use color/symbols/sketching when making a list
   - Did not use different parts of their task list
   - Used a default layout for their task list
3. Similar to the make-doers who did not make lists in Study One, the 35 survey respondents classified as make-doers did not maintain any form of task list.
4. Similar to the adopters in Study One, the 75 survey respondents classified as adopters reported using a dedicated PTM tool.

How did the survey extend our understanding?
5. Unlike the adopters in Study One, who used their dedicated tool as their primary tool, two of the survey respondents classified as adopters reported using their dedicated tool only minimally compared to their other tools.
6. Unlike the adopters in Study One, who actively chose their dedicated PTM tool, 40/75 of the survey respondents classified as adopters used their dedicated tools because they were pre-installed and handy to use (as will be described in Section 2.6.5). These respondents shared attributes with:
   - Adopters, because they used a dedicated PTM tool
   - Make-doers, in that they used their dedicated PTM tool because of its handiness
7. Unlike the DIYers and make-doers in Study One, who were clearly different in the extent to which they personalized their tools, 12 of the survey respondents classified as either DIYers or make-doers with low confidence shared attributes with:
   - DIYers, because they exhibited one or two of the behaviors in row 1 above
   - Make-doers, because they exhibited one or two of the behaviors in row 2 above

2.5.4 Associations between the user types and individual characteristics

In this section, we first describe the survey respondents' individual characteristics; then we report the associations that we found between those characteristics and the user types.

Busyness: The great majority of the survey respondents (93%, 165/178) considered themselves to be busy (Figure 2.8 (a)). Six of the 10 respondents who commented on their busyness reported having multiple jobs, and two pointed to the variability of their busyness: "Highly variable given the deadline schedules" [SR49], and "busyness ebbs and flows" [SR89].

Satisfaction with one's PTM: Ninety of the 178 respondents (50%) reported being satisfied or very satisfied with the way they managed their tasks, Figure 2.8 (b).
One of the dissatisfied respondents said: "I feel the way I manage tasks is quite good in theory but it fails during stressful times (in that I ignore my system in order to focus on whatever is stressing me out)" [SR158]. Failure of one's PTM system "during stressful times" was a common source of dissatisfaction among our respondents. Lack of a needed feature in their PTM tool and having tasks recorded across multiple tools were other reported sources of dissatisfaction.

Figure 2.8 Survey responses to six questions that were chosen based on Study One: each column illustrates the responses to one question. Questions were on a 5-point Likert scale and were binned into three groups of agree, neutral, and disagree. (N=178) [Panels: (a) "I consider myself a busy person"; (b) "I am satisfied with the way I manage my tasks"; (c) "I have always been interested in finding new ways to improve my task management"; (d) "I am very organized in regard to managing my everyday tasks"; (e) "I rely on my memory for remembering MOST of my tasks".]

Interest in improving one's PTM: Although only 49/178 (27%) were dissatisfied with their PTM, a majority of respondents (71%, 127/178) were interested in improving their PTM practices, Figure 2.8 (c). One respondent who was interested in improving her PTM said: "I'm still looking for the best ways to have in one place all tasks related to different areas of my life (work, studying, private life)" [SR163], and a disinterested respondent said: "I have always resisted new ways" [SR67]. Among respondents who were neither interested nor disinterested, SR167 said: "I know how to improve my task management. I just don't make those choices." These comments show that the differences in individuals' interest in PTM may be related to differences in their PTM needs, their resistance to new methods, and their self-determination in enhancing their PTM.
Being organized: Ninety-nine of the 178 respondents (56%) considered themselves very organized, Figure 2.8 (d). Two provided evidence for why they were organized: "trying to do everything on the schedule" [SR28], and "have to multitask and be open to changes" [SR143].

Reliance on memory: Although almost all the respondents considered themselves to be busy, 62/178 (35%) respondents still relied on their memories for most of their tasks, Figure 2.8 (e).

Occupation: The distribution of the survey respondents across different occupations was shown in Table 2.3. Since Study One was conducted with grad students and professors, we wanted to detect any differences that might exist between that group and others in the survey. Thus, for the purpose of our regression analysis described below, we treated occupation as a binary variable by grouping university professors and grad students as academics (49%) and the rest as non-academics (51%). While this is an imperfect grouping, our key goal was to distinguish other groups from the group we studied in Study One.

Based on a multinomial logistic regression analysis, described in Section 2.4.3.2, we found that occupation (p=0.015), reliance on memory for remembering things (p=0.019), and level of busyness (p=0.045) made significant contributions to predicting the user type. Compared to non-academics, academics were 3.36 times more likely to be a DIYer as opposed to an adopter. People who tended to rely on their memory for remembering tasks were 56% more likely to be an adopter as opposed to a make-doer. People who reported lower levels of busyness were 1.7 times more likely to be an adopter as opposed to a make-doer. We found no significant association between individuals' approach to PTM and their satisfaction with their PTM, being organized, or being interested in improving one's PTM. We further reflect on these findings, some of which might seem counterintuitive, in Section 2.6.
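For readers unfamiliar with how such effect sizes arise: in a multinomial logit, a fitted coefficient b for a predictor converts to an odds ratio exp(b), the multiplicative change in the odds of an outcome relative to the reference outcome per one-unit increase in the predictor. A minimal sketch follows; the coefficient value is hypothetical, back-computed from the reported 3.36 ratio rather than taken from the study's fitted model:

```python
import math

# Odds ratio from a multinomial-logit coefficient: exp(b) is the multiplicative
# change in the odds of an outcome (vs. the reference outcome) per one-unit
# increase in the predictor. The coefficient below is hypothetical, chosen so
# that the odds ratio matches the reported 3.36 for academics vs. non-academics.

def odds_ratio(coef):
    return math.exp(coef)

b_academic = 1.212   # hypothetical coefficient for the academic indicator
print(round(odds_ratio(b_academic), 2))  # → 3.36
```

A coefficient of zero corresponds to an odds ratio of 1, i.e., no association between the predictor and the outcome.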
2.5.5 Behaviors of adopters

The ultimate goal of this research was to inform the design of personalizable PTM tools that can better support individual differences. To achieve this, we investigated what may have caused adopters to use dedicated PTM tools and whether their tools sufficiently accommodated their needs. Here, we present our findings about adopters' tool use, including the tools used, adopters' awareness of their tool functionality, and their likes and dislikes.

Adopters' tool use: Although the four adopters in Study One used four different tools, Outlook appeared to dominate among the tools used by the adopters in the survey population. Table 2.5 summarizes the dedicated PTM tools used by adopters. When asked how they found out about their tools (Q15 in Appendix B), it turned out that all of the Outlook users and most Google Tasks users found out about their tool because either it was pre-installed on their computers or it was integrated into other applications they were using (e.g., Gmail). Starting to use a dedicated PTM tool because of its handiness (although we acknowledge that other reasons might have also played a role in such adoptions) can reflect these adopters' tendency toward make-doing. We further reflect on this in Section 2.6. Our finding that the majority of adopters use Outlook or Google Tasks needs to be interpreted with caution: together these applications appear to capture significant market share among our survey respondents; however, it is not clear whether this is because they accommodate the needs of a wide range of people. Rather, their relatively high use may be better explained by the fact that they typically come pre-installed on computers or are integrated with other applications such as calendar and email. Another reason for their high use could be the difficulty of discovering new tools.

Table 2.5 Dedicated PTM tools used by the adopters (N=75), the number of adopters using each, and how adopters found out about their tools.
  Outlook: 41 adopters (pre-installed on computer: 31/41; integrated with other apps used: 9/41; searching the Internet and word of mouth: 0)
  Google Tasks: 12 adopters (pre-installed on computer: NA; integrated with other apps used: 8/12; searching the Internet and word of mouth: 4/12)
  Others (19 different dedicated PTM tools): 22 adopters (pre-installed on computer: 0; integrated with other apps used: 0; searching the Internet and word of mouth: 22/22)

Adopters' awareness of their tool functionality: To gain insight into how to make users aware of the personalization facilities in personalizable PTM tools, we asked adopters how they currently became aware of their tool's functionality and what their preferred methods would be (Q17 and Q18 in Appendix B). Participants were allowed to choose multiple methods for each of these questions (Figure 2.9). Coming across functionality by accident (accidental discovery) was the most used method for becoming aware of tool functionality. However, it was not their preferred method. When asked how they would like to find out about their tool functionality in the future, the two most preferred methods were intentional browsing (49%) and getting recommendations from other users (61%), which should be considered when designing personalizable tools that help users become aware of personalization facilities.

Adopters' likes and dislikes: We asked adopters what they liked and disliked about their tools to better understand if and how their needs were accommodated by the existing dedicated PTM tools (Q19 and Q20 in Appendix B). Table 2.6 summarizes the tool characteristics and features that the 75 adopters liked and disliked.

Figure 2.9 Methods of becoming aware of tools' functionality among adopters (current and preferred).
Coming across the functionality by accident (accidental discovery) was the most common method for becoming aware of tool functionality, and getting recommendations from other users was the most preferred method. (N=75)

2.6 Discussion

In both Study One and Two, individuals differed in the tools they used for their PTM and in how they used their tools. We reported a range of PTM behaviors that differed both across individuals and across different types of tasks for an individual: recording behaviors (e.g., recording in a central task list vs. distributing across tools), remembering strategies (e.g., polling-based, notification-based, association-based, social distribution), post-completion strategies (e.g., crossing off, deleting, archiving), and organizing strategies (regrouping tasks, moving tasks up and down the list). Here we compare our findings across our two studies, and discuss how Study Two extends our understanding of individual differences in PTM. We also discuss the factors that we found to be associated with such differences, reflect on the benefits of assessing generalizability of findings, and discuss the limitations of our studies.

Table 2.6 What adopters liked and disliked about their tools. n is the number of adopters who liked or disliked each tool characteristic (N=75).

  Likes                            | n  | Dislikes                                                                 | n
  Ease of use                      | 28 | Lack of a needed functionality (e.g., prioritization of tasks, location awareness, integration with other tools) | 16
  Reminders                        | 24 | Seeing others' availability                                              | 10
  Use across multiple devices      | 10 | Not being able to make changes to the tool (e.g., location of UI elements, view of their task lists, the way the tool prints out task lists, default reminders) | 15
  Integration with other tools     | 8  | Not intuitive, visually appealing, or user friendly                      | 8
  User friendly and simple UI      | 5  | Electronic device                                                        | 6
  Features like crossing out tasks | 3  | No access to list when not at computer                                   | 4
  Adding details to tasks          | 2  |                                                                          |
  Drag and drop tasks              | 1  |                                                                          |
  Prioritization                   | 1  |                                                                          |
  Custom views of tasks            | 1  |                                                                          |

2.6.1 How individual differences in PTM compared across Study One and Study Two

In Study One, we identified three types of users: DIYers, make-doers, and adopters, based on the tools participants used and the extent to which they personalized their tools. When we used these criteria to categorize the respondents from Study Two, we found some clear DIYers, make-doers, and adopters among them. But we also found that some respondents shared attributes with both DIYers and make-doers, and some with both make-doers and adopters (Table 2.4). We categorized these respondents based on their strongest tendency. However, this result made us rethink the three types of users originally identified in Study One: instead of being mutually exclusive, we saw individuals demonstrate coexisting tendencies toward DIYing, make-doing, and adopting, and what differed across individuals was the relative strength of these tendencies. For every participant in Study One, the strength of one of the tendencies dominated the others, leading us to clearly identify three distinct categories: DIYers personalized to a great extent, make-doers were minimalistic in terms of the effort they were willing to spend on using tools for their PTM, and adopters used their dedicated PTM tool as their primary PTM tool.
This could indicate that our participants in Study One are perhaps prototypical examples of DIYers, make-doers, and adopters. Alternatively, perhaps we had insufficiently rich data for some of the Study Two participants to cleanly categorize them, or perhaps those categories should be thought of as potentially shifting over time or across contexts.

Our finding that some survey respondents shared attributes with multiple user types is similar to the findings of past email work that tried to classify participants using previously reported user profiles in managing email—no filers, frequent filers, and spring cleaners (Whittaker & Sidner, 1996). Boardman and Sasse (2004) were only able to identify no-filers and frequent-filers but no spring cleaners; instead, they found that many of their participants did not fall into any of these profiles because they employed multiple strategies. Similarly, Fisher et al. (2006) found little evidence of distinct email handling strategies; most of their participants fell into a middle ground. In addition, Bellotti et al. (2005) found that some of their participants shared behaviors with both frequent filers and no-filers, and based on those participants, they considered classification of people into specific categories an oversimplification of reality.

2.6.2 Factors associated with differences in PTM across individuals

Our results showed that occupation, level of busyness, and the extent of relying on memory for remembering tasks were significant predictors of individuals' behavioral tendencies (DIYing, make-doing, and adopting), i.e., the type of their PTM tools and the extent to which they personalized their tools.

Occupation. Academics, compared to non-academics, had a stronger tendency toward DIYing than adopting.
This is consistent with the finding of Study One, conducted with academics, that the majority of participants were DIYers. This may be because academics generally have more autonomy over their tool choices than people in other professions, or because the less structured nature of tasks in academia—a characteristic of academic tasks as described by some of our participants across our different studies—appeared to invite more DIY solutions to managing tasks. That said, there could be substantial differences across disciplines, which we did not investigate in our study. Similar occupational differences have been found in email practices (Cecchinato, Cox, & Bird, 2015). This suggests that personalizable PTM tools that provide flexibility to users might be more appropriate for people with certain jobs.

Reliance on memory. People whose tendency toward make-doing was strongest seemed to rely less on their memory for remembering tasks compared to people whose strongest tendency was adopting. We found this result counterintuitive. One explanation is that make-doers keep their tasks in the applications they use (e.g., starred email messages or open webpages) instead of keeping them in their memory. It could also mean that adopters simply have more tasks than make-doers; they record many in their tool but rely on their memory for others. Another possibility is that dedicated PTM tools do not support easy recording of tasks, so adopters do not bother to record every single one of their tasks and thus tend to rely on memory for some of their tasks.

Busyness. People whose tendency toward make-doing was strongest reported higher levels of busyness compared to people whose strongest tendency was adopting. We suspect that having ad hoc methods for managing tasks—as people with a strong tendency toward make-doing had—can get unwieldy, possibly overwhelming, and thus increase people's perceived level of busyness.
2.6.3 Barriers to using dedicated PTM tools

Our data on the participants who either abandoned using dedicated PTM tools or were not inclined to use one suggests some barriers to using dedicated PTM tools: 1) barrier to discovery of PTM tools, as it takes time and effort to find a tool that fits one's needs; 2) barrier to switching PTM tools, due to the investment in one's prior PTM tool (we elaborate on the cost of switching PTM tools in Chapter 3); 3) barrier to learning to use a PTM tool to its fullest capacity; 4) barrier to using an electronic PTM tool, due to reasons such as the difficulty of typing for some people compared to writing on paper; and 5) barrier to personalizing, which currently results from dedicated PTM tools' limited support for personalization. These barriers may explain, in part, why many people still prefer to use general purpose tools instead of spending time and effort to find a good PTM tool, only to realize that the tool is difficult to learn and is not personalizable enough to accommodate their specific needs. To increase their adoption, PTM tools need to remove these barriers. In Section 2.8, we discuss some ways of removing some of these barriers.

2.6.4 Benefits of assessing generalizability

A common caveat in both qualitative and quantitative studies is that the generalizability of their findings is rarely assessed. We tried to alleviate this problem by conducting a survey to reach a broader population and see to what extent the findings of Study One would generalize. Although for some respondents the survey format did not elicit sufficiently rich data to enable that assessment, in general the survey extended our understanding of the differences across individuals and thus revealed the benefit of assessing the generalizability of findings from small-scale qualitative studies similar to our focus group + contextual interviews.
The importance of revisiting HCI findings is extensively discussed in the field (Hornbæk, Sander, Bargas-Avila, & Grue Simonsen, 2014; Wilson, Chi, Reeves, & Coyle, 2014). We hope that our studies, and the evolution of our understanding they have enabled, provide additional motivation and evidence for the need to revisit HCI findings.

2.7 Limitations

The survey methodology in Study Two, compared to contextual inquiry methods such as that of Study One, has some limitations. However, given our goal of assessing the generalizability of the results of Study One, we opted for this methodology to reach a much larger number of people than could be reached by other methods.

Sample in Study One. Our sample in Study One was weighted more toward grad students. However, that was one of the reasons that we assessed the generalizability of our results to a broader population in Study Two.

Personalization behaviors of adopters. Our studies did not report personalization behaviors of adopters, because in Study One, personalization was a theme that emerged at the data analysis stage, more notably in the behaviors of participants who were using general tools. In fact, we did not ask the participants anything explicitly about personalization behaviors, and adopters did not report any personalization behaviors. An important future step would be to study the personalization behaviors of adopters and investigate to what extent they personalize their tools and to what extent their tools allow them to do so.

Effectiveness of individuals' PTM approach. In our studies, we did not investigate the effect of individuals' PTM behaviors on their productivity. Although this is an interesting and important avenue of research, we found it to be out of the scope of our studies, given the many factors that might play a role in the effectiveness of individuals' behaviors for managing their tasks.
2.8 Implications for design

While our focus was on understanding individual differences, we also gained some general insights into the design of PTM tools.

PTM tools should support variation in PTM across individuals. Grounded in our findings that individuals differ in the strength of their tendencies toward DIYing, make-doing, and adopting, we recommend that PTM tools have the capacity to accommodate this variation: they should be personalizable so that people with a strong DIY desire can personalize their tool when they need to, and they should be relatively effortless to use and integrate well with other systems in use to satisfy make-do tendencies. This is somewhat contrary to the recommendation we made after Study One—targeting and designing for different groups of users (Haraty, Tam, Haddad, McGrenere, & Tang, 2012)—and reflects the deeper understanding gained by our follow-up study and analysis. UI elements and system functionality that need to be personalizable in a PTM tool, both to satisfy the needs of individuals with a strong tendency toward DIYing and to support differences across individuals, include the view/layout of task lists, the way the tool prints out task lists, reminders, use of color, and integration with other tools, such as email or web browsers.

PTM tools should support variation in an individual's PTM across different task types. We also observed some variations in PTM behaviors across different types of tasks (instead of across users). For example, having different remembering strategies appeared to be in part related to the different types of tasks: the social-distribution remembering strategy was used for tasks that involved other collocated people, and the notification-based strategy was used more for tasks with strict deadlines. We observed this variation across task types only for remembering tasks.
Although we do not have data to support this, we think that variation across task types could exist for other PTM behaviors, such as recording tasks and post-completion strategies, as well. For example, one might choose to delete one-time tasks such as shopping tasks, but archive work-related tasks when done with them. If true, perhaps PTM tools could allow users to define different methods for various PTM behaviors across different types of tasks. For example, a user could define different effects for crossing off a task in a shopping category versus a work-related category, such that tasks in the shopping category get automatically deleted, and tasks in the work-related category get archived when crossed off. In Chapter 4, we show how our personalization mechanism can be used to modify the effect of crossing off tasks for different types of tasks.

Non-PTM tools should offer basic support for PTM. We learned that many people kept their tasks in the tools where the tasks were created or received. For example, open documents, open web pages, and unread/starred/flagged email messages were all representations of tasks. We also found that the majority of survey respondents preferred having the option of transferring such items to their task list, and that one of the sources of individuals' satisfaction (or dissatisfaction) with PTM tools was the provision of (or lack thereof) an overview of tasks in one place. Providing such a feature requires integration between PTM tools and non-PTM tools. For example, an email client integrated with a PTM tool could be configured to transfer starred messages to the PTM tool, or a web browser could be configured to transfer open pages, perhaps explicitly marked as tasks, to a PTM tool. Lack of such integration was a frequently cited reason for switching tools.
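One minimal form such an integration could take is a small export interface that non-PTM tools implement so a PTM tool can aggregate their tasks into one overview. The sketch below is purely illustrative; the `TaskSource` protocol and all class and method names are our hypothetical assumptions, not an existing API or the design proposed in this dissertation:

```python
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class Task:
    title: str
    source: str  # which tool the task came from, to help preserve task context

class TaskSource(Protocol):
    """Basic PTM support a non-PTM tool could offer: exporting its tasks."""
    def export_tasks(self) -> Iterable[Task]: ...

class EmailClient:
    """Starred messages are treated as task representations."""
    def __init__(self, starred_subjects):
        self.starred_subjects = starred_subjects

    def export_tasks(self):
        return [Task(s, source="email") for s in self.starred_subjects]

class WebBrowser:
    """Open pages explicitly marked as to-dos are treated as tasks."""
    def __init__(self, pages_marked_as_todos):
        self.pages = pages_marked_as_todos

    def export_tasks(self):
        return [Task(url, source="browser") for url in self.pages]

def centralized_overview(sources):
    """A PTM tool pulls tasks from every registered source into one list."""
    return [task for src in sources for task in src.export_tasks()]

overview = centralized_overview([
    EmailClient(["Re: budget approval"]),
    WebBrowser(["https://example.com/form"]),
])
```

Because each task records its source, a PTM tool built this way could later reinstate the task's context (e.g., reopening the marked web pages) when the user selects the task to work on.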
We realize that no PTM tool would be able to fully support integration with all the non-PTM tools people use, unless these tools offered some basic support for PTM. The basic support for PTM could include an easy mechanism for users to record tasks within the tool, as well as a mechanism to output the captured tasks to a PTM tool. This would allow a PTM tool to provide users with a centralized overview of all their tasks. Such integration could have the additional benefit of preserving a task's context and reinstating it when a user selects a task in a PTM tool to work on. For example, selecting a web-browsing-related task in a PTM tool would open the relevant web pages that have been marked as to-dos within a web browser.

PTM tools should support sharing of personalized tool use and practices. We found that the adopters' preferred method of becoming aware of their tool functionality was getting recommendations from other users of the same tool. In addition, some participants reported learning things from others and having adopted tips, strategies, and tools based on others' recommendations, confirming the findings of prior studies (e.g., (Murphy-Hill & Murphy, 2011)). Finally, as the variation in individuals' tendency toward DIYing showed, not everyone was willing to invest time in designing their own tool through personalization. Given all the above, if PTM tools were to support sharing of personalized tool use and practices, this could lower the barrier to entry for having a personalized tool, and could thereby enable more people to have a PTM tool that supports their specific needs. Some existing PTM tools, such as Remember The Milk, have forums where users share their personalized tool use and practices. We investigate different methods of sharing personalized tool use in Chapter 5.

2.9 Conclusion

Our studies build on and extend previous research on PTM by focusing on understanding individual differences in that domain.
We reported an earlier study—a focus group + contextual interviews—on individual differences in PTM, where we found that individuals belonged to one of the categories of DIYers, make-doers, or adopters based on the tools they used and the extent to which they personalized their tools. Then, we conducted a survey to assess the extent to which the results of our first study, which was conducted with an academic population, would generalize to a broader population that includes non-academics. Contrary to the findings of our first study, we found that many of the survey respondents did not belong to only one of the user categories of DIYers, adopters, and make-doers. Instead, we found that individuals demonstrate coexisting tendencies toward DIYing, make-doing, and adopting, and that what differed across individuals was the relative strength of these tendencies: some preferred using what was already available to them without personalizing it (people with a relatively strong tendency toward make-doing), and others preferred using a dedicated PTM tool (people with a strong tendency toward adopting) or even designing their own PTM tool by using a general-purpose tool and personalizing it (people with a relatively strong tendency toward DIYing). Based on this, we believe that PTM tools need to be designed in such a way that they accommodate the varying strengths of these tendencies across individuals, rather than being designed only for people with a strong DIY tendency or only for people with a strong make-do tendency. The assessment of generalizability of our prior findings showed how categorizing individuals into specific user groups for the purpose of summarizing individual differences can be an oversimplification of reality. We showed how job, level of busyness, and reliance on one's memory for remembering tasks were associated with the above tendencies.
Our data also suggest four barriers to using dedicated PTM tools (barrier to discovery, barrier to learning to use, barrier to use, and barrier to personalizing) that need to be minimized in order to increase adoption of dedicated PTM tools.

Changes in PTM Behaviors Over Time

In Chapter 2, we described differences across individuals' PTM behaviors: some people have a strong tendency toward adopting dedicated PTM tools, such as OmniFocus, Remember The Milk, or Wunderlist, that are specifically designed for PTM (adopters), some are more inclined to make do with the tools they already use (make-doers), some prefer to design their own PTM tool using general purpose tools, such as paper or a Word document (DIYers), and others have a combination of the above tendencies. In addition to supporting differences across individuals, personalizable PTM tools are desirable for supporting changes in an individual's behavior over time. Changes in an individual's PTM behavior over time have not, to our knowledge, been explored. In this Chapter, we study how and why PTM behaviors change over time to provide an understanding of the types of advanced personalizations that we would like to enable in our design in Chapter 4.

To investigate changes that occur in an individual's PTM behavior, we included a question in our survey questionnaire—introduced and described in Chapter 2—which asked 178 people with various occupations about the changes they made in their PTM behaviors and the reasons behind those changes (Figure 3.1). To deepen our understanding of the PTM changes that were reported in the survey, and to see if survey respondents had made any changes to their PTM since their participation in the survey, we conducted follow-up interviews with 12 of the survey respondents about a year later.

3.1 Related work

We reported on a number of PTM studies in Chapter 2 (e.g., (Bellotti et al., 2004; Blandford & Green, 2001)).
Although those studies provide insight into how people manage their tasks, they had little to no emphasis on understanding how PTM behaviors might change over time in order to inform the design of tools that can support such changes. The goal of this Chapter is to fill this gap. A number of studies have investigated changes in PIM behaviors as well as changes in email management, which we review below given that PIM and PTM are related to each other (Jones, 2007).

Bälter (1997) studied email management strategies and developed a model of how individuals' strategies change over time; he found that the choice of strategy was affected by the tool and the number of incoming messages, and that people exhibited both "pro-organizing" and "anti-organizing" transitions in their email management strategies: folderless spring cleaners started using folders and became spring cleaners (pro-organizing), and frequent filers gave up filing and became spring cleaners (anti-organizing). Similarly, some no-filers in Whittaker and Sidner's study (1996) had been spring cleaners before giving up that strategy. Boardman and Sasse (2004) conducted a longitudinal study to track changes both in personal information collections (files, emails, and bookmarks) and in the strategies used to manage them over the course of eight months. Their participants reported historical changes in their email strategies that involved both increases and decreases in organizing tendency. But the changes that they observed over the course of eight months were mostly in the form of subtle pro-organizing adjustments to an existing strategy rather than any major transitions such as the ones Bälter found (e.g., no-filer to spring cleaner).

Figure 3.1 The survey question about changes in PTM. The question provided space for writing five instances of change and their reasons. This screenshot is filled with the response of one of the respondents.
Our work builds on and expands this body of knowledge by investigating changes in PTM behaviors over time.

Table 3.1 Participants' occupations in the survey and the follow-up interview studies. Other represents occupations from which we only had one or two respondents: editor, publisher, financial analyst, designer, accountant, engineer, church minister, community organizer, communication professional, medical doctor, technology coordinator, rehabilitation specialist, and user support specialist.

  Occupation                     | # of survey respondents | # of interview participants
  Grad students                  | 68                      | -
  University Professor/post-doc  | 20                      | 4/20
  Nurse                          | 20                      | 4/20
  Teacher                        | 18                      | -
  Administrative staff           | 8                       | 2/8
  Manager                        | 7                       | 1/7
  Lawyer                         | 5                       | -
  Software Developer             | 4                       | -
  Consultant                     | 3                       | -
  Other                          | 25                      | 1/25
  Total                          | 178                     | 12/178

3.2 Methods

We conducted an online survey to elicit a large number of changes that can occur in individuals' PTM behaviors over time, to inform the design of personalizable PTM tools that can support such changes. As noted in Chapter 2, the survey was distributed to people with various occupations through snowball sampling: 178 participated in the survey (Table 3.1). Respondents were asked in an open-ended question to describe one to five changes they had made to the way they manage their tasks (Figure 3.1). A total of 328 changes were reported by 162 survey respondents. In an initial review of the changes, we found that 24 of the changes were not PTM related (an example of a reported non-PTM change was "use of a cloud storage website to facilitate managing of documents that needed to be printed"). Among the remaining 304 changes, 12 were not accompanied by a reason. Thus, we had 304 PTM changes and 292 reasons in our data. We used grounded theory to analyze the changes and their reasons. One coder open-coded 10% of the data and discussed the codes with a second coder. After coming up with a list of codes that both coders agreed upon, another 10% of the data was coded by both coders and an inter-coder reliability of 0.8 (Cohen's kappa) was obtained.
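For reference, Cohen's kappa compares the coders' observed agreement with the agreement expected by chance given each coder's label frequencies. The following is a generic sketch of that computation (not the authors' actual analysis script; the function name and data are illustrative):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders who each labeled the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: proportion of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    # Undefined (division by zero) when chance agreement is already perfect.
    return (p_o - p_e) / (1 - p_e)
```

For example, if the coders agree on 3 of 4 items with label frequencies of {x: 2, y: 2} and {x: 3, y: 1}, then p_o = 0.75, p_e = 0.5, and kappa = 0.5; values around 0.8, as obtained here, are conventionally read as strong agreement.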
The two coders then discussed the disagreements, and the primary coder coded the rest of the data. See Appendix G for our initial codebook. The unit of analysis was a single change and its reason(s). Through merging the codes and affinity diagramming of the reasons, we identified different types of changes in PTM behaviors and the contributing factors to the changes.

About a year after the survey study, we conducted follow-up interviews with 12 of the survey respondents who had indicated interest in participating in a follow-up study (65 survey respondents had indicated interest, 36 of whom were non-students; we wanted to follow up with non-students because the goal of this study was to assess the generalizability of our prior findings beyond our original population, which largely included students; out of these 36, we selected and contacted the 24 who had reported more changes than the other 12, and 12 of the 24 accepted to participate). The goal of the interviews was two-fold: 1) to deepen our understanding of the PTM changes they had reported in the survey, and 2) to hear what changes participants had made since their participation in the survey. Although we preferred the interviews to be at the participants' workplace, 6/12 interviews were conducted by phone (5/12 participants preferred phone interviews and one participant required a phone interview as she was not local). The length of the interviews ranged from six to 52 minutes (median=16.5), and participants received $10 for their participation. Participants were reminded of, and asked to elaborate on, the changes in their PTM behaviors that they had reported in the survey. They were also asked if there had been any further changes in their PTM since completing the survey. All the interviews were audio-recorded and transcribed. Thematic analysis (Braun & Clarke, 2006) was conducted on the changes collected in the follow-up interviews. Through an iterative process of developing themes, refining them, and validating them in relation to the data from both the survey and interviews, we came up with several themes for PTM changes and factors contributing to them, which we discuss below.

3.3 Findings

We present our findings: categories of changes in PTM as well as the factors that have contributed to them.

3.3.1 Changes in PTM behaviors

The survey revealed 304 changes in PTM. 30/304 (10%) changes involved transitioning from relying solely on one's memory for remembering tasks to starting to use a PTM tool (a general-purpose tool or a dedicated PTM tool). These were the most basic type of change in PTM. For example, P42 reported switching from memory to paper lists: "my responsibilities became more and long lists are getting longer and longer and I could not rely on my memory anymore. Aging was another reason for my recent change." The remaining changes were categorized into three groups: strategy changes (17%, 52/304), which did not involve the use of any tool; within-tool changes (20%, 62/304), which refer to changes made to a single tool; and tool-set changes (53%, 160/304), which were changes made to a tool-set—multiple tools used in combination to satisfy PTM needs.

Strategy changes (17%, 52/304) did not directly involve use of a tool and were in the form of revising, adopting, or abandoning a PTM strategy such as breaking down tasks into smaller tasks, talking about to-dos with others, or associating objects with tasks as a remembering strategy.
An example of a strategy change was: "[…] I made certain days of the week to be used for [a] specific job; thus I am spending less time on switching context from one job to another" [P26]. Although strategy changes might not directly affect tool use, PTM tools can still support them, for example by encouraging positive strategy changes and supporting the potential resulting changes in tool use.

Within-tool changes (20%, 62/304) were changes made to a single tool. Examples include starting to use reminders, highlighting/color-coding tasks, using a different view of a task list (e.g., changing a monthly view to a weekly view), creating/removing task categories, and prioritizing tasks by changing the order of tasks on a list. The range of within-tool changes seemed to be relatively limited, which may have been due either to the lack of flexibility of the PTM tools used or to the small number of respondents who were willing to make changes to their tools.

Tool-set changes (53%, 160/304) were changes made to a tool-set or to the relative usage of the tools in a tool-set. Examples include adding a tool to or removing a tool from one's PTM tool-set, as well as making greater use of one of the tools and less of the others in one's PTM tool-set. The latter change, which we observed mostly in the follow-up interviews, appeared to be associated with the cyclic nature of some changing needs that will be discussed in the next section, and with the relative affordances of the tools in supporting them. In 52/160 (33%) of tool-set changes, media changed as well. The most common changes in media were paper to digital (63%, 33/52) and digital to paper (23%, 12/52). 12/178 of the survey respondents reported having tried dedicated PTM tools but abandoning them. For example, a university professor who had tried several dedicated tools (Google Tasks, Remember The Milk, and Outlook) said: "I've often tried these, but find paper and pencil better for task lists" [P129].
In the next section, we explore what contributed to these changes in PTM. 3.3.2 Contributing factors to PTM changes  Understanding what contributes to changes in PTM behaviors can inform the design of personalizable PTM tools. Based on the survey study and the follow-up interviews and data analyses described in Section 3.3, we identified three groups of factors that contribute to changes in PTM behaviors: (1) changing needs, (2) dissatisfaction caused by unmet needs, and (3) opportunities revealing unnoticed needs. Some PTM changes were described as the result of changing needs, more specifically as the result of changes in factors that affect PTM needs such  Figure 3.2 Types of changes in PTM.   84  as job and busyness. The majority of PTM changes, however, were the result of dissatisfaction caused by unmet needs. Such dissatisfaction was often framed as missing support of a practice or tool for a PTM need. Lastly, there were cases, where an opportunity brought an unnoticed or infrequent need to a user's attention. In several cases, it was a combination of the above three reasons that contributed to a change. Below, we describe each in more detail (see Table 3.2 for examples for each of the contributing factors).  3.3.2.1 Changing needs Changes in factors such as busyness, job, family structure (e.g., getting married or having kids), tools used, and type of tasks managed were mentioned as reasons behind 95/304 of the PTM changes reported in the survey. Table 3.2 displays the number of PTM changes that were influenced by changes in each factor; some changes were influenced by changes in more than a single factor (e.g., some changes in job or family structure were accompanied with changes in busyness). Changes in job appeared to lead to PTM changes by increasing one’s busyness, imposing use of a specific tool, or changing the nature of tasks that need to be managed (e.g., having longer-term tasks to manage). 
Changes in family structure appeared to lead to PTM changes either by increasing busyness or by creating new needs, such as creating shared awareness of tasks. In general, changes in the factors affecting PTM needs/behaviors appeared to contribute to changes in PTM in two ways: 1) by directly imposing a change to an individual’s PTM system (e.g., being required to use Outlook in a new job), or 2) by changing PTM needs, in response to which individuals adapt their PTM behaviors. See Table 3.2 for quotes from respondents.

3.3.2.2 Dissatisfaction caused by unmet needs

In the majority of changes (74%, 226/304), respondents cited the support (or lack thereof) of their tools or practices for a PTM need as the reason for making changes to their PTM behaviors—adopting or abandoning PTM tools or practices. We divided this group of reasons into 14 subcategories based on the PTM needs that were cited either as being supported by a new tool/practice or as not supported by a previous tool/practice. Each subcategory represents a PTM need: supporting prospective memory; ease, continuity, and reliability of access to tasks; decreasing the overhead of task management; an appropriate view of tasks; getting a sense of satisfaction; keeping tasks in one/multiple place(s); creating shared awareness or collaborative management of tasks; scalable PTM (larger quantity and/or diversity of tasks); prioritization; better multitasking; better task breakdown, often to avoid procrastination; allocating time to tasks; uncluttering the physical/virtual workspace; and better management or keeping track of tasks (see Table 3.2 for numbers and example quotes). In 12/226 (5%) of the reasons in this category, respondents mentioned feeling stressed, overwhelmed, or confused in addition to mentioning the lack of support of their tool/practice for a PTM need.
The way that many respondents described how the dissatisfaction caused by unmet needs contributed to their PTM changes indicated that they had done some form of personal evaluation and reflection. They appeared to have reflected on their practices—sometimes prompted by their negative experiences—and evaluated the support of their tools/practices against a PTM need. Reflection has also been reported as a reason for changes in PIM behaviors. For example, participants in Boardman and Sasse’s (2004) study referred to “increased reflection” on their PIM practices due to participating in the study as the main factor causing changes in their PIM behaviors. Bruce et al. (2010) also found that some participants in their study of changes to personal information collections were conscious of others’ perception of their ability to organize information, and that triggered them to constantly reflect on their behavior and improve upon it.

3.3.2.3 Opportunities revealing unnoticed needs

Buying or the availability of a device or an application, and adopting suggestions by others for enhancing one’s PTM system, were mentioned as reasons for 16/304 (5%) of the PTM changes. We refer to these types of contributing factors as opportunities; see Table 3.2 for example quotes. In four of these cases, respondents also mentioned a PTM need that was better supported by their new tool/practice. However, it appeared that in those cases, the opportunities revealed some PTM needs that were not apparent beforehand. For example, a new smartphone (an opportunity) revealed to a portfolio manager the need to access a calendar while on the go: “switched from a paper planner to an electronic calendar for my personal tasks. [because] I got a Blackberry smartphone -- an easy way to have my calendar with me at all times” [P157]. This suggests that one way to make users aware of their needs is to provide them with some opportunities that they could take, which we elaborate on in the next section.
Table 3.2 Factors contributing to changes in PTM behaviors: examples and frequency (N=304).

(95) Changing needs

5  Changes in job (new job, entering grad school)
   “the tool we use at work” [P177], “movement from undergrad to grad school meant less day to day homework, more long-term assignments/goals” [P85]
40 Changes in busyness
   “I got too busy for this to be a reliable system” [P11], “more on the brain” [P132], “I was much busier all of a sudden” [P46], “On days when I have many tasks” [P142], “when the task list got bigger” [P154]
32 Changes in type of tasks managed
   “movement from undergrad to grad school meant less day-to-day homework, more long-term assignments/goals” [P85], “tasks that are due a later time” [P176], “started a new project which required different types of appointments” [P168], “for research collaborations” [P112]
9  Changes in family structure (having kids, getting married)
   “multiple children so this helps at a glance” [P140], “Kids started to have more activities” [P147]
8  Changes in tools used
   “I now work from a desktop, instead of a laptop” [P110], “changed my group membership and that is the default approach” [P13], “Started using two computers […] Different OS so not able to synchronize” [P53]

(16) Opportunities revealing unnoticed needs

11 Buying or availability of a new device (e.g., a phone, laptop)
   “got a Blackberry smartphone” [P161], “New work station […] with three white boards” [P110]
5  Suggestions from others
   “attended a time management workshop that made me realize that I was having trouble distinguishing between high urgency-low priority tasks and low urgency-high priority tasks” [P137]

(226) Dissatisfaction caused by unmet needs

55 Need for supporting prospective memory
   “Don't trust my own memory to keep tabs on everything” [P70], “otherwise I would forget” [P76], “I liked seeing the visual reminder (daily)” [P119], “provides reminders” [P145]
37 Need for ease, continuity, and reliability of access to tasks
   “I schedule a lot of things through email, and don't always have my paper planner nearby” [P45], “it was always available at home or work” [P156], “I would forget it [paper calendar] at home” [P125], “Lost/forgotten paper lists” [P127]
22 General need for better management or keeping track of tasks
   “having more time to organize” [P135], “the faster the action is taken the less tasks you have to remember and manage” [P173], “keeping track of tasks that are due a later time” [P176], “I feel that it takes me too long to get back to people” [P172]
21 Need for decreasing overhead of task management
   “found keyed-entry to be a little tedious” [P73], “I find I have a hard time making a habit of processing the things I have captured” [P81], “my paper planner was an extra weight to my bag” [P48]
17 Need for appropriate view of tasks
   “a concise reference point where I can get an immediate snap shot of what I need to do” [P92], “needed a planner that included monthly overviews and week-by-week sections” [P42], “made it difficult to know what to work on next” [P74], “Gives me a better overview; helps me look ahead and plan” [P87]
14 Need for getting a sense of satisfaction
   “gives me a sense of accomplishment” [P163], “helps improve the overall flow of the week and keeps me feeling on top of and in control of my life” [P106], “helps me feel as if I'm making progress” [P60]
11 Need for keeping tasks in one/multiple place(s)
   “Need to consolidate calendar using Outlook” [P15], “more efficient to centralize reminders in a calendar, beyond just meetings and appointments” [P161], “recording deadlines and making plans for action in multiple formats allowed me to benefit from an increase in perspective” [P55]
11 Need for creating shared awareness or for collaborative management of tasks
   “So that all in household can see and time conflicts can be avoided” [P146], “need for shared visibility of my schedule” [P112], “Easy to share to-do list with others as it is not limited to the applications that others use” [P36]
9  Need for scalable PTM (larger quantity and/or diversity of tasks)
   “my paper planner is just not large enough to handle all the different categories of tasks” [P9], “use to have a master list of tasks, split between school related and non-school related. These big buckets no longer suffice because they were too general and I had too much going on” [P71]
9  Need for prioritization
   “Very confusing to have two task lists. Was not able to prioritize” [P157], “Needed ability to sort tasks by due date and priority” [P51]
6  Need for better multitasking
   “I am spending less time on switching context from one job to another” [P35], “too many items to attend to that competed with focus, which caused too much stress and anxiety” [P100], “I was having trouble focusing on just one task when every time I looked at my task list I saw dozens (hundreds?) of tasks” [P162], “multitasking is not my forte” [P12]
5  Need for better task breakdown, often to avoid procrastination
   “helps keep me from procrastinating” [P123], “Never had time for bigger tasks because there were too many small tasks to deal with” [P157]
5  Need for allocating time to tasks
   “found I ran out of time if I didn't put it in as an event” [P34], “long list of "to dos" not done each day so I set aside time to address the items” [P84]
4  Need for uncluttering physical/virtual workspace
   “To (try to) keep my desk top somewhat clean, I make "To Do" lists, then I can put some stuff away” [P57], “it is less cluttered than post-its” [P129]

3.4 Discussion and implications for design

We characterized the changes in PTM behaviors over time based on whether a change is made to a strategy, a tool, or a tool-set. Within-tool changes and tool-set changes, in many cases, reflected the inherent adaptability and non-adaptability of tools, respectively.
Within-tool changes were often possible because of some level of adaptability in a tool. Non-adaptability of a current PTM tool, on the other hand, led to tool-set changes when a new functionality was needed but not offered by the tool. Tool-set changes that involve adding and removing a tool from one’s tool-set can be costly, considering the time spent on finding a new tool and transferring data to it. To reduce the costs associated with such changes, PTM tools should instead be personalizable enough to accommodate changes in PTM behaviors, rather than forcing users to switch tools because they cannot be adapted. Below, we review what contributed to the changes in PTM behaviors and suggest ways in which personalizable PTM tools could better support those changes.

Implication-1: Enable documenting and reporting unmet PTM needs.

We found that in 74% of the reported changes, respondents cited unmet needs and the dissatisfaction caused by them (see Table 3.2) as reasons for changes in their PTM. Although different subsets of these unmet needs are supported by many e-PTM tools, any individual e-PTM tool rarely supports the full set of a user’s changing needs unless it is fully personalizable—capable of expanding its functionality by allowing users to build and add new features. Further, as the number of possible changes in a personalizable tool grows, it might become more difficult for users to even know whether a personalization is possible or how to invoke their desired change. To address this potential challenge in personalizable PTM tools, we suggest that they allow users to report their unmet needs so that others—either other users or the tool developers—could help them find how to make their desired changes.
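As a sketch of what such a report might capture, the code below models an in-tool “I need to…” report and a naive way of surfacing earlier, already-answered reports for a newly expressed need. The schema, field names, and matching heuristic are hypothetical illustrations, not part of any existing PTM tool.

```python
from dataclasses import dataclass, field

# Hypothetical schema for an in-tool "I need to..." report; all fields are
# illustrative assumptions, not taken from an existing PTM tool.
@dataclass
class NeedReport:
    text: str                    # the user's own wording, e.g. "I need to ..."
    ui_context: str              # interface component the report was filed from
    responses: list = field(default_factory=list)  # community/support replies

def find_similar(reports, query, min_shared=2):
    """Naive keyword overlap to surface earlier reports (and their responses)
    that may already answer a newly expressed need."""
    words = set(query.lower().split())
    return [r for r in reports
            if len(words & set(r.text.lower().split())) >= min_shared]
```

A real tool would use better text matching, but even this simple overlap shows how earlier answered reports could be reused to respond to a new one without developer involvement.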
Examples of unmet needs—taken from our data—that could be reported by clicking on a button that says “I need to…” include: “I need to have an overview of all my tasks at a glance, since my task list is getting larger,” referring to the lack of an appropriate view for a large number of tasks, and “I need to see my tasks on a calendar so I know when I’m focusing on what.”

Providing an easy-to-use mechanism for reporting unmet needs could help in several ways: 1) if the reported need is supported without requiring new development, it can be responded to either by a community of users who might have experienced the same need and thus have found ways to meet it, or by the tool’s support team, who can guide the user in how to make the needed change; 2) the reported need will act as a feature request that makes developers aware of users’ unsupported needs so they can build the needed functionality into the system—or into a separate add-on/plugin; and 3) reported needs can also be used in personalization research to better understand how users express their needs, which could inform the design of end-user programming languages or personalization facilities that match users’ ways of expressing their needs. The goal of end-user programming languages and personalization facilities is to empower individuals to build their desired functionality when it is not supported by their tools.

Implication-2: Encourage reflection on and evaluation of PTM behaviors.

We found that the dissatisfaction that led to PTM changes sometimes involved user evaluation of and reflection on their PTM practices. Therefore, encouraging people to reflect on and evaluate their PTM behaviors is beneficial, since that might cause them to make positive changes to their PTM.
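To illustrate how a tool might operationalize such reflection, the sketch below computes a few simple reflective statistics (overdue tasks, recent completions, postponements, time on list) from a task log. The Task record, its field names, and the chosen statistics are illustrative assumptions for this sketch, not features of our prototype or any existing tool.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical task record; the fields are illustrative assumptions.
@dataclass
class Task:
    title: str
    created: date
    due: Optional[date] = None
    completed: Optional[date] = None
    postponements: int = 0  # times the user pushed back the due date

def reflective_stats(tasks, today, window_days=30):
    """Summary statistics a reflective PTM tool might surface to its user."""
    open_tasks = [t for t in tasks if t.completed is None]
    overdue = [t for t in open_tasks if t.due is not None and t.due < today]
    completed_recently = [
        t for t in tasks
        if t.completed is not None and (today - t.completed).days <= window_days
    ]
    return {
        "overdue": len(overdue),
        "completed_recently": len(completed_recently),
        "total_postponements": sum(t.postponements for t in tasks),
        "avg_days_on_list": (
            sum((today - t.created).days for t in open_tasks) / len(open_tasks)
            if open_tasks else 0.0
        ),
    }
```

Shown periodically (e.g., “you postponed tasks N times this month”), such figures could prompt the kind of reflection discussed in this implication.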
In order to encourage people to reflect and make needed changes to their PTM behaviors, we suggest that PTM tools be made reflective (Sengers, Boehner, David, & Kaye, 2005): by making people aware of their PTM behaviors, such tools make them more likely to personalize their tools so that the tools better fit their needs. This can be done with an approach similar to that of quantified-self applications, which track and show individuals’ data to users to induce reflection and encourage behavior change (Rivera-Pelayo, Zacharias, Müller, & Braun, 2012). For example, a PTM system could present information such as the number of overdue tasks since last month, the number of times a task has been postponed, the number of completed tasks, how long each task has been on the list, and possibly even some user-elicited attributes for each completed task, such as how long the task took and the user’s satisfaction rating for how it was accomplished. This task-tracking data could be shown to users to build awareness of how they are spending their time.[14] Presenting such information could encourage people to reflect on and improve their PTM practices, for example by taking on fewer tasks that can be accomplished in their desired time with a reasonable level of satisfaction, thereby reducing stress. In addition, presenting changes in such information can make users aware of changes in their behaviors, and hence make them more likely to reflect. For example, visualizing trends such as an increase in the number of appointments or tasks, which could mean increased busyness, could lead to the use of different views that better support monitoring of a larger number of tasks. Examining which elements of PTM information could encourage reflection, and how they vary across individuals, is an important avenue for future research.

Implication-3: Personalizable PTM tools should support sharing of PTM changes or personalizations.
We found that friends’ recommendations—which we categorized under opportunities—contributed to changes in PTM behaviors by creating awareness of the benefit of a new practice/tool or the limitation of a previous tool/practice. Thus, if personalizable PTM tools expose users to personalizations or changes that other users have made to the tool, users will be able to improve their own PTM practice by learning from those others’ behaviors. One way of exposing users to personalizations made by others is to link each interface component to a list of relevant user-generated personalizations that users can browse through and perhaps vote on (e.g., “like it”).

[14] Current applications such as RescueTime provide the service of tracking how a user is spending time.

An example of a personalization that could be shared—taken from our data—is a desired feature that allows the user to define quiet hours such that she will not receive any reminders/notifications during those hours. If a user added this to her personalizable PTM tool, she could then also share this feature with others—together with her motivation of not getting distracted by reminders when focusing on a single task—using a sharing mechanism provided within the personalizable tool itself. This feature could then be linked to a relevant interface component, such as the reminders’ settings, to enhance discoverability. We explore the design of sharing mechanisms for personalizations in Chapter 5. Designing mechanisms for informing users about potentially beneficial personalizations is an interesting avenue for research in personalization.

We did not discuss the potential benefits—or lack thereof—of changes in PTM behaviors in this thesis, because we did not ask our participants whether the changes they made in their PTM proved to be beneficial or not.
However, the reported reasons appeared to imply that the participants expected to see some benefits as a result of making a change, and that the benefits seemed to outweigh the potential cost of making that change.

3.5 Limitations

Asking people to recall changes in their PTM using a survey questionnaire has limitations. A different approach would have been a longitudinal investigation in which participants are asked to record changes in their PTM as they occur over a period of, for example, one year, and are interviewed at monthly intervals. However, such a longitudinal approach is likely to suffer from the Hawthorne effect—some behavior changes would likely be a result of participating in the study. This effect is especially problematic when studying changes in behavior. The survey approach mitigates the Hawthorne effect in that the reported changes did not come about from study participation. However, the way those changes were framed suffered from the retrospective nature of the survey, which may also have elicited more major changes (tool-set changes) than minor changes (within-tool changes), since major changes are easier to remember. Our follow-up interviews were conducted to partially compensate for this limitation—a subset of participants were asked about their PTM behaviors a year after they reported their behaviors in the survey, and we compared their behaviors objectively. However, half of the follow-up interviews (6/12) were conducted by phone at participants’ request. Phone interviews have their own limitations, since we were not able to pick up on potential changes participants had made to their PTM without being consciously aware of them. Phone interviews also lack context that we could follow up on; the ability to see participants’ PTM tools prompted us to ask more questions in the in-person interviews. As a result, the phone interviews were notably shorter than the in-person ones (12.6 vs. 24.5 min).
3.6 Conclusion

We characterized three different types of changes that occurred in individuals’ PTM behaviors over time: strategy changes, within-tool changes, and tool-set changes. The factors that contributed to these changes were: changing needs, dissatisfaction caused by unmet needs, and opportunities revealing unnoticed needs. To support changes in PTM behaviors over time, we suggest that PTM tools: enable users to document and report their unmet needs, encourage reflection on and evaluation of PTM behaviors, and support sharing of PTM behaviors. We have provided concrete design possibilities on how to achieve each of these and offered suggestions for future PTM research.

Design and Evaluation of a Mechanism for Advanced Personalization

Many applications provide personalization mechanisms through which users can make changes to adapt the system to better fit their needs or preferences. But the available personalizations are often quite basic and cannot support the diversity of user needs, while advanced personalizations often require programming skills. In the research presented in this chapter, we bridge the gap between simple and advanced personalization mechanisms by designing a mechanism that supports authoring of advanced personalizations without requiring the user to code.

Current apps are often limited to basic personalizations such as making simple changes to the visual appearance of interface elements (e.g., changing icons or a background), customizing access to functionality (e.g., adding, removing, or re-arranging commands/buttons in a toolbar, or defining a shortcut), and modifying system behavior by choosing options from a list of predetermined alternative behaviors. More advanced personalizations, such as extending system functionality, are possible through mechanisms such as macros and add-ons, but these mechanisms have limitations.
Recording a macro extends a system’s functionality by encapsulating a sequence of repeated user actions that can be invoked later. But sophisticated macros that add new functionality require users to edit the code generated by the macro recorder, which requires programming skills. Tools such as web browsers enable users to extend system functionality by creating and installing add-ons. However, end users are restricted to using pre-existing add-ons, unless they have the programming skills to develop new ones.

To achieve our goal of designing tools that support advanced personalization, we built on ideas from end user programming (EUP) approaches such as controlled natural languages and sloppy programming (Little et al., 2010), and followed guidelines on designing personalizable tools, such as meta-design guidelines (Gerhard Fischer & Scharff, 2000). We designed a prototype of a personalizable PTM tool with two key components for enabling the creation of new functionalities: 1) a self-disclosing mechanism that reveals system functionality to users and thus makes it easier for users to understand what can be changed, and 2) a guided scripting personalization mechanism (ScriPer) that enables users to construct new features by combining building blocks that are familiar to them. A key difference between ScriPer and similar scripting mechanisms used in automation tools such as Alfred or Inky is that we use a command line interface for creating new behaviors for interface elements, rather than for running predefined commands that are mapped to interface elements, which is the common approach in automation tools. To investigate the strengths and challenges of our self-disclosing mechanism and ScriPer, we conducted a controlled user study.

4.1 Related work

We review the guidelines on designing personalizable tools as well as the EUP approaches that informed our design process.
4.1.1 Guidelines on designing personalizable tools

Henderson and Kyng (1991) looked at the practice of designing in use and described three activities that change the behavior of a technology: choosing between alternative anticipated behaviors, constructing new behaviors from existing pieces, and altering an artifact by modifying its source code. The focus of our work is on constructing new behaviors from existing pieces. One comprehensive set of principles for designing for adaptability is outlined by Moran (2002) as the principles of everyday adaptive design: overbuild infrastructure, under-build features, convey the adaptable quality of a tool as an opportunity to the user, allow for recombining and repurposing (modularity), and make adaptations sharable. Similarly, meta-design provides another comprehensive set of guidelines. Meta-design is a theoretical framework for empowering users to design their own tools by providing them with appropriate tools and opportunities (Gerhard Fischer & Scharff, 2000). Meta-design guidelines include: provide building blocks, under-design for emergent behavior, establish cultures of participation, share control, promote mutual learning and support of knowledge exchange, and structure communication to support reflection on practice. A key requirement common to both sets of guidelines is that software systems provide mechanisms that allow users to create complex personalizations by combining building blocks (modular components), and that systems should be under-designed to promote personalization. In our research, we focus on providing users with building blocks, as well as mechanisms for combining those building blocks to create advanced personalizations in a PTM tool.
While some other design methodologies (e.g., software shaping workshops (Costabile, Fogli, Mussio, & Piccinno, 2004)) include somewhat concrete, practical steps for specific design situations, prior systems that have explicitly employed meta-design guidelines have, to our knowledge, mostly been domain-oriented design environments. Two examples are FRAMER for user interface design (Lemke & Fischer, 1990) and JANUS for kitchen design (Fischer, McCall, & Morch, 1989). In these design environments, the primary user activity was to design; thus the building blocks were “design units” such as the sink and refrigerator in the case of the kitchen designer, and windows and menus in the case of the user interface designer. Identifying the building blocks of a non-design environment, such as a PTM tool where the primary user activity is not to design but to manage tasks, is less straightforward.

4.1.2 End user programming (EUP) approaches

EUP methods often take one of the following approaches: programming by demonstration, visual languages, or scripting. In our work, we focus on the scripting approach. Two approaches to improving a scripting mechanism are: (1) simplifying the format or syntax, and (2) using a scripting editor that ensures the creation of a correct script, often referred to as a structure editor (Cypher, Dontcheva, Lau, & Nichols, 2010; Lieberman, Paternò, & Wulf, 2006). Natural languages (Myers, Pane, & Ko, 2004) take the simplifying-format approach. Sloppy programming is a form of natural language programming that attempts to simplify the format by making programming similar to entering keywords into a Web search engine (Little et al., 2010). Systems such as CoScripter (Leshed, Haber, Matthews, & Lau, 2008) for automating repetitive Web tasks and Inky (Miller et al., 2008)—a web command interface that allows users to automate tasks by entering unstructured text—have taken the sloppy programming approach.
One limitation of this approach is that users might try commands that are not supported (Miller et al., 2008).

The structure editor approach addresses both this limitation and the issue of poor discoverability, which is a limitation of all command line interfaces. A structure editor enables users to create commands by choosing options from menus, and it guarantees that only correct combinations of options are selected. Controlled natural languages (CNLs), a subset of natural languages with restricted dictionaries and grammars for reducing complexity and ambiguity, combine both approaches: simplified formats and a structure editor. While CNLs have been explored for ontology authoring and semantic annotation (e.g., (Bernstein & Kaufmann, 2006; Fuchs, Kaljurand, & Kuhn, 2008; Funk et al., 2007)), they have rarely been explored for the purpose of automation or personalization. Atomate is an exception: it used a CNL interface to enable end-user construction of reactive rules using information sources on the web such as one’s online calendar, email client, and messaging services (M. Van Kleek, Moore, Karger, André, & schraefel, 2010). An example of a reactive rule constructed with Atomate is: “Have Atomate automatically update your facebook status when you are at a concert.” While ScriPer is similar to Atomate in that both use a CNL interface for creating behaviors, they differ in both the type of CNL interface and the usage context. As a result of the difference in usage context, our approach provides finer-grained building blocks as well as integration with the rest of the interface. In addition, our approach allows users to extend the functionality of an under-designed PTM system by changing the behavior of existing UI elements and defining the behavior of new UI elements. Alfred is an automation tool that, similar to Inky, offers a command line for running commands (“Alfred - Productivity App for Mac OS X,” 2016).
Unlike Inky, Alfred supports the creation of new commands, but not through its command line interface: simple commands can be created using a visual programming interface in which users define the flow of data between different apps, while creating advanced commands requires programming knowledge. Unlike Inky and Alfred, the scripting mechanism in ScriPer is for the creation of new behaviors, and using those behaviors—the equivalent of running commands in Inky and Alfred—is done through the GUI elements of our prototype. While some of these EUP approaches have been studied, their effectiveness for people with little to no programming experience has been largely unexplored (Miller et al., 2008; M. Van Kleek et al., 2010). The contribution of our work is in bringing EUP techniques to the context of personalization in PTM, and in providing empirical evidence on the challenges and strengths of using them. We designed and developed a personalizable PTM prototype that includes a scripting mechanism (ScriPer) for creating advanced personalizations. Our design incorporated both the simplified-format and structure-editor approaches by using a scripting language that resembles natural language, and by presenting the space of applicable building blocks (language expressions) that can be used at each step of composing a script. A key difference between our approach and that of automation tools such as Alfred or Inky is that we use a command line interface for creating new behaviors for interface elements using very basic building blocks such as change, move, show, etc., rather than for running predefined commands. Our goal was to design an approach for command creation that does not require programming.

4.2 Meta-designing a PTM tool

Following the guidelines discussed earlier, we had two primary research questions in meta-designing a PTM tool: What are the building blocks of a PTM tool?
And what personalization mechanisms should be provided to users for enabling them to combine those building blocks to 102  create new functionality? Below we describe how we addressed these questions by reviewing our design process.  We developed a prototype of a basic PTM tool that supports basic functionalities such as creating task lists, adding tasks to lists, editing task attributes (e.g., color, due date, reminder), marking tasks as done, and deleting tasks.  4.2.1 Establishing the building blocks of a PTM system Providing users with building blocks is the cornerstone of the existing guidelines on designing personalizable tools  (Bentley & Dourish, 1995; Gerhard Fischer & Scharff, 2000; Mørch, 1995). However, none provided concrete actionable guidelines as to how to come up with the building blocks for a system. To address our first question (what are the building blocks of a PTM system?), we hypothesized that understanding the types of desired personalizations would provide insight into what needs to be modifiable and thus the building blocks of a system. Several types of personalizations (e.g., interface and functionality adaptation (Moran, 2002)) have been identified in the past, in domains other than PTM. However, previous categorizations of personalizations were based on the personalizations that were available in the existing personalizable tools. What we needed, by contrast, were the types of personalizations that were not necessarily available but were desired and needed to be supported for accommodating differences in PTM behaviors both across individuals and over time. Thus, we reviewed users’ various PTM needs reported in our studies in Chapter 2 and prior PTM studies (e.g., (Bellotti et al., 2004)), as well as the feature requests made by users of PTM tools such as Remember The Milk (“Remember The Milk - Forums / Ideas,” 2014) which has one of the most active feature 103  request forums related to PTM. 
See Table 4.1 for examples of user needs, and Table 4.2 for examples of feature requests.

We considered user needs and feature requests as forms of personalizations that users should be able to create. Thus, we treated the words (e.g., task, change, due date) mentioned in the feature requests and user needs as the building blocks of a meta-designed tool, and categorized them into UI elements, actions, interactions, external events, entities, entities' attributes, and attributes' values. Table 4.3 illustrates examples of the building blocks in each of these categories.

Table 4.1 Examples of user needs from prior PTM studies.
- Show my tasks' deadlines on a timeline
- See & select appropriate tasks that can be done in a given time slot
- Filter & show me tasks that were recorded today
- Focus on the current tasks, minimize distraction by other tasks
- Add an icon next to the tasks that [meet a certain condition]
- View task lists & calendar together
- Print tasks that are due today in a particular format
- Set timer on tasks for tracking time
- Color code tasks based on their list / goal
- Strike through tasks when done

Table 4.2 Examples of feature requests in RTM.
- Snooze button for notifications
- A "make current" button that takes all selected overdue tasks and moves them to the present day
- Ask for the date to which to postpone when postponing a task
- Customize reminders for specific lists/tags
- Make 'delete' a button instead of an option in 'more actions'
- Show tasks due today in bold
- Show overdue tasks in the 'Today' tab on the Overview screen

The under-design guideline of meta-design (Gerhard Fischer & Herrmann, 2011) (or "overbuild infrastructure and underbuild features" of Moran (2002)) guided our decision of what actions to include as building blocks.
According to this guideline, 1) building blocks should offer enough functionality to be useful and usable as a unit, and 2) they should not be so complex that users must break them down in order to combine them with other blocks (Gerhard Fischer & Herrmann, 2011). In our design process, before adding a new feature based on a user need, we assessed the feasibility of building that feature from more basic blocks. If it was feasible, we added the new building block instead of the new feature. For example, we skipped adding an 'archive' feature: archiving amounts to moving a completed task to a list called 'archive,' so it could be built by creating such a list and using a 'move' building block, which is more generic than 'archive.'

4.2.2 Creating new personalizations using the building blocks

After reviewing the user needs and feature requests to identify the building blocks, we decided to focus on designing personalization mechanisms for two classes of personalizations: the first is adding a new feature to the system that can be invoked by interacting with a new interface element (e.g., a button or a menu item); the second is modifying the effect of an existing user interaction by adding new behaviors to it or changing its current behavior. Both classes involve trigger-action programming; we therefore compare the results of our study to other studies of trigger-action programming.

Table 4.3 Categories of building blocks in a PTM system.
Category: Examples of building blocks
UI element: Button, checkbox
Action: Change, ask me for [data], show, move, remove
Entity: Task, list, reminder
Attribute: Color, due date, type, status, importance, etc.
Value: Gray, tomorrow, long-term, done, high, etc.
Interaction: Click, right click, double click, drag, drop, hover
External event: Closing a web page, starring an email, etc.
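To make the shared trigger-action structure concrete, the sketch below models a personalization as a trigger bound to a list of actions drawn from the block categories in Table 4.3. This is an illustrative data model only; all class and field names are our own assumptions, not taken from the prototype's implementation.

```python
# Illustrative trigger-action data model for personalizations.
# All names here are hypothetical, not taken from the prototype.

from dataclasses import dataclass, field

@dataclass
class Trigger:
    interaction: str   # e.g., "click" (Interaction category)
    ui_element: str    # e.g., an "Archive" button (UI element category)

@dataclass
class Action:
    verb: str          # e.g., "move" (Action category)
    args: dict = field(default_factory=dict)  # entities, attributes, values

@dataclass
class Personalization:
    trigger: Trigger
    actions: list      # actions run when the trigger fires

# The 'archive' example: no dedicated feature is needed, since the
# behavior can be composed from the generic 'move' block and a list.
archive = Personalization(
    trigger=Trigger(interaction="click", ui_element="Archive"),
    actions=[Action(verb="move",
                    args={"objects": "all selected tasks",
                          "to_list": "archive"})],
)
```

The same structure covers both classes of personalizations: the first class creates a new Trigger (a new button), while the second attaches additional actions to an existing one.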
While both classes require a mechanism for defining a new behavior, the first class involves creating a new interface element and attributing the new behavior to an interaction with it, whereas the second involves attributing the new behavior to an existing interaction. Below, we describe how we designed for these requirements. Figure 4.1 shows a screenshot of the personalizable PTM tool we developed; each element of the figure is explained in the upcoming sections.

4.2.2.1 ScriPer: scripting for personalizing

To allow users to combine the building blocks to create a new behavior, we designed ScriPer (Scripting for Personalizing), a guided scripting mechanism. ScriPer allows users to create a script representing their desired behavior by choosing from a list of suggested building blocks that is updated based on the building blocks selected so far. ScriPer starts by suggesting a set of action building blocks (Figure 4.1.6), each of which has its own grammatical template. For example, the 'change' action block has the following template:

[1 change] [2 objects' attributes to] [3 attributes' values] [4 for (all) objects (that)] [5 objects' attributes] [6 attributes' values]

The numbers represent the order of ScriPer's suggestions for the 'change' block. After the user chooses 'change,' ScriPer suggests all the attributes of all the objects in the system, to ask what the user wants to change. Once an attribute is selected, it suggests all the objects that have that attribute (e.g., 'all the selected tasks' or 'tasks that'). If the user chooses objects such as 'tasks that,' for which conditions must be specified, ScriPer suggests conditions in two steps: first the attributes of the objects on which the user wants to apply a condition, and then the possible values of the selected attributes. We chose the order of ScriPer's suggestions such that a complete script forms a correct English sentence.
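The stepwise guidance described above can be sketched as a walk over an ordered slot template: the options offered at each step come from the first unfilled slot, and an empty option set signals a complete script. This is a deliberately simplified illustration (here the value suggestions do not depend on the chosen attribute, and conditional objects are omitted); the names are our own, not ScriPer's.

```python
# Simplified sketch of template-guided script composition.
# A real implementation would make later slots depend on earlier
# choices (e.g., values would depend on the selected attribute).

CHANGE_TEMPLATE = [
    ("action",    ["change"]),
    ("attribute", ["tasks' due date to", "tasks' color to"]),
    ("value",     ["tomorrow", "Red"]),
    ("objects",   ["for all selected tasks", "for all tasks"]),
]

def next_suggestions(filled):
    """Options for the first unfilled slot; empty when complete."""
    if len(filled) >= len(CHANGE_TEMPLATE):
        return []  # complete: show the 'and' and 'Save' buttons
    _, options = CHANGE_TEMPLATE[len(filled)]
    return options

# Composing "change tasks' due date to tomorrow for all selected tasks"
# by always picking the first suggestion at each step:
script = []
while next_suggestions(script):
    script.append(next_suggestions(script)[0])

print(" ".join(script))
# The finished script reads as an English sentence.
```

Filtering by typed keywords would simply restrict the option list returned at each step without changing the slot order.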
This order was chosen to increase accessibility to non-programmers, at the cost of being contrary to mainstream programming paradigms (e.g., object-oriented programming, where objects come before actions). Figure 4.2 illustrates the authoring of a personalization that involves the use of the 'change' block. See Appendix D for screenshots of the step-by-step process of authoring an example of advanced personalization.

Figure 4.1 The prototype in personalization mode, hence the gray overlay (1). The plus button (7) is only displayed in personalization mode. In this screenshot the 'Mark as done' button has been clicked, and thus the panel (2) is showing the effects (4) of that click event (3). The user is adding a new effect to the 'Mark as done' button by clicking on "Add a new action/effect" (5), which has invoked the ScriPer window (6). ScriPer starts by suggesting a set of action building blocks. Next to each action block are examples of using the block, to familiarize the user with it.

The 'move' block has a slightly different template:

[1 move] [2 (all) objects (that)] [3 objects' attributes] [4 attributes' values] [5 to (day)(list)(position in a list)] [6 (day's values) (list names) (positions' values)]

ScriPer suggests values for an attribute based on the attribute's type. For example, if the user selects an attribute such as 'due date,' whose type is date, ScriPer shows a list of dates such as "today" and "tomorrow," as well as a 'pick a date' block that, when chosen, opens a date picker. Figure 4.3 illustrates this for the reminder's time attribute.

Figure 4.2 An example of using ScriPer's guided scripting mechanism. Here, a user is adding a "Mute reminders" button to postpone reminders within a user-defined period. She has already created the button (trigger, marked as 2) and the first part of its behavior, which is to ask for "start time" and "end time" (labeled as [custom command-1], shown in blue (3)).
The screenshot captures the creation of the second part of the script (4). The script is composed by selecting one of the options from ScriPer's suggestions at each step (5). ScriPer starts by suggesting a set of actions, and updates its suggestions based on the completed part of the script. Typing in the textbox filters the suggestions. Clicking on the arrows (6) cycles through usage examples of the suggestions.

When the script inside the textbox is complete, ScriPer shows two buttons, 'and' and 'Save' (Figure 4.4), to signal to users that they can either add another script or save the current one. ScriPer is implemented as a modal pop-up window that can be invoked either through a self-disclosing mechanism (DiGiano & Eisenberg, 1995) or by clicking on a newly created interface element (e.g., a button) in the "personalization mode"; we describe the purpose of each approach next.

Figure 4.3 Working with an attribute whose data type is time. To select a value for a reminder's time, (1) the user chooses 'pick a time'; (2) ScriPer shows a time picker for the user to select a time; (3) after picking 7 AM and pressing done, ScriPer adds the picked value to the script.

Figure 4.4 A grammatically correct script.

4.2.2.2 Personalization mode

To allow users to create new interface elements, we distinguish between the main mode, where all the regular PTM-related activities take place, and the personalization mode, where personalization-related activities such as adding a new button or a menu item happen. Modes are switched by clicking the 'Personalization'/'Exit personalization' button (Figure 4.1.1). Personalization mode adds a gray overlay to the main interface and disables all the regular PTM-related interactions, such that user interactions show their expected effects in a panel (through the self-disclosing mechanism described next) rather than being executed.
To add a new button, users click on the plus button next to the other buttons (Figure 4.1.7), name the button, and then click on it to define its behavior using ScriPer.

4.2.2.3 A self-disclosing mechanism

The second class of personalizations that our prototype supports is modifying the effect of an interaction with an existing interface element (e.g., the effect of clicking on a button) by either adding a new effect or replacing an existing one. To support such personalization, following one of Moran's principles of everyday adaptive design (Moran, 2002), we first needed to convey the adaptable quality of user interactions, so users know they can change the effect of their interactions. To do this, we display the effect of an existing interaction, building on the idea of self-disclosing systems, which disclose their behaviors to users (DiGiano & Eisenberg, 1995). Whenever a user interacts with an interface element, the interaction (Figure 4.1.3) and its effects (Figure 4.1.4) are displayed in a fixed panel at the bottom of the page (Figure 4.1.2). The interaction is shown as a trigger, and its effects are shown as actions. The background color of the panel changes to blue for one second when the user interacts with an interface element, to clarify the connection between her interaction and what is being displayed in the panel. The panel appears in both the main and the personalization modes, and its display can be toggled. The effects of an interaction, displayed as actions, can be modified via the ScriPer window. New effects can be constructed and assigned to the displayed trigger by clicking the "Add a new action/effect" button (Figure 4.1.5), which invokes ScriPer (Figure 4.1.6).

4.3 Controlled user study

We conducted a controlled user study in which people with various levels of programming experience used the tool to perform a set of predetermined personalization tasks.
The goal of our study was twofold: (1) to evaluate our design decisions and understand the strengths and potential challenges involved in using the two components of our meta-designed tool, ScriPer and the self-disclosing mechanism, for personalizing software, and (2) to assess the effect of programming experience on the ability to perform personalization tasks.

4.3.1 Participants

Twenty-four participants completed the study (13 females). Participants were recruited by posting signs around the University of British Columbia campus, as well as by emailing different departments. Participants were told that the experiment was about evaluating a to-do management application. Participants ranged from 21 to 31 years of age. They were all university students, and only six (three females) were from the computer science department. Prior to signing up for the experiment, interested participants filled out a short questionnaire (Appendix E.1) describing their programming expertise and rating it on a scale of 1-3 (1: little to no programming background; 2: some programming background; 3: proficient in programming). While our design targeted the first two groups, we included the third group for comparison. We were able to recruit eight participants in each expertise category. For statistical analysis, we wanted to balance gender within each category, but there was a shortage of interested males with no programming background and of interested females who were proficient in programming, despite waiting several extra weeks to fill the recruitment quotas. In the end we had six females with little to no programming background, four females with some background, and three females who were proficient programmers. Participants received $15 for their participation. In the rest of this chapter, participants are referred to by their gender (M/F) + expertise (N/S/P) + a number (1-8).
For example, a male proficient programmer is referred to as MPx, where x is between one and the number of participants in that category.

4.3.2 Tasks

Evaluating personalization mechanisms in a relatively short lab study is not straightforward. Designing personalization tasks that are ecologically valid and that motivate participants' need to personalize (even in an artificial setting) requires special attention. To maintain ecological validity, we reviewed user needs from prior PTM studies as well as the feature request forum of a PTM tool, Remember the Milk. Table 4.4 shows examples of user needs and feature requests that influenced our tasks. Based on those examples, we designed six personalization tasks, all of which could be performed using our prototype.

Four of the tasks involved creating a new button and defining its behavior. For each of these tasks, we designed a group of two tasks, henceforth a task group (TGi). The second task in each group was a personalization task, which explicitly asked participants to create a button that performs a desired personalization. The first task was to perform manually what the button would do, prior to performing any personalization. The first task involved some repetition, which was meant to motivate the need for the personalization; creating such motivation is often challenging when designing personalization tasks for lab studies. Another goal of the first task was to familiarize participants with the goal of the personalization they were asked to perform in the second task. The task groups TG1, TG2, TG5, and TG6 were designed this way (see Table 4.4).

The remaining task groups (TG3 and TG4) involved using the self-disclosing mechanism. TG3 included three tasks, and TG4 included four. The last task in each of these groups was a personalization task that asked participants to change the effect of an interaction with already-existing interface components (e.g., the 'mark as done' button).
The other tasks in each group were designed to familiarize participants with that interaction and its current effects. In those tasks, participants performed an action (e.g., marking a few tasks as done) and explained its current effect.

4.3.3 Procedure

First, to familiarize participants with the system, we walked them through performing a personalization task. Next, they were given all the tasks, one at a time, in the order shown in Table 4.4.[15] Participants were asked to think aloud while performing the tasks. There was no time limit. The screen and the audio were recorded. After finishing the tasks, participants were asked about their experience with ScriPer and the self-disclosing mechanism, both in a semi-structured interview (Appendix E.2) and in a post-questionnaire (Appendix E.3). The sessions took 42 minutes on average (min=20, max=60).

[15] We intentionally chose a fixed order for the tasks because we were aware that there might be some carry-over effects between tasks, and we did not have sufficient participants to run a fully counterbalanced experimental design. Had we varied the task order (perhaps through randomization), carry-over effects could have differentially impacted participants' performance on each task. Our primary interest was to compare performance on each task between participants, rather than to compare task-to-task performance differences.

Table 4.4 Tasks used in the user study. Participants were given one task at a time. TG3 and TG4 involved the use of the self-disclosing mechanism.

TG1
T11: Change the due date of the following tasks to tomorrow: "finish paper review," "learn javascript," "do yoga"
T1P: Imagine that you'd like to do the previous task for a whole bunch of tasks and that you might need to do this again in the future.
For this situation, you decide to create a button (called 'postpone to tomorrow') that when you click on, the system modifies the due dates of the to-dos that you have selected to tomorrow.

TG2
T21: Find tasks that are overdue (i.e., due before today) and change their color to red so that you won't miss them.
T2P: To save time on the previous task in the future, you decide to create a button called "Highlight overdues" that when you click on, it automatically turns the overdue tasks into red.

TG3
T31: Mark the following tasks as done to indicate that you are done with them: (Code, do yoga)
T32: What did the system do when you pressed the 'Mark as Done' button? (please explain)
T3P: Imagine that you would like the system to move your tasks to the bottom of the list when you are done with them, in addition to crossing them off. So, make the system do that.

TG4
T41: Create a list called "tomorrow".
T42: Add the following new tasks to the tomorrow list and set their due dates to tomorrow: "buy bread," "register," "return book"
T43: What does the system do when you add a task to a list? (please explain orally)
T4P: In addition to adding the task to the bottom of the list, make the system set the tasks' due date to tomorrow by default when you enter a task in this list.

TG5
T51: You do not want to disturb your sleep by the automated task reminders that are sent when you are asleep. So, find tasks whose reminders are set to be sent out between 10 pm and 7 am and postpone them to 7 am.
T5P: Imagine that there are other times that you'd like to define quiet hours, so that you tell the system a time period in which you don't want to receive any reminders and the system postpones sending the reminders to the end of that period.
In this situation, you decide to create a button called 'mute reminders' that when you click on, the system asks you to enter the time period and then changes the reminders that are supposed to be sent out within that period such that they will be sent out at the end of that period.

TG6
T61: Today, you want to focus on the following three tasks (Read paper, learn javascript, Finish paper review). Gray out the rest of your tasks so they don't distract you.
T6P: Imagine that you'd like to do the previous task again in the future. For this situation, you decide to create a button called 'Focus' such that when you select the tasks that you want to focus on and click on the 'Focus' button, it makes the other tasks gray.

4.3.4 Data analysis

We collected usage log data (the personalization scripts and task completion), the screen recordings, the interview data, and notes on the participants' actions and comments. The screen recordings and the think-aloud transcriptions were coded for the number and type of mistakes, and for whether participants reused the personalizations they created in subsequent tasks. The semi-structured interviews were transcribed and coded for the strengths and challenges of different parts of the design. We ran mixed-model regressions to analyze participants' success in completing the tasks without mistakes and the number of mistakes they made. In the regression models, we included the fixed effects of the number of tasks already attempted, programming expertise, gender, and age, as well as the random effects of the task being attempted and the participant.

We only analyzed the personalization tasks (T1P-T6P) from the task groups. Some participants personalized even in the non-personalization tasks, in which they were neither explicitly asked nor expected to personalize, i.e., the first task in TG1, TG2, TG5, and TG6.
Some of them then skipped the personalization task in the task group, recognizing that they had already performed it. In these cases, we considered their first tasks to be their personalization tasks.

4.4 Findings

4.4.1 Task completion and mistakes made

Except for two participants who gave up on T5P, participants completed all personalization tasks, albeit some with mistakes (given the iterative nature of writing a script, only uncorrected mistakes were counted as mistakes). A mistake meant that the solution was either a slightly or a completely different personalization than the intended one. Out of the 144 (24x6) trials of the six personalization tasks, only two were left incomplete, and 94 were completed successfully with no mistakes (Figure 4.5). In the remaining 48 trials, 54 mistakes were made in total. The number of mistakes made in a single task ranged from one to three, with only one participant ever making three mistakes on a task (T5P). Table 4.5 shows the distribution of mistakes across the tasks. Below, we describe how we counted and categorized the mistakes.

Figure 4.5 Number of participants who successfully completed each task with no mistakes (N=24 participants).

We grouped similar mistakes and labeled the four emergent groups as: lack of precision, terminology related, mental model mismatch, and wrong trigger. In 41% of the mistakes, a wrong block was chosen due to lack of precision, e.g., choosing 'for all tasks' instead of 'for all selected tasks,' or 'yesterday' instead of 'before today.' Examples of terminology-related mistakes (37%) were using 'completion date' instead of 'due date,' or 'time' instead of 'date.' Mistakes due to changing the effect of a wrong trigger (18%) were most common in the tasks that required use of the self-disclosing mechanism (T3P, T4P).
For example, when performing T4P, which required attributing a new effect to the 'add' button, some participants added the new effect to an irrelevant trigger displayed through the self-disclosing mechanism, because that irrelevant trigger happened to be their last interaction with the system.

Table 4.5 Correct personalization for each personalization task, and the number of participants who made 0, 1, 2, or 3 errors in their scripts when performing each task. Each correctly selected building block of a personalization is shown within a bracket. The last column (--) shows the number of participants who left the task incomplete.

T1P: [When clicked on "Postpone"], [Change] [tasks' due date to] [tomorrow] [for all selected tasks]
  Mistakes 0/1/2/3/--: 17 / 7 / 0 / 0 / 0
T2P: [When clicked on "Highlight overdues"], [Change] [tasks' color to] [Red] [for tasks that] [their due date is] [before] [today/now]
  Mistakes 0/1/2/3/--: 10 / 12 / 2 / 0 / 0
T3P: [When clicked on "Mark as done"], [Move] [all selected tasks to] [location in the list:] [bottom of the list]
  Mistakes 0/1/2/3/--: 20 / 4 / 0 / 0 / 0
T4P: When clicked on "add", [Change] [tasks' due date to] [tomorrow] [for all tasks in this list]
  Mistakes 0/1/2/3/--: 12 / 11 / 1 / 0 / 0
T5P: When clicked on "Mute reminders", [Ask me for] [a Time called "start"], [a Time called "end"], [Change] [reminders' time to] [*end*] [for reminders that] [their time is] [between] [*start*] and [*end*]
  Mistakes 0/1/2/3/--: 13 / 7 / 1 / 1 / 2
T6P: When clicked on "Focus", [Change] [tasks' color to] [Gray] [for all unselected tasks]
  Mistakes 0/1/2/3/--: 22 / 2 / 0 / 0 / 0
Total: number of trials = 144; 94 with no mistakes, 48 with one or more mistakes, 2 incomplete.

Finally, only 4% of the mistakes were due to a mental model mismatch, such as a mismatch between the functionality of a building block and what users expected it to do. For example, three participants made an unnecessary use of the 'show' block in T21 (where they were asked to find tasks that were overdue and change their color to red) before using the 'change' block.
For example, FN4 created the following two scripts: "Show tasks that their due date is before today on calendar" and "Change tasks' color to Red for tasks that their due date is before today." All three of those participants mentioned "I probably didn't have to use 'show'" right after using the 'change' block.

Figure 4.6 illustrates a breakdown of mistake types across programming expertise. Compared to programmers, participants with no to some programming background made disproportionately more mistakes due to lack of precision and to choosing a wrong trigger.

Figure 4.6 Breakdown of mistakes across types and programming expertise (N=54 mistakes).

To examine whether the number of mistakes was associated with programming expertise and the other aforementioned factors, we ran a Poisson mixed-model regression. Also, to analyze success in completing a task (0 mistakes vs. 1 or more), we ran a logistic mixed-model regression. Neither of these analyses identified any significant predictors.

4.4.2 Unexpected personalization behaviors

Personalizing when performing non-personalization tasks: As mentioned, some participants performed personalization in tasks where they were not directly asked to do so (T11, T21, T51, and T61). Out of the 96 trials of these four tasks, 57 were done by creating a personalization. When asked about their choice, participants mentioned one of the following: 1) they were aware of the manual method but thought that the task involved "too much hassle" if done manually, 2) they never would have thought that they should do the task manually, or 3) they could not figure out how to do the task without personalizing. In an extreme case, FS4 couldn't find the 'Mark as done' button when she was asked to mark two tasks as done in T31, and she created a button called 'completed.' There was no significant predictor, based on a logistic mixed-model regression, for whether or not a participant personalized in these tasks.
Creating more generalizable personalizations than asked for: Not only did some participants personalize for non-personalization tasks, but four participants (MP2, MS2, MP3, MS4) even created more generalizable personalizations than required. Instead of the simpler anticipated solution of "Change tasks' due date to tomorrow for all selected tasks" for T11, they combined the following two scripts: "ask me for a Date called 'dateChange'" and "change tasks' due date to *dateChange* for all selected tasks."

Reusing a personalization created in prior tasks: Some participants reused their personalizations without being instructed to do so. Task T42 asked the participants to add three tasks to a list and set their due dates to tomorrow. To save time, five participants (MP3, FP2, MP5, FN5, FN6) reused the 'postpone to tomorrow' button that they had created in T1P, instead of manually changing the due dates of the three tasks. We expect to see such reuse behavior when users build their own personalizations in a real-world setting, and it was reassuring to see it in the lab setting.

4.4.3 ScriPer: strengths and challenges

4.4.3.1 Flexibility of the system

For each personalization task, we had anticipated a single solution, but participants performed some of the tasks differently. This showed the system's flexibility in supporting different ways of expressing a feature.
For example, for T3P, where they were asked to add a new effect to the 'Mark as done' button such that it moves the tasks to the bottom of the list, three participants performed the task with the script "change tasks' location to the bottom of the list for all selected tasks" instead of our anticipated solution, "move all selected tasks to location: bottom of the list." MP5, who used Todoist (a dedicated PTM tool), was eager to provide suggestions for improving the PTM support of our prototype before realizing that he could achieve some of his suggestions through personalization: "that's the nice thing about the system, you can always edit and do anything you want. So for example, I want 'Mark as done' to move all of this into a 'done' list. I can easily do it with personalization." He then went ahead, changed the effect of the 'Mark as done' button, and said: "So, it's hard to criticize the system because you can create anything you want. Like if you have anything missing, it's like a plugin, you can just create it."

Some participants speculated about the potential usefulness of ScriPer in other apps: "I like that you can construct features. Other apps don't do that. The gray out thing [referring to T6P], I have an app similar to this but it doesn't do the gray out…you can prioritize them [your tasks], but you have to do it individually, you can't say all these ones are priority ones together" [FS1]. MP2 could see ScriPer being used for macro creation in spreadsheets: "Instead of having to record your macro you can actually do something that's a bit more plain text [as in ScriPer] it's really frustrating to make those macros; they are always very strict…they won't allow any easy process."

4.4.3.2 Overall ease of use

Overall, participants liked the concept of creating their own features and found ScriPer easy to use for the most part.
Even programmers appreciated not having to code for the purpose of personalization: "I loved the feature construction window. I'm a programmer but I don't like doing it when I don't have to especially for something like personalization" [MP1]. One participant commented on the value of the save button in ScriPer: "I liked the fact that when you want to save you need to complete every step. It's not like you complete half the steps and you can save it, that didn't work. It's giving you some feedback that you are right" [MN2].

4.4.3.3 Findability of the building blocks among the suggestions

Participants took different approaches to finding their desired block among the suggested blocks: some typed a keyword to filter the suggestion list, while others visually scanned the list to see what fit. In cases where the list of suggestions was long, participants who filtered seemed to find their desired block faster, based on our observations (the time to select a suggestion was not logged). However, the filtering behavior led some participants to make precision-related mistakes or to create less efficient scripts, as they settled for the first best-matching block and did not find the correct block, which had been filtered out. For example, MP3 performed T6P inefficiently: while the intention was to gray out the unselected tasks, he created the following two scripts: "change tasks' color to gray for all tasks" + "change tasks' color to green for all selected tasks." He missed the option "for all unselected tasks" when writing the first script. Some participants preferred scanning the options rather than typing and filtering, because they anticipated a potential mismatch between their own vocabulary and that of the system: "there wasn't so much stuff that I thought that [the filter] was necessary. I also realized I was scared I'd miss something if I didn't type it as exactly the way it was typed" [MS4].
4.4.3.4 Match between the order of blocks and users' expectations

ScriPer imposes an order in which users are expected to express their desired personalization. While the majority of the participants mentioned that the order made sense to them, 7/24 pointed to a mismatch between the order in which they thought about the personalization and the order in which the blocks were suggested. All seven of those participants wanted to first identify objects and only then apply an action to them; however, in our prototype, the action building blocks were suggested first. For example, FP3 said: "I like how it constructs it for you. But sometimes I felt like I have to think about the order of how to construct things in a certain way. First I need to select and then I need to do change whatever it was. I think it's more helpful to have the system help you construct it like this, but at the same time the rigidness made it hard to figure out." However, many participants (without prompting) pointed to the learning curve in getting to know the order of ScriPer's suggestions, noting that by their last tasks they knew what suggestions to expect and when.

4.4.3.5 Difficulty in creating composite data types

The 'ask me for' block was designed to be used for instructing the system to ask for data (e.g., a time, a text). Part of T5P involved using this block to instruct the system to ask users to enter a 'time period'—a composite data type. Figure 4.7 illustrates the steps involved; see Appendix D for complete screenshots. Following the under-design guideline, we chose not to include composite data types such as 'time period' as building blocks, since they could be built using more basic blocks such as time. Thus, in T5P participants were expected to instruct the system to ask them for two time inputs, which proved to be difficult for some of the participants with little to no programming background. Two of the participants gave up completing T5P.
For example, when performing this task, FP2 thought aloud: "I want the system to ask me for a range but this is only asking for one time. I don't know how to enter a time period."

Figure 4.7 Steps involved in using the 'Ask me for' block to perform part of T5P.

4.4.4 Self-disclosing mechanism: strengths and challenges

The panel that disclosed the system behavior was available in both the main and the personalization modes. Most participants liked having it in the main mode. MS2 supported our design decision to have it in both modes: "that's kinda cool because it visually shows you what's being done for whatever is pressed live, whereas in personalization [mode] you have to go and click and see."

However, as mentioned earlier in the findings on mistakes, some participants had difficulty finding the right trigger, and made mistakes by attributing a new behavior to a wrong trigger (10/54 mistakes). The triggers shown in the panel are updated on each user action. Thus, to change the effect of an action through the self-disclosing mechanism, a participant has to first perform the action so that the panel displays it as a trigger. If done in the personalization mode, the action and its expected effects are displayed without being executed. However, participants who performed an action in the main mode just to make the panel display the right trigger had to undo the effect of their action. One approach to alleviating this issue is to show a history of user interactions in the panel, instead of only the last interaction, and to allow users to choose their desired trigger manually from the panel without going through the process of performing and undoing an action, or having to switch to the personalization mode.

In addition, there was a mismatch between how the system set triggers and some of our participants' mental model of it.
In our prototype, triggers are general actions such as "when clicked on the checkbox next to a task." To attribute a new behavior to a trigger such that the behavior is only applied to certain types of objects (e.g., tasks that are in the 'shopping' list), participants had to specify those objects when constructing the new behavior, not when setting the trigger. However, some participants expected to first define a more specific trigger, such as "when clicked on the checkbox next to a task in the 'shopping' list," and then to add a behavior to it. They therefore avoided changing the effect of a general trigger: "I was a bit scared of using the panel because it applies to very general actions… If I click on a task and then add an action I say oh my God I'm gonna screw up every time I click on that very generic action. I'll leave that for really general behaviors unless there are some sort of filtering built into that" [MP3]. To resolve this issue, triggers need to be editable so that users can add conditions to them.

4.5 Discussion and conclusion

To identify the building blocks of a PTM system, we reviewed user needs and feature requests of an existing PTM application. Our high-level categories—UI elements, actions, interactions, external events, entities, entities' attributes, and attributes' values—provide a practical starting point for designers of personalizable tools to identify the building blocks of a system. We found the under-design guideline helpful in deciding whether to add a new feature. It led us to include more basic building blocks instead of adding more features or composite blocks. This approach increases the number of possible features users could build. However, our decision not to include the composite data type of 'time period'—since it could be built using two times—did not work out for some participants.
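The two-time composition that T5P required can be sketched as follows. This is a hypothetical illustration of the under-design idea, not ScriPer's actual implementation; all names (AskMeFor, time_period) are invented:

```python
# Illustrative sketch: under-design provides only a basic 'time' block,
# so a composite 'time period' must be assembled by the user from two
# separate 'ask me for' prompts. Names here are invented for illustration.

from dataclasses import dataclass
import datetime

@dataclass
class AskMeFor:
    """A basic 'ask me for' building block for a single data type."""
    prompt: str
    data_type: type

# No 'time period' block exists; the user chains two basic time prompts.
ask_start = AskMeFor("Enter a start time", datetime.time)
ask_end = AskMeFor("Enter an end time", datetime.time)

def time_period(start: datetime.time, end: datetime.time):
    """Composite 'time period' assembled from two basic time values."""
    return (start, end)

period = time_period(datetime.time(9, 0), datetime.time(17, 0))
```

The extra assembly step is exactly what tripped up some non-programmer participants: the composite concept ('a range') never appears as a single selectable block.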
Thus, under-design decisions need to be tested carefully to ensure that composite blocks can be built intuitively, especially by people with no programming background.

ScriPer is one possible design of a mechanism for combining building blocks to create advanced personalizations, and our preliminary evaluation shows that it is promising. One of the most encouraging results is that many participants intuitively personalized even when not required to. Out of the 96 trials of four of the non-personalization tasks, 57 were done by creating a personalization. Further, most users were able to create advanced personalizations when instructed to. On the downside, however, ScriPer does not prevent the user from making mistakes, and indeed about one third of the created personalizations were either slightly or completely different from the intended ones. Part of the issue is the user's ability to easily spot a mistake. Our evaluation did not include having participants use our tool with their own tasks, something that would likely have highlighted any mistake quickly. Further investigation is needed to see how well users are able to recover from their mistakes. Beyond recovery, it is important for a personalization mechanism to limit the possibility of making a mistake in the first place. Mistakes due to choosing a wrong trigger in the self-disclosing mechanism were related to not noticing the trigger part in the panel. These mistakes might indicate an inadequate understanding of trigger-action programming among our participants with no programming background, which is contrary to the findings of Ur et al., who found that average users can successfully engage in trigger-action programming (Ur, McManus, Pak Yong Ho, & Littman, 2014).
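The trigger-action personalizations discussed here can be represented minimally as (trigger, action) pairs. The following sketch is illustrative only; the rule store and task-list behavior are invented for this example and do not reflect the prototype's code:

```python
# Minimal trigger-action representation (hypothetical names throughout).
# A personalization attributes a new behavior (action) to a trigger, and
# performing the triggering interaction fires every attributed behavior.

rules = []

def add_rule(trigger: str, action):
    """Attribute a new behavior (action) to a trigger."""
    rules.append((trigger, action))

def fire(trigger: str, task: dict):
    """Run every action attributed to the given trigger."""
    for t, action in rules:
        if t == trigger:
            action(task)

# "When clicked on the checkbox next to a task, mark it done and
# move it to the bottom of the list."
tasks = [{"name": "buy milk", "done": False},
         {"name": "email Bob", "done": False}]

def mark_done_and_move(task):
    task["done"] = True
    tasks.remove(task)
    tasks.append(task)

add_rule("clicked checkbox next to a task", mark_done_and_move)
fire("clicked checkbox next to a task", tasks[0])
# "buy milk" is now done and at the bottom of the list
```

A mistake of the kind observed in the study corresponds to calling add_rule with the wrong trigger string: the behavior is well-formed but never fires when the user expects it to.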
To avoid mistakes due to choosing a wrong trigger, the self-disclosing mechanism should emphasize the trigger part and ask users—once they are done with creating a personalization—to confirm that their personalization is attributed to the right trigger. Although the difference was not statistically significant, participants with no to some programming background made disproportionately more mistakes than programmers due to lack of precision, i.e., mistakes such as choosing 'yesterday' instead of 'before today'. One approach to reducing the possibility of such errors is to suggest other options that are conceptually similar to what the user has selected or is about to select. This can be done, for example, by highlighting those similar options when a user is about to select an option. This approach requires designers to determine clusters of conceptually similar building blocks, which adds a step to the design process. Alternatively, a data-driven approach may be adopted by tracking how errors are made and then corrected.

One usability issue with ScriPer was related to the order it imposed on using the blocks. Action building blocks such as change and move were so basic (i.e., low-level) that they did not necessarily correspond to any user interaction or interface element in the system. Therefore, ScriPer had to be able to provide specific suggestions for each step of composing a script to compensate for users' lack of familiarity with the actions and their parameters. Thus, we chose to impose an order for combining building blocks so that the number of suggestions at each step would be manageable for users. The order allowed the personalization scripts to form an English sentence and provided the benefit of knowing which building block should be selected at each step. However, it did not match some participants' preferred order. Part of the problem was a lack of visibility of the next steps, which was partly due to their dependence on the user's prior selections.
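The dependence of each step's suggestions on prior selections can be sketched as a lookup from a partially composed script to its permissible continuations. The miniature grammar below is hypothetical and far smaller than a real system's vocabulary:

```python
# Illustrative sketch of step-wise suggestion narrowing: the blocks
# offered at each step depend on the user's prior selections, so the
# later steps are invisible until the earlier ones are made.
# The grammar below is invented for illustration.

suggestions = {
    (): ["change", "move"],
    ("change",): ["tasks' color", "tasks' location"],
    ("move",): ["all selected tasks"],
    ("change", "tasks' color"): ["to gray", "to green"],
    ("change", "tasks' color", "to gray"):
        ["for all tasks", "for all unselected tasks"],
}

def next_blocks(script: tuple) -> list:
    """Return the blocks that may follow the partially composed script."""
    return suggestions.get(script, [])

# Only after choosing "change" does the user learn which attributes exist,
# which is the visibility problem discussed above.
first_step = next_blocks(())
after_change = next_blocks(("change",))
```

Keeping each suggestion list short is what made the imposed order necessary, at the cost of hiding the overall shape of the script from users who think "objects first, then action."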
An alternative approach is to show all the steps to users and let them choose the order in which they complete each step. However, that might make the interface crowded and confusing. Systems that use higher-level building blocks that are more familiar to users can also support a flexible order. Inky is an example of such a system, since it replaces a GUI with a command-line interface, where users are likely to be familiar with the available commands and their parameters.

Similar to our prototype, even a fully developed personalizable PTM tool can be limited in its coverage of primitive building blocks. To overcome this, users should be able to add building blocks to the system, but this could be challenging. For example, adding an action building block such as 'hide' requires determining all the possible building blocks that can be combined with it, a template to represent the arrangement of the building blocks, and the underlying functionality associated with the block. Although the first two can be achieved by people with no to some programming background, defining the underlying functionality associated with an action block perhaps needs to be left to programmers, who can then share it with others.

Understanding Online Personalization Sharing

More and more users are taking advantage of software customizability[16] to expand software's capabilities through additional features or to enable personalized workflows. Most of these users are benefiting without having the required skills or the time to create these customizations on their own. Instead, they are adopting customizations made by others, through plugins and other mechanisms. For example, 85% of Firefox users have chosen to customize by installing add-ons (Scott, 2011).
This phenomenon has been enabled by 1) software applications that are designed as open platforms offering public APIs, thus allowing developers to create plugins and cross-application customizations using tools like IFTTT (ifttt.com) and Alfred (alfredapp.com), 2) customization sharers who are willing to create customizations, and 3) the sharing technologies that enable those sharers to share their customizations with other users. To support the important role of customization sharers[17], we need to understand what motivates sharers to create and then share customizations, what mechanisms they use to do so, and how those mechanisms either support or hinder sharing practices.

[16] We use the term customization, instead of personalization, in this Chapter to make a clear link back to the relevant work on customization sharing within organizations, which used the term customization (e.g., (Mackay, 1990a)).
[17] We use customization sharers to refer to people who create and publish their customizations, and we use re-users to refer to people who use sharers' customizations.

Sharers contribute to customizable software—often commercial—by extending its functionality. There is a vast literature on motivations for contributing to free and open source software (FOSS) and on its infrastructure designed to leverage those motivations. We were curious to see to what extent similar motivational factors drive the creation and sharing of customizations for proprietary software. Research on customization sharing has focused predominantly within organizational boundaries (Draxler et al., 2012; Kahler, 2001; Murphy-Hill & Murphy, 2011). It remains unclear how customization sharing practices translate from within-organization settings to online settings, where sharers come from diverse contexts and may have other motivations to share.
Little empirical research has investigated online environments and mechanisms in terms of their conduciveness to customization sharing. Our research takes that next step in understanding customization sharing practices beyond those within an organization. To understand what mechanisms sharers use to share their customizations and what motivates them to share, we conducted interviews with 20 customization sharers of four diverse systems: Sublime Text, Minecraft, Alfred, and IFTTT; the first two being customizable systems and the others being customizing tools used for creating customizations. Being a game, Minecraft adds an interesting perspective on sharing, as will be seen. In fact, customizing is so commonplace in games that it is considered an important part of using the system (Dyck, Pinelle, Brown, & Gutwin, 2003).

5.1 Related work

Our research draws on the literature on customization and customization sharing. Sharing customizations that involve programming has some similarities with developing and participating in FOSS projects, so we also briefly review the literature on roles and motivations in FOSS.

5.1.1 Types of customizations

The customization literature has identified a range of different customizations, although there is no standardized terminology. For example, Opperman and Simm speak of two broad customization categories: functionality and interface adaptations (1994). Bentley and Dourish similarly distinguished surface customizations, in which users select from a set of predefined options, from deep customizations, in which users customize the deeper aspects of a system, for example by adding a new behavior (1995). In Chapter 4, we defined advanced personalization/customization broadly as customization that goes beyond changing the look and feel and involves changing functionality.
5.1.2 Motivations to customize

MacKay identified several triggers (motivations) and barriers to customizing one of the earliest Unix environments targeted at non-technical users. Some of the most common triggers were noticing one's own repeated patterns, retrofitting when the system changed, and seeing a "neat" customization; the most common barrier was lack of time (Mackay, 1991). The motivations of game customizers, however, are quite different. They consider customizing an artistic endeavor, allowing them to make games "their own" and thus increase their enjoyment of game play, and helping them acquire a job in the game design industry (Postigo, 2007; Sotamaa, 2010).

5.1.3 Customization sharing: benefit, roles, and medium

Several studies have documented customization sharing habits and the different types of users who are involved in the sharing process within an organization (Gantt & Nardi, 1992; Kahler, 2001; Mackay, 1990a; MacLean, Carter, Lövstrand, & Moran, 1990). Most of these studies identified a continuum of three types of users: ordinary users, local developers (sometimes referred to as translators and tinkerers), and professional programmers or lead users. Lead users created customizations for their own use, and translators created simplified and task-specific versions of the customizations created by lead users (Mackay, 1990a). Similar to translators, local developers in Gantt and Nardi's study also created customizations for the employees of their organization (1992). Some local developers, referred to as gardeners, were paid to do so in certain organizations (Gantt & Nardi, 1992). We further compare these roles with respect to our findings.
MacKay's pioneering study of sharing Unix configuration files and email filtering rules revealed the importance of sharing customizations by showing that only a small percentage of people customize, and that most people prefer to ask others about a customization or to modify an existing customization file (1990a). In a recent within-organization study of customization sharing, Draxler et al. looked at appropriation practices in using Eclipse, a customizable development environment. Their study suggests three principles for supporting customization sharing: the ability to browse plug-ins installed by colleagues, providing an awareness of peers' customization activities, and the ability to install tools that are already in use by peers (2012). Murphy-Hill and Murphy (2011) found that peer observation and peer recommendation are programmers' primary means of discovering new plugins. Practitioners in their study indicated a preference for peer interaction over other information sources, such as forums and Twitter, for discovering and learning about new customizations. Several studies identified email as an effective way of sharing customizations within an organization (Kahler, 2001; MacLean et al., 1990). For example, Kahler's study of sharing Word add-ons via email found it effective for small work groups, but suggested that to scale beyond an organization, shared customizations need rich annotations and comments to provide context for others (2001). Studies of sharing customizations via wikis revealed users' dissatisfaction with allowing others to edit their customizations (Lafreniere, Bunt, Lount, Krynicki, & Terry, 2011), and the difficulty of knowing who can see one's scripts and who is affected by one's edits (Leshed et al., 2008). Altogether this suggests that designing mechanisms to support sharing customizations is not straightforward.
Online customization sharing has also been investigated in the context of remixing behaviors in maker communities (Oehlberg et al., 2015), where little user activity was observed around generated (remixed/customized) designs.

5.1.4 FOSS: background, roles and motivations

FOSS projects used to start with a single programmer solving her own problem, and then making the solution available to others. Once a FOSS project attracted developers who wanted to contribute to the project, the owner became a coordinator (Raymond, 1998). In addition to the coordinator, the following roles exist in a typical FOSS project: core developers who write most of the code and review submitted code, contributors who can become core developers after sufficient contributions (as voted by the core developers), problem reporters, user support, and end users (Mockus, Fielding, & Herbsleb, 2002). More recently, open source projects are increasingly being driven by companies trying to gain competitive advantage; see (Fitzgerald, 2006) for how FOSS development has evolved.

Individuals have heterogeneous motivations to participate in and contribute to FOSS. Although no one motivation dominates in the community (Lakhani & Wolf, 2003), the promise of higher future earnings (Haruvy, Wu, & Chakravarty, 2003), the need to solve one's own problem (Lakhani & Von Hippel, 2003), and intellectual curiosity (Lakhani & Wolf, 2003) have been reported as the most important drivers of contribution to open source projects. Some contribute to improve their programming skills, some enjoy programming, some have a personal need for the code, some feel an obligation to the community because they use FOSS and believe that the code should be open, and some want to enhance their reputation (Lakhani & Wolf, 2003). For FOSS projects driven by companies, programmers are paid for their work (Fitzgerald, 2006).
To summarize, the existing literature on customization sharing focuses on understanding sharing for a single tool and/or within a single organization. Thus, the broader landscape of online customization sharing is relatively unknown. This chapter builds on and extends this body of work by investigating online customization sharing practices for a variety of tools.

5.2 Methods

We conducted a semi-structured interview study with 20 users of four systems to investigate the mechanisms they use to share customizations as well as their motivations for creating and sharing customizations.

5.2.1 Systems investigated

To find customizable systems that support sharing and re-using of customizations, we searched the Web for the keyword "share" combined with each of "customization," "personalization," and "configuration." In addition, we asked friends and colleagues to introduce us to any customizable tools they were aware of. From this initial list, we chose to review 10 systems that represented good coverage of the sharing mechanisms found. The 10 systems included: two blogging platforms (WordPress, Tumblr), two text/code editors (Vim, Sublime Text), an application launcher (Alfred), an automation tool (IFTTT), a game (Minecraft), a task management tool (RememberTheMilk), a web browser (Google Chrome), and a desktop customization program (Rainmeter). We identified key dimensions across which the shared customizations and the sharing mechanisms in these systems differed. Shared customizations differed in their human readability, granularity, and authoring accessibility. Customization sharing mechanisms differed in where the customizations were shared, whether the platform ensured the security of customizations, whether it provided meta-data and supported commenting on shared customizations, and whether it allowed for customization requests.
Then, to further investigate the characteristics of sharing mechanisms from sharers' perspectives, we chose four systems – Sublime Text, IFTTT, Alfred, and Minecraft – that represented the diversity across the ten systems in terms of the dimensions we identified. Each of these systems is briefly described below. We intentionally chose to include two customizing tools, ones that are used to customize other apps and services, namely Alfred and IFTTT. The other two systems we chose are customizable systems: Minecraft and Sublime Text.

Sublime Text is a customizable text/code editor. Creating an advanced customization in Sublime Text is done by developing a plugin in Python using the Sublime Text API and wrapping it into a package. For example, "All Autocomplete" is a package that extends Sublime's autocomplete by finding matches in all open files—instead of only the current one.

Minecraft is a game where users create worlds by breaking and placing construction blocks. Modifications (mods) of the Minecraft code add a variety of gameplay changes ranging from new blocks, to new items, to entire arrays of mechanisms to craft. For example, "Advanced Genetics" is a mod that gives the player and other entities in the game supernatural abilities such as teleporting or flying by injecting genes using a syringe. Creating a mod requires programming in Java.

Alfred is an application launcher and productivity tool. Users can automate their tasks by creating customizations (called workflows). Creating a workflow in Alfred involves trigger-action programming (Ur, McManus, Pak Yong Ho, & Littman, 2014) in a visual programming environment, where users can connect triggers to actions. Creating an advanced workflow involves writing a script in a programming language of choice. An example workflow was "Movie Ratings," with which users can search for a movie and see its IMDB, Rotten Tomatoes, and Metacritic ratings.
Workflows allow users to use and interact with their apps and web services more efficiently. IFTTT is a web-based service that allows users to extend the functionality of their applications by creating "If This Then That" customizations, called recipes, which connect their different applications. Recipes are created using a visual programming environment. An example of an IFTTT recipe is one that connects one's Facebook to Dropbox by automatically saving new Facebook photos in which one is tagged to a Dropbox folder.

5.2.2 Participants

We recruited 20 participants who were actively sharing customizations in the four systems. To obtain their contact information, we used the "top sharer" lists from each system (Table 5.1). From these lists, we contacted those whose contact information was publicly available. The response rate ranged from 24% (for Sublime Text) to 60% (for Alfred), and this difference did not seem to be related to any characteristics of the systems or sharers. All participants were male (four unemployed, four student/postdoc, and 12 developer/engineer) and were from eight countries. Their ages ranged from 20 to 46, with 18 of the participants in their twenties or early thirties. They received $10 for their participation in the form of direct payment or a donation to their favorite charity (three declined any compensation).

Initially we considered also recruiting users who only re-use others' customizations, but we found this near impossible since they were often not identifiable. All of our sharers were also re-users, so we do capture some of the re-user perspective here. We reflect on this limitation later.

5.2.3 Procedure

All the interviews were conducted via Skype. During the interviews, which were semi-structured, we asked participants about their experience with sharing their customizations.
Specifically, we asked them to describe their motivations for creating and sharing customizations, their process of customizing and sharing their customizations, their interactions with users of their customizations, and their use of others' customizations. Appendix F includes the interview questions, which we personalized for each participant based on his online sharing activities. The interviews lasted from 10 to 75 minutes, depending on the participants' willingness to talk about their various experiences. Interviews with IFTTT participants tended to be shorter than the others; as we will describe below, sharing in IFTTT is generally a simpler process (both technically and socially), meaning that those participants had less to talk about.

Table 5.1. Systems investigated in the study, URLs where we found lists of top customization sharers for each system, response rates, and how we refer to the participants from each.

System        List of active sharers             Response rate   Number of participants   Participants' labels
Alfred        www.packal.org/contibutors         60%             6                        Alfx
IFTTT         ifttt.com/top_chefs                50%             5                        IFTx
Minecraft     www.curse.com/mc-mods/             36%             4                        MCx
Sublime Text  packagecontrol.io/browse/authors   24%             5                        SUBx

The interviews were audio recorded and later transcribed in full. The transcriptions were qualitatively analyzed using inductive thematic analysis (Braun & Clarke, 2006), focusing on data related to the mechanisms used for sharing and re-using customizations, motivations to share, and social interactions around the shared customizations. We initially coded the data. Identified themes were then refined collaboratively, with frequent revisits to the raw data.

5.3 Findings

Our findings revealed that the mechanisms our participants used to share their customizations were supported by sharing ecosystems: systems comprising people and tools that interact with each other to support customization sharing.
We begin by describing the components of these ecosystems, as well as the various roles that people play within them. We then discuss what drives the ecosystems: what motivates people to create and share customizations, what makes a customization shareable, and how re-users trust and use the shared customizations. Throughout our findings, we compare and contrast our results with the prior work on customization sharing within organizations as well as the FOSS literature.

5.3.1 Customization sharing ecosystems

5.3.1.1 Ecosystems' components

The customization sharing ecosystems consist of customizations, customization groups, customizable software, customizing software, discussion places, customization managers, customization repositories, and source code repositories. Not every ecosystem included all components. Table 5.2 summarizes each ecosystem based on these components. We briefly explain each next.

Customizations: Customizations in these ecosystems vary across two of the dimensions we identified in our initial review of customizable systems: authoring accessibility and readability (see Table 5.2). In addition, we found that customizations differ with respect to their specificity to their author's needs. We will discuss how these properties of customizations drive the customization sharing ecosystems by influencing re-users' trust in customizations and sharers' decisions on whether to share a customization.

Table 5.2. Summary of the ecosystems in terms of their components and properties

Components | systems                         Minecraft            Sublime Text         Alfred                     IFTTT
Customizations investigated                  Mod                  Package              Workflow                   Recipe
- Authoring accessible to non-programmers?   No                   No                   Depends on the workflow    Yes
- Customizations human readable?             Yes if open source   Yes if open source   Yes                        Yes
Customization group                          Modpack              None                 None                       None
Source code repository                       GitHub, CurseForge   GitHub               GitHub                     N/A
Customization repository                     Curse website        None                 Packal, personal websites  IFTTT
Customization manager                        CurseClient, FTB     PackageControl       None                       IFTTT
Discussion place                             Forum, GitHub        GitHub, Forum        Forum, GitHub              Twitter
Ecosystem model                              Pipeline             Pipeline             Collection-of-islands      One-stop shop

Customization groups: In the Minecraft ecosystem, our participants reported using modpacks: groups of mods that are put together to fit a specific need or theme. In addition to simplifying downloading, one benefit of using modpacks instead of individual mods is that gamers normally play with lots of mods, and the conflicts between the mods are taken care of in modpacks. We did not see the notion of grouping customizations in the other ecosystems.

Source code repositories: places where sharers upload the source code of their customizations. These include online generic code repositories such as GitHub, and dedicated code repositories such as CurseForge (for games).

Customization repositories: places where sharers upload their customizations. Different platforms offer different functionalities, such as facilitating the browsing and searching of their hosted customizations. Some provide meta-data on the customizations (e.g., number of downloads, ratings), and some ensure the security of customizations through a human moderation process.

Customization managers: tools that connect the customizable software and the source code repositories. They allow re-users to browse and install customizations, and sharers to distribute updates to their customizations.

Discussion places: All the ecosystems include public discussion places such as forums, IRC channels, GitHub, and Twitter. Our participants reported announcing their customizations, receiving feedback on their customizations, and supporting their customizations in these places.
The feedback ranged from thanks or admiration, to feature requests, bug reports, contributions, and suggestions for improvements, both from users of their customizations and from other sharers. Customization sharers in both the Alfred and Minecraft ecosystems support each other in these places to develop better customizations. These discussion places are critical for ecosystems such as Minecraft where, according to MC2, a substantial part of the modding knowledge is embedded within the community. Participants' attitudes towards the discussion components of the ecosystems varied (both within and across the different systems we studied). For example, IFTTT participants did not expect to support their customizations in any way: "I wasn't really sharing the recipes to support and manage them, it was just kind of a throw it out there and if they want to use it they can" [IFT2]. It is more common in the Alfred and Minecraft ecosystems for sharers to support their customizations; however, the desire to do so often depends on the nature of the requests and the customization: "the one [workflow] that I received the most bug reports for was […]. I never really investigated [the reports] because it worked for me" [Alf5].

5.3.1.2 Degree of ecosystem component integration: three models

We briefly describe how our respondents reported interacting with the different components described above to share and reuse customizations in each ecosystem. Considering how the ecosystems' critical components are integrated, each ecosystem is best described by one of the following three models: a collection of islands (the critical components are disconnected from each other), a pipeline (the critical components are connected such that, to share a customization, sharers only need to upload its source code to a code repository and do not have to do anything else to facilitate re-use), or a one-stop shop (a single component does the job of all the critical components).
IFTTT: The simplest among the ecosystems we studied, the IFTTT website is the single place that supports all the processes of creating, sharing, browsing, searching, and installing recipes. Sharing/publishing a recipe is as easy as a single click in the process of creating a recipe, with no written code involved. Users find shared recipes by searching or browsing; however, the ease of creating recipes caused some of our participants to create a recipe without searching first: "I just have a need and start creating it. IFTTT process is so simple and quick that it's just as fast, if not faster, to just go ahead and make it and customize the field the way that I'm thinking than try to work off somebody else's existing recipes" [IFT3]. This has led to many duplicate recipes on IFTTT (Ur et al., 2016). There is no dedicated discussion place for IFTTT recipes, which does not appear to be missed by our participants: "They [IFTTT recipes] are just such small things and such an almost an incidental part of my day to day work. I can't imagine engaging in commenting back and forth on any of them" [IFT3]. Sharers do, however, receive questions about their recipes on Twitter, which is also used for requesting recipes. In addition, users can tweet their created recipes from within IFTTT. Overall, this ecosystem is best described as a one-stop shop.

Minecraft: Mod authors upload the source code of their mods to various code repositories such as GitHub (a generic code hosting service) and CurseForge (a dedicated one for games). Re-users can browse and download mods from mod repositories such as the Curse website, or install them through a mod manager such as CurseClient. We found other customization repositories and customization managers for Minecraft, but none were used by our participants. Users ask questions, report bugs, and request features mostly in the Minecraft forum, but also in the Curse forum and on GitHub.
Users find new mods to play by watching YouTubers who publicize and demonstrate how to use a mod, or through modpacks that are listed and featured by the game launchers. Much of the Minecraft sharing ecosystem can be described as a pipeline, because once a customization's source code is uploaded to the source code repository, it becomes available on the customization repository and in the customization manager to install, and installed mods are kept updated for their users as developers update them.

Sublime Text: Package developers upload their packages' source code to GitHub, and request their package to be listed in Sublime Text's customization manager (Package Control), which is integrated with Sublime Text such that users can search, install, and update packages from within the editor. Despite not having to leave Sublime Text to find new packages, our participants reported going to the Package Control website to do so, because data such as the number of installs—which they find helpful in deciding between similar packages—are available only there. Discussions around a package include bug reports and feature requests, and they happen on GitHub. The Sublime Text sharing ecosystem is also best described as a pipeline, because once sharers upload their packages to GitHub, they become available to install both from the customization manager and from within Sublime Text, and installed packages are automatically updated for their users when updated by their authors.

Alfred: Alfred workflow developers upload their workflows and their source code separately in two disconnected places: GitHub, and Packal—a website that is intended to be the central repository for Alfred workflows. In addition, they announce their workflows in the Alfred forum's "share your workflow" thread. Bug reports and feature requests are received in both the forum and GitHub.
Users find new workflows by regularly checking the forum, rather than through Packal, even though the latter is designed specifically for this purpose. The Alfred sharing ecosystem can be best described as a collection of islands, since no integration exists between its components.

Some of our participants commented on how they used to share their customizations in the past and how that has changed. We found that the way sharing is supported in each system has evolved over time, and the evolution, except for IFTTT, has been quite organic. Customization sharers used to share their customizations in forums. Over time, users or third-party developers began to contribute to the sharing process by developing dedicated tools that facilitate sharing and reusing of customizations. These contributions have given rise to what we refer to as ecosystems. For example, Sublime Text's customization manager was developed by a Sublime Text customization sharer as a way to distribute updates to his package (Bond, 2015).

5.3.1.3 Roles in the ecosystems

Through our interviews with customization sharers, we also gained insights into other roles that occur in the sharing ecosystems. The descriptions above point to different activities various people perform in these ecosystems. Here, we consolidate this discussion into a set of roles: customization sharers, reviewers, re-users, problem reporters, requesters, helpers, publicizers, and packers. Some of these roles were common to all four ecosystems, while others were found in only one or two. As we describe below, these roles have some overlap with those previously identified in both within-organization customization sharing and FOSS projects. The roles consist of two main categories, sharers and re-users, each of which has subcategories. Reviewers are a subset of sharers, and problem reporters, requesters, and helpers are subsets of re-users. Publicizers and packers are two secondary roles.
Sharers: People who author and share customizations online with others. Sharers communicate with and help interested users to use their customizations. Some of our sharers also reported creating customizations for others upon request. As a result, customization sharers often have a more complex and multi-faceted role than identified in previous studies of within-organization sharing, for example, combining the roles of lead users and translators in Mackay's study (1990a), and local developers and gardeners in Gantt and Nardi's study (1992). Using FOSS terminology, sharers often take on the combined roles of owner, core developer, and contributor.

--Reviewers: A subset of customization sharers in the Alfred ecosystem play this role, providing feedback to other sharers in the discussion places for the purpose of assisting the creation of higher-quality customizations. They do so entirely voluntarily, upon request: "when it's a new user who is saying this is my first workflow, even if I don't use it or intend to use it. I always download it and see how it's constructed to be able to say what would have I done differently here […] to incentivize them to work for better solutions in the future" [Alf3]. This feedback-driven role differs from the role of local developers in (Gantt & Nardi, 1992), who consulted with end users to create customizations to suit their needs. The reviewer role is common in FOSS projects (Mockus et al., 2002); however, reviews there tend to be the necessary prerequisite to having a piece of code included in the central code base. The fact that review sometimes takes place in customization sharing ecosystems even though customizations neither become part of an official code base nor have to be used by other users was an unexpected finding.

Re-users: People who re-use customizations created by others. Some re-users play other roles such as problem reporters, requesters, and helpers, which we describe below.
--Problem reporters: A subset of re-users of a given customization report its problems. They do so in various discussion places, some of which, like GitHub, are better suited to the task of bug reporting. The inclusion and support of problem reporters is one of the benefits of sharing customizations online: "Those [bug reports] are very good because it fixes the bug for me and everyone else" [Alf3]. Sharers commented that a system like GitHub makes it easier to track and organize reported problems compared to forums, where the problems are buried among other posts, leading to redundant reports and responses. Despite this preference, many bug reporters continue to use the forums. The same role of problem reporter exists in FOSS projects, where users of the software are relied on to report problems; however, we did not see an analogous role in studies of within-organization customization sharing.

--Requesters: A subset of re-users who solicit a customization from others. Alf1 reported receiving a direct request from someone, and IFT4 reported responding to a public request on Twitter from someone he follows. While this role has not been explicitly identified in prior studies, as mentioned earlier, local developers in (Gantt & Nardi, 1992) and translators in (Mackay, 1990a) created customizations for their colleagues, sometimes in response to requests.

--Helpers: A subset of re-users who help other re-users who have difficulty using a customization. Some sharers reported relying on these helpers: "If I have a relatively new mod, it's usually like you answer questions and help them out but once the mod gets bigger you have people who already know about the mod and know how to solve its problems. They usually take care of answering all the questions" [MC4]. Helpers monitor the discussion places, and provide answers to users' questions.
In contrast, within an organization, employees direct their questions about a customization to someone who is likely to know the answer (Gantt & Nardi, 1992; Mackay, 1990a). Helpers' job is similar to the mundane but essential task of user support in FOSS projects (Lakhani & Von Hippel, 2003).

Publicizers: In the Minecraft ecosystem, a few famous YouTubers publicize mods by demoing them. They not only create awareness of new mods, they also make it easy for others to use them. This role is similar to FOSS advocates who blog about various FOSS projects to raise awareness of them.

Packers: In the Minecraft ecosystem, a group of people—called modpackers—put mods together and take care of the conflicts between the mods. In the same way that translators in (Mackay, 1990a) created task-specific sets of customizations by reusing the customizations created by lead users, modpackers create theme-specific sets of mods using the mods created by mod authors.

To summarize, compared to customization sharing in organizational settings, we found more roles in the online settings, many of which have analogies to those needed to build and maintain a FOSS project. The expansion of roles over the within-organization setting could be due to online peer interactions being facilitated by various discussion places. The transparency of these online interactions could also contribute to online reputation building. We return to ways to better support these emerging roles in the Discussion.

5.3.2 What drives the customization sharing ecosystems

Prior studies of customization sharing have shown that only a small percentage of users of customizable tools create and share customizations with others (Mackay, 1990a). Understanding the motivation of this small group is crucial for supporting them effectively, and hence for keeping an ecosystem alive. The other factor that plays a role in driving these ecosystems is re-users' trust in shared customizations.
After all, if no one other than the original authors uses their customizations, sharing becomes worthless. In this section, we describe our participants' motivation to create and share customizations, describe the characteristics of unshared customizations, and report what characteristics of the ecosystems help re-users to trust the shared customizations and use them.

5.3.2.1 What motivates sharing customizations

We found that a combination of motivations drives customization sharers' behaviors. While our participants from different ecosystems shared common motivations, some motivations were more pronounced in some ecosystems than others. We describe the motivations below, and further elaborate in Section 5.4 on the properties of the ecosystems that contributed to such differences.

Being influenced by a sharing culture, particularly open-source culture: Many of our participants across the ecosystems share their customizations because they embrace a sharing culture: "I guess it's more of a fundamental piece of myself where I really like open source software. I like the idea and motivation behind it, of always sharing things. So, I'm pretty public on trying to share as much as possible that I can with whether it's stuff that I do outside of IFTTT but also recipes" [IFT2]. The sharing culture seems to be influenced by the use of FOSS. Sub4, Sub5, and Alf3 talked about their reliance on open source in their jobs as their motivation to share their customizations: "Nearly everything that I rely on for my job is open source, […] so it would be just silly not to open source it" [Sub4]. This sense of obligation to the community has also been identified as one of the motivations for contributing to FOSS (Lakhani & Wolf, 2003).

Building reputation: Two of the Alfred participants referred to reputation building as part of their motivation.
Alf4 and Alf1 specifically mentioned that their customizations contributed to their GitHub profiles: "To be honest, it's also good to have a GitHub profile every now and then, because then you get attention. That's also something that I don't want to play down; […] It's just a plus, it's nice to have. I can just put it on GitHub and other people might like it and is good for me as well" [Alf4]. Sharing customizations on GitHub seems to be one way of managing one's activities to form good impressions, since online activity traces in GitHub are used for recruiting (Marlow & Dabbish, 2013), as well as for forming impressions about one's expertise (Marlow, Dabbish, & Herbsleb, 2013). This is similar to self-marketing that promises future monetary rewards, one of the motivations for contributing to FOSS (Hars & Ou, 2001; Lerner & Tirole, 2000).

Having an online backup of customizations: A side benefit of sharing a customization is that it gets backed up online: "Some is just so that have it some place. If I stop using a particular workflow and later on I wanted it back again, if I have deleted it, I can just pull it back down from Packal [Alfred's unofficial customization repository]" [Alf1].

Zero or minimal effort needed for sharing: IFTTT has made sharing extremely easy by adding only a single click to the process of creating a recipe. Such ease of sharing affected IFT3's decision to share: "IFTTT makes it relatively easy to publish those [recipes]. So, it felt relatively inconsequential for me to just hit the share button and let it go out."

In addition to asking about motivation for sharing customizations, we asked about motivation for creating them in the first place.
Having a personal need: This is the dominant motivation across the participants in all the ecosystems except Minecraft: "all of them [workflows] are meant to scratch an itch" [Alf3], "I usually find deficiencies in my workflows and try to find ways to improve them" [Sub5], "to make things a little easier for me" [IFT4]. Personal need for a solution has also been identified as one of the most common triggers for customizing (Mackay, 1991), and one of the most important drivers of contributions to FOSS (Lakhani & Von Hippel, 2003).

Increasing enjoyment of the game: Echoing prior findings (Postigo, 2007; Sotamaa, 2010), this is the main motivation for our Minecraft participants: "The main reason is I just really enjoy the game. With any game that you enjoy you just find ways that you could improve various things" [MC2]. This motivation is so strong that even a lack of programming knowledge has not been a deterrent: "My only [programming] background is what I've done with Minecraft. When I would watch tutorials they would suggest learning Java first but I went against that and just learned as I went along" [MC1].

Self-development: Similar to some FOSS contributors (Lakhani & Wolf, 2003), for a few participants, learning or practicing programming skills was a motivation for creating their first customizations: "my motivation was just learning about programming. I basically learned programming building Alfred workflows" [Alf5], "I always wanted to do something with my knowledge of java, and when a friend told me "You should do something like that [creating mods]" while playing Minecraft, I started" [MC3].

Responding to others' requests for a desired customization: Although uncommon, this is another motivation to create a customization: "A guy on Twitter that I follow was mentioning that he wanted to do something and I made this [recipe name] recipe and sent that to him and he was pretty appreciative of that" [IFT4].
Job responsibility: Finally, IFTTT's community managers create customizations as part of their jobs, similar to the gardeners in (Gantt & Nardi, 1992) and the paid contributors to FOSS (Lerner & Tirole, 2000).

5.3.2.2 Unshared customizations: what, when, and why

Many participants mentioned that it would be silly not to share a customization once they create it. Despite that, all the participants except those from Minecraft reported creating some customizations that they chose not to share. We describe the characteristics of customizations that our participants referred to as influencing their decision about sharing.

Customizations with private information are not shared. Being unsure how to anonymize a customization or hide private information is a reason for not sharing customizations: "some of them [I didn't publish] probably because they had more private information and I wasn't really sure how to anonymize them. They referred to a specific directory somewhere on my computer, or they referred to a RSS feed that were private to me" [IFT3].

Overly specific customizations are not shared. All the IFTTT and Alfred participants mentioned that if their customization is very specific to their needs, they will not share it: "I have a few that I don't think that as many people would find it useful or might have something custom to the way I manage folders, Google drive, or something like that" [IFT2]. It is possible to make a specific customization useful to others, but it can require extra effort, as Alf6 put it: "To make a workflow usable to everyone, there is certain level of quality that you have to reach. You need to write a bit of documentation, you have to add the configuration UI." Such effort can be more than some are willing to invest: "if it would take more work to make them good enough whether it's pretty or simple so everyone can use, I won't share because I'm lazy" [Alf5].

Too straightforward customizations are not shared.
Our participants also tended not to share customizations that are "too straightforward" to create: "they were too straightforward to share; basic stuff like if there is a new post in the RSS feed, email it to me. I feel like because most IFTTT users know that functionality exists, it's probably not necessary to share that" [IFT3]. In the case of Alfred, "too straightforward" workflows are the ones that could be created with the Alfred GUI without coding.

In both of the above cases, participants commented on wanting their customizations to be useful for a broader audience, and that the popularity (or lack thereof) of customizations could boost or hurt their ego: "The one [recipe] with Nest thermostat, I didn't publish because it requires you have Nest. So, it just didn't seem like it would be necessarily popular. So, I publish the ones that seem generally could be used by a wider audience…if I feel like over a period of time, they [recipes] remain relatively unpopular, I'll probably delete them. There is a little bit of ego to it, you know having a popular recipe is interesting. If something is sitting there and not being popular, I might unpublish it" [IFT1]. Unsharing a customization because it is not popular is an implication of customization repositories exposing the usage data of shared customizations. We will discuss the trade-offs in making such data transparent in Section 5.4.

Unlike in Sublime Text, IFTTT, and Alfred, where private customizations are common, private Minecraft mods make no sense to our Minecraft participants: "that [creating a mod and not sharing it] wouldn't really make sense because you kind of make the mod for other people to use. Also the knowledge needed to create a private mod makes it not worth it to just create it for yourself, unless it's a very small thing" [MC4]. The investment needed to create a Minecraft mod is so large that keeping it private is hard to justify.
5.3.2.3 Trust in shared customizations

Previous studies have shown that being able to trust shared customizations (Draxler et al., 2012) and their sharers (Murphy-Hill & Murphy, 2011) is critical. Buggy customizations could break software, and improper treatment of user data could raise security concerns. We gained some insights into re-users' trust since our participants also re-used others' customizations. Our participants generally expressed little or no concern about reusing shared customizations, because of the following characteristics of the ecosystems: human moderation, exposed popularity, customization readability, and sharer reputation.

Human moderation: The customization manager in Sublime Text (Package Control) and the code repository in Minecraft (CurseForge) provide human moderation of customizations. This made our participants trust the security of the customizations: "I know from uploading my own mods they check things out before allowing others to download files" [MC1].

Exposed popularity: Some participants pointed to the popularity of a customization as a cue to its security: "when the mod is really famous and thousands of thousands of people playing it, then I wasn't too worried. It can't be dangerous when so many people are playing it" [MC2].

Customization readability: Several participants mentioned that the availability and readability of customization files increases their confidence in the security of a file, even though they do not necessarily investigate every customization they use: "the fact that all plugins' repo is freely available, they basically are code that you can read directly makes it more trustworthy in my opinion. I try to be careful, I usually try to have a peak on the code" [Sub1].
Sharer reputation: In the Alfred and Minecraft ecosystems, where there is a sense of community among users and customization authors, our participants mentioned knowing good authors whose customizations they trust: "Now that a few years have gone by I know which developers can be trusted" [MC1].

5.4 Discussion and implications for design

An important finding of our study is the notion of customization sharing ecosystems: different tools and people in various roles working together to support various aspects of sharing and re-using customizations. Considering the multiplicity of tools that support sharing and re-using in each ecosystem, we could not have reached our current understanding of sharing practices had we only studied an individual component (e.g., a forum) within an ecosystem—an approach taken by some other studies of customization sharing (Cheliotis & Yew, 2009; Oehlberg et al., 2015). Grounded in our findings, we discuss some implications for the design of customization sharing ecosystems, and highlight some important design tradeoffs.

Both the pipeline and the one-stop shop are appropriate approaches for customization sharing ecosystems; choosing between them depends on the complexity of the customizations. The degree of integration between an ecosystem's components impacts ease of sharing and reuse. Compared to the collection-of-islands ecosystem, both the one-stop shop and pipeline ecosystems make it easier for sharers to publish customizations and distribute updates, as well as for re-users to find customizations and keep them up-to-date. But a one-stop shop may only be applicable for relatively simple, lightweight-to-create customizations as in IFTTT, given that for advanced customizations that require programming, each component of the ecosystem—discussion places, customization repositories, and code repositories—is a complex system in its own right.
Using generic, well-used code repositories such as GitHub has advantages. Most customization sharers in our study (except for IFTTT) share their customizations' source code on GitHub. Sublime Text sharers have to do so in order for their customizations to be listed in the customization manager, which is integrated with GitHub. Alfred and Minecraft sharers, however, are not required to use GitHub, but choose to for two reasons: 1) the importance of having a good GitHub profile (which ties into reputation building as one of the motivations for sharing), and 2) GitHub facilitates tracking and organizing bug reports and feature requests for their customizations (which points to the maturity of GitHub, offering useful functionality).

Motivations to create and share customizations overlapped considerably with those in FOSS, but there are some differences. The common motivations are having a personal need, self-development, building reputation, and a sense of obligation to contribute/share because of using FOSS. Such overlap in motivations was not entirely expected because the contexts are different – although many shared customizations are open source, they often contribute to closed-source commercial software. Indeed, this difference in context helps explain some differences in motivations. For example, while no one motivation tends to dominate in FOSS (Lakhani & Wolf, 2003), we found personal need to be the dominant motivation across the customization sharers. In addition, the zero to minimal effort needed to share a customization in some ecosystems actually motivated some of our participants to share. This motivation does not seem to exist in FOSS, which could be explained by the intense peer review process required in some FOSS projects, since contributions affect a common code base (Raymond, 1998).
Such a process can in fact add to the effort needed to contribute to a FOSS project, as it raises the bar for contributions – they need to reach a certain quality level and meet project-specific code styles and conventions. In contrast, the review process in customization sharing ecosystems is much lighter weight, if it happens at all. Ecosystems could be designed to make sharing a customization almost as effortless as not sharing it. Indeed, we saw with IFTTT that some participants were motivated to share a recipe because it was so easy to do so.

Discussion places are beneficial to both sharers and re-users. Mackay identified a lack of feedback to lead users who created and shared customizations in her early within-organization study (1990b). We found that discussion places—such as forums, Twitter, and IRC channels—in online sharing are helping to address this problem. Although we did not observe the use of general programming Q&A websites such as Stack Overflow as discussion places, they seem better suited to supporting sharers and re-users than forums for several reasons: reputation can be gained by responding to many kinds of questions, including customization-related ones, and it is easier to find a question or answer on Q&A websites than in forums. Some of the key benefits of dedicated discussion places, which is what we did observe, are building trust between sharers and re-users, clarifying problems with customizations, and providing feedback to sharers, which sometimes led to customization improvements. While having a discussion place is critical, we saw a tradeoff in having one versus multiple such places. In both the Alfred and Minecraft ecosystems, some users report bugs in one place (the forum) and others in another (GitHub). On the one hand, this makes it hard for sharers to keep track of reported bugs. On the other hand, it lowers the barrier to bug reporting. For example, reporting an issue on GitHub requires an account.
To resolve this tension, ecosystems could do more to integrate their various discussion places. For example, users could flag a forum post as a bug report, causing the post to be filed as a bug in another component of the ecosystem (e.g., the issue tracker of the source code repository).

Customization packs can add value. The idea of customization packs and the role of packers appeared only in the Minecraft ecosystem, but could be valuable for other sharing ecosystems. Grouping relevant customizations would make it easier for users to discover and use them without having to worry about potential conflicts between the customizations: a concern raised in prior studies (Murphy-Hill & Murphy, 2011). Further study is needed to understand the motivation of packers and how such a role can be encouraged and supported in other customization sharing ecosystems.

Demoing customizations should increase their adoption, and thus keep sharers encouraged to share. We learned that many sharers care about the popularity of their customizations, yet they rarely publicize them. The publicizer role in the Minecraft ecosystem is the one exception. All our Minecraft participants reported discovering new mods mostly through publicizers and mod managers that feature popular and new mods. Publicizers effectively create awareness of customizations and demonstrate how to use them in videos. Other ecosystems could leverage this approach by providing a video channel. Sharers could be incentivized to demonstrate their customizations, perhaps through a "weekly winner" mechanism. Altogether, publicizing should increase the adoption of shared customizations, which in turn will keep sharers, the most crucial role in these ecosystems, encouraged.

Trust deserves more attention. Our participants expressed almost no concern about using others' customizations.
This was unexpected given that prior work had shown that people tend to trust their colleagues more than strangers when re-using customizations (Draxler & Stevens, 2011). In retrospect, however, heavy sharers may not be representative when it comes to trust concerns, and so this finding should be interpreted with caution. The factors that engender their trust, however, point to possible areas for improvement. For example, many of the Sublime Text and Minecraft participants were confident in the security of others' customizations because, through their own sharing, they were aware of the moderation process. The visibility of this moderation to re-users, especially novice re-users, is questionable. Readability of customizations is another factor that aids trust; however, this will again be limited to people who, like our participants, can easily read others' customization code. Source code repositories could leverage the effort of those re-users who choose to read a customization's source code by allowing them to indicate their trust in the customization. Although we found that re-users rely on popularity in part as a proxy for trustworthiness, this penalizes newly shared customizations that have not yet had time to gain popularity. Publishing trust data could help these new customizations find an audience more quickly.

Providing formal support for the reviewer role would bring both benefits and costs. Related to reading customizations for their trustworthiness, we found that some sharers voluntarily review others' customizations to provide feedback. Formalizing the reviewer role in the ecosystems could be useful, but there are tradeoffs. It could encourage newcomers to attempt creating a customization (knowing feedback will come), and it would increase re-users' trust. The flip side, however, is the time it takes to review. Either reviewing needs to become easier, or an incentive structure needs to be in place to motivate users to contribute in this role.
Unpublished customizations may be a lost opportunity. In some ways, understanding when people do not share is as interesting as learning when they do. Reasons for not sharing included uncertainty about how useful a customization would be for others, too much effort to ready it for others, or it being too straightforward. This could be a lost opportunity. For example, a customization might indeed be deemed too straightforward for another top sharer to bother with, but what about a newcomer? In essence, the straightforwardness of a customization may be in the eye of the re-user. If the ecosystems supported sharers in announcing a possible customization, in order to assess interest, this could help a sharer decide whether it is worth the effort to publish it, or even solicit effort from others who want to re-use it to do the "cleaning" and publishing.

Perceived difficulty/ease of authoring a customization affects both sharing and re-using. As mentioned above, when authoring a customization is perceived to be easy, authors choose not to share. We also saw with Minecraft that the difficulty of authoring does not justify keeping a customization private once it is authored. In addition, some IFTTT participants mentioned authoring a customization without bothering to search existing ones, because the cost of searching and re-using was, for them, equivalent to the cost of authoring. In the latter case, the ease of authoring affected the decision about whether to reuse or to author.

Customization authors in different ecosystems decide about sharing their customizations at different points in time. In the Minecraft ecosystem, the decision to share appears to precede the authoring, since our participants reported authoring a mod only with the intention to share it.
On the other hand, the majority of Alfred and Sublime Text participants reported authoring a customization to address a personal need, and sharing it once it proved its value or they thought it might be useful for other people too. In IFTTT, authors often decided whether or not to share a recipe while authoring it. We attribute that to how IFTTT supports sharing, i.e., selecting a checkbox to make the created recipe public as part of the process of authoring it.

5.5 Conclusion

In our efforts to understand how customization sharers go about sharing and what motivates them, our study uncovered online customization sharing ecosystems. We documented various components of these ecosystems, described the design of the ecosystems based on how their various components are connected, and discussed the tradeoffs among designs. We also identified various roles that occur in the ecosystems, compared and contrasted them with similar roles in customization sharing in organizational settings as well as in FOSS projects, and discussed how to provide support for those roles.

Conclusion

In this chapter, we briefly summarize the steps we took to achieve the thesis goal, elaborate on the contributions, and outline some directions for future work. The overarching goal of this dissertation was to design for enabling advanced personalizations targeted at supporting individual differences as well as changes in behaviors over time. To achieve this goal, we first selected the PTM context and studied individual differences in PTM as well as changes in individuals' PTM over time. We learned about the types of advanced personalizations that need to be supported. We then used our findings, together with the theoretical guidelines on how to design personalizable tools, and designed a personalizable PTM tool with a mechanism for authoring advanced personalizations.
Finally, we studied how users of various personalizable tools share their personalizations, to understand how personalization sharing could be supported.

6.1 Thesis contributions

6.1.1 Rich characterization of individual differences in PTM, informing the design of personalizable PTM tools

Understanding individual differences and understanding changes in behaviors are two essential steps in the design of personalizable tools. Both have been underexplored in prior PTM studies in HCI. Our studies, the first to investigate PTM through the lens of individual differences, made several contributions, which we describe below.

Characterization of PTM behaviors that differ across individuals, revealing the types of personalizations needed in a personalizable PTM tool

We identified three groups of PTM behaviors (recording tasks, remembering tasks, and maintaining and organizing task lists), which match well with the three groups of PIM activities (Jones, 2007), and reported how these behaviors differ across individuals (Chapter 2). Those differences revealed the types of advanced personalizations needed and were directly incorporated into the design of a personalizable PTM tool (described in Chapter 4).

Identification of three types of users (Adopters, DIYers, and Make-doers) that encapsulate the individual differences in PTM along two identified dimensions – personalization and type of tool used – with an academic population

In our focus group and contextual interviews (Study One in Chapter 2), we found that individuals belong to one of the categories of DIYers, make-doers, or adopters based on the tools they used and the extent to which they personalized their tools. DIYers used and personalized a general-purpose tool, make-doers used a general-purpose tool without personalizing it, and adopters used a dedicated e-PTM tool. DIYers' behaviors gave us insights into the types of advanced personalizations needed in a PTM tool.
Assessment of the generalizability of the three user types to a broader population, revealing that users can exhibit tendencies towards these three types rather than matching one type only

Contrary to the findings of Study One, our survey study (Chapter 2) showed that many of the survey respondents did not belong to only one of the user categories of DIYers, adopters, and make-doers. Instead, we found that individuals demonstrate coexisting tendencies toward DIYing, make-doing, and adopting, and what differed across individuals was the relative strength of these tendencies: some preferred using tools that were already available to them without personalizing them (people with a relatively strong tendency toward make-doing), and others preferred using a dedicated PTM tool (people with a relatively strong tendency toward adopting) or even designing their own PTM tool by using a general-purpose tool and personalizing it (people with a relatively strong tendency toward DIYing). Based on this, we concluded that personalizable PTM tools need to be designed in such a way that they accommodate the varying strengths of these tendencies across individuals, rather than being designed only for people with a strong DIY tendency or only for people with a strong make-do tendency, which was our original recommendation based on Study One. Our PTM system design (Chapter 4) followed this recommendation; the interface was simple enough to satisfy make-do tendencies and supported authoring of advanced personalizations for people with a strong tendency toward DIYing. The assessment of the generalizability of our initial findings from Study One showed how categorizing individuals into specific user groups for the purpose of summarizing individual differences, which is not uncommon in HCI, can be an oversimplification of reality. Our studies, and the evolution of the understanding they have enabled, provide motivation and evidence for the need to revisit and extend HCI findings.
Characterization of changes in PTM behaviors over time and the factors that contribute to those changes

We identified three types of changes based on whether a change is made to one's PTM strategy, to an individual tool, or to one's tool-set by adding or removing a single tool to/from it (Chapter 3). These changes often reflect the adaptability and non-adaptability of the tools used. To address this, personalizable tools could allow for tool changes (personalizations within an already used tool) instead of forcing a tool-set change (which causes users to have to switch tools). We found that people make changes to their PTM for one or more of the following reasons: changing needs, dissatisfaction with unmet needs, and opportunities revealing unmet needs. We suggested ways for the design of personalizable PTM tools to utilize these contributing factors to better support changes in PTM behaviors over time. We further reflect on some of our suggestions below in Section 6.2.2.

6.1.2 Design and controlled evaluation of a prototype of a personalizable PTM tool

We designed and developed a prototype of a personalizable PTM tool that enables advanced personalization using a self-disclosing mechanism and a guided scripting mechanism, ScriPer (Chapter 4). A key difference between ScriPer and similar scripting mechanisms used in automation tools such as Alfred or Inky is that we use a command line interface for creating new behaviors for interface elements, rather than for running predefined commands that are mapped to interface elements, which is the common approach in automation tools.

Design of ScriPer for authoring advanced personalizations

ScriPer enables users to author advanced personalizations without coding. The design of ScriPer built on ideas from end-user programming, followed the guidelines on designing personalizable tools, and was informed by our studies on individual differences in PTM and changes in PTM behaviors over time (Chapters 2 and 3).
ScriPer's design incorporated a structure editor that presents the applicable building blocks that can be used at each step of composing a script (which represents a personalization). ScriPer's scripting language resembled natural language to simplify the format. Our design contributes to the body of work in EUP by bringing the EUP techniques of controlled natural languages and sloppy programming to the context of authoring advanced personalizations.

Empirical evidence of the strengths and weaknesses of ScriPer and its accessibility to non-programmers

While some EUP approaches have been studied, their effectiveness for people with little to no programming experience has been largely unexplored (Miller et al., 2008; M. Van Kleek et al., 2010). We showed through a controlled experiment (Chapter 4) that participants with no to some programming background were able to use ScriPer to perform advanced personalization tasks, except in two out of 96 trials. Participants made some errors, and we found that error patterns differed with programming expertise. We provided design recommendations to overcome the weaknesses of ScriPer; we further reflect on those in Section 6.2.2. In our controlled evaluation, we designed task groups in which a non-personalization task preceded a personalization task to motivate the need for personalization in the latter. Participants personalizing in the non-personalization tasks provided evidence of the effectiveness of our task group design in creating the need for personalization in a lab study. Researchers studying personalization in lab settings can leverage our approach to the design of personalization tasks.

Design process of a personalizable PTM tool

The theoretical guidelines on how to design personalizable tools have rarely been put into practice.
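The guided, step-wise composition of scripts from building blocks described above can be illustrated with a toy structure editor that, at each slot of a trigger-action script, offers only the applicable building blocks. This is a minimal sketch of the general idea, not ScriPer's actual grammar or implementation; the slot names and vocabulary below are hypothetical:

```python
# Each slot of a trigger-action script, in the order the editor walks them,
# mapped to the building blocks applicable at that slot.
SLOTS = [
    ("trigger_interaction", ["click", "drag", "hover over"]),
    ("trigger_element",     ["a task", "a due date", "the calendar"]),
    ("action",              ["highlight", "hide", "set the due date of"]),
    ("action_target",       ["the task", "completed tasks", "overdue tasks"]),
]

def suggestions(script: list) -> list:
    """Return the building blocks applicable at the next empty slot."""
    if len(script) >= len(SLOTS):
        return []  # the script is complete
    _, options = SLOTS[len(script)]
    return options

def compose(choices: list) -> str:
    """Validate each choice against its slot and render a readable script."""
    for slot_index, choice in enumerate(choices):
        name, options = SLOTS[slot_index]
        if choice not in options:
            raise ValueError(f"{choice!r} is not applicable for slot {name}")
    trigger_interaction, trigger_element, action, target = choices
    return f"When I {trigger_interaction} {trigger_element}, {action} {target}"

# The editor only ever shows applicable options...
assert suggestions([]) == ["click", "drag", "hover over"]
# ...and a finished script reads like constrained natural language:
print(compose(["click", "a task", "highlight", "the task"]))
# When I click a task, highlight the task
```

Constraining each step to applicable blocks is what lets a controlled-natural-language script stay readable while ruling out syntactically invalid combinations.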
To our knowledge, our work is the first to follow such guidelines in designing a personalizable tool that is not a design environment, as in (Fischer et al., 1989; Lemke & Fischer, 1990). Our work paves the way for designing personalizable tools by revealing the detailed design process and putting the theoretical guidelines into practice, specifically by providing insights into how to identify the building blocks of a system and how to under-design. In addition, our high-level categories of building blocks—UI elements, actions, interactions, external events, entities, entities' attributes, and attributes' values—provide a practical starting point for designers of personalizable tools to identify the building blocks of a system. That said, future research will be needed to see to what extent these building blocks generalize to personalizable tools in other domains.

6.1.3 Methodological approach to the creation of a personalizable tool

Our methodological approach to developing a personalizable tool included rich qualitative research using focus groups and contextual interviews with a small and narrow sample, followed by a less rich questionnaire survey with a much larger and broader sample to assess generalizability, followed by a prototype design that instantiated all that had been learned in the formative studies, culminating in an evaluation with users with varying backgrounds. Our approach can serve as a model for developing personalizable tools that support individual differences. Designers of personalizable tools can pick and choose various steps of our approach depending on the domain, its complexity, and the prior knowledge of individual differences in the domain for which they are designing.

6.1.4 Identification and characterization of online personalization sharing ecosystems

Research on personalization sharing has focused predominantly within organizational boundaries (Draxler et al., 2012; Kahler, 2001).
Little empirical research has been conducted to investigate online environments and mechanisms in terms of their conduciveness to personalization sharing. Our interview study (Chapter 5) took that next step in understanding customization sharing practices beyond those within an organization.

Identification of ecosystem components that support different aspects of personalization sharing and re-using

Our study uncovered online customization sharing ecosystems that consist of various components supporting the different aspects of customization sharing and re-using: hosting customizations, discussing them, finding them, installing them, and keeping them updated. We described the design of the ecosystems based on how their various components are connected, and argued that the pipeline and the one-stop shop are both appropriate approaches for personalization sharing ecosystems; choosing one depends on the complexity of personalizations. We learned that using generic, well-used code repositories such as GitHub for hosting personalizations allows shared personalizations to contribute to sharers' profiles, and allows sharers to track and organize bug reports and feature requests for their personalizations. We pointed to the importance of providing discussion places for shared personalizations by showing the benefits these places have both for sharers and re-users. Based on the notion of modpacks in the Minecraft sharing ecosystem, we suggested that ecosystems take advantage of this concept, since packs can make it easier for users to discover relevant personalizations without having to worry about potential conflicts.

Identification of roles that people play within these ecosystems and their motivations

We identified various roles that occur in the ecosystems: sharers, reviewers, re-users, problem reporters, requesters, helpers, publicizers, and packers.
Compared to customization sharing in organizational settings, we found more roles in the online settings, many of which have analogies to those needed to build and maintain a FOSS project. We elaborated on the similarities and differences between the roles in sharing ecosystems and those in FOSS. Our work suggests that the expansion of roles in online sharing ecosystems, relative to sharing within an organization, could be due to online peer interactions being facilitated by various discussion places.

6.2 Directions for future research

6.2.1 Further work on informing the design of personalizable tools

Investigation of underlying factors contributing to individual differences and changes in behaviors. Understanding the underlying reasons for individual differences could potentially contribute to a better understanding of the differences, and thus to providing better support for them. For example, a person's age or cultural background may influence individual differences. Previous PIM studies have shown that many PIM activities vary with age (e.g., finding web-based information in web search (Olmsted-Hawala, Bergstrom, & Rogers, 2013)). It is worth exploring what kinds of changes occur in individuals' PTM behaviors as they grow older: do they tend to rely more on their PTM systems, or have they developed sufficient PTM skills that the need for using a PTM tool is lessened? In addition, cultural differences in how individuals measure or treat time (Levine, 2005) may influence how they manage their tasks. Knowing how PTM behaviors are influenced by age and culture may help personalizable tools suggest personalizations that better suit those differences. Other factors that could influence individual differences in task management include gender, life experiences, and roles.

Studying individual differences and changes in behaviors over time in domains other than PTM.
We encourage designers, and those who collect user requirements in various domains, to add the lens of individual differences to their data collection and analysis. That lens would help them value each idiosyncratic behavior, even those exhibited by only a few individuals, which is important for the design of personalizable tools. Considering every individual behavior, rather than only the common behaviors that the majority of people exhibit, could lead to the design of personalizable tools that better support individual differences. In Chapter 3, we suggested that PTM tools do the following in order to support changes in PTM behaviors over time: enable users to document and report their unmet needs, encourage reflection on and evaluation of PTM behaviors, and support sharing of PTM behaviors or personalizations. We acted on our third recommendation, in Chapter 5, by studying current personalization sharing practices to learn how to support sharing of personalizations. The other two recommendations offer avenues for future research on personalization, which we describe below.

Allowing users of personalizable tools to report their unmet needs. Providing mechanisms that allow users to report their unmet needs would help personalizable tools better support users' changing needs. The design of such mechanisms, however, needs to be explored to address several questions: How should users report their needs? In plain language, or in a structured, form-based way? Once a need is reported, how would the tool find personalizations in its repository of shared personalizations that are likely to address the need? How should the tool encourage the community of users to respond by sharing personalizations that could address the reported needs of another user? How can the reported needs inform the design of end-user programming languages or personalization facilities that match users' ways of expressing their needs?

Encouraging users to reflect on their behaviors.
We also suggested encouraging users to reflect on their behaviors so they can both make useful changes to their behaviors and take advantage of their tools to support those changes. Some possible ways to encourage reflection include presenting users' own behaviors to them, presenting others' behaviors, and exposing them to potentially relevant personalizations. Further research is needed to explore how and when behaviors and personalizations should be presented to be most effective.

6.2.2 Further work on designing for authoring of advanced personalization

Preventing errors when authoring advanced personalizations. The preliminary evaluation of our personalization mechanism, ScriPer, showed that our approach is promising, and revealed various areas for improvement. For example, ScriPer does not prevent the user from making mistakes, and as a result some of the created personalizations were different from the intended ones. Part of the issue was that participants could not easily spot a mistake. Our evaluation did not include having participants use our tool with their own tasks, something that would have likely highlighted any mistake quickly. Further investigation is needed to see how well users are able to recover from their mistakes, in order to support them in the process of recovery. Beyond recovery, it is important to limit, in the first place, the possibility of making an error when choosing between options suggested by ScriPer. We suggested highlighting alternative options that are relevant to the option the user is about to select. In addition, given that the cost of making errors can be quite high if an unintended personalization is executed, we recommend showing users a preview of the effect of a created personalization before executing it for the first time. This would allow users to verify the correctness of their personalizations and give them the opportunity to correct them.
Further, users should always be given the opportunity to undo the effects of their personalizations in case any errors were missed.

Revisiting the order of combining building blocks for authoring personalizations. Another issue with ScriPer that needs further work is the order it imposed on using the building blocks. For some practical reasons (discussed in Chapter 4), we chose to impose an order for combining building blocks, which did not match some participants' preferred order. It is worth exploring alternative approaches, such as showing all the steps to users and letting them choose the order in which to complete each step.

Longitudinal study. Although the personalization tasks in our study were ecologically valid, they were not derived from our particular participants. Longitudinal studies are needed to assess whether users can translate their own needs into personalizations and whether they can reuse those personalizations effectively.

Extension to other types of personalizations. Finally, ScriPer was a proof of concept and was not intended to offer complete language expressivity; it covers trigger-action rules where the trigger is a user interaction with an interface element. ScriPer's approach needs to be further explored to see whether it can extend to other types of personalizations that require more complex programming concepts such as loops. In addition, replicating our approach in other domains will provide insights into its applicability to those domains.

6.2.3 Further work on designing for sharing of advanced personalization

Design and evaluation of a personalization sharing ecosystem for a PTM tool. We discussed some implications for the design of personalization sharing ecosystems, which should help in developing such ecosystems.
Based on our recommendations, a one-stop shop sharing ecosystem appears to be the most appropriate for a personalizable PTM tool, such that users can author personalizations within the PTM tool itself (using a mechanism such as ScriPer), share from within the tool, and find and add new personalizations from within the tool. To understand the strengths and potential challenges of this approach in the PTM domain, we need to develop such a sharing ecosystem for a personalizable PTM tool and study its personalization sharing practices in a longitudinal study.

Recommending beneficial shared personalizations. Once personalization sharing is supported, a next step would be to inform users about potentially beneficial personalizations that other users have authored. Our participants reported searching and browsing as two methods through which they became familiar with others' personalizations. We suggested (Chapter 3) exposing users to others' personalizations, since this would help them improve their PTM practice by learning from others' behaviors. Recommending potentially useful personalizations seems similar to, but perhaps more complex than, feature recommendation (Li, Matejka, Grossman, Konstan, & Fitzmaurice, 2011); performing an advanced personalization, such as creating a new view for tasks or creating a new functionality that changes some aspects of tasks (e.g., due dates) when triggered, may not be as predictable or straightforward as using a feature. Thus, depending on the personalization mechanism used, sharing a personalization might require capturing the steps involved in performing the personalization and presenting those steps to users in a way that is easy to understand and reuse. In addition, recommending a personalization to users may require understanding the motivation behind it, which needs to be sourced from the original user who authored the personalization.

Studying personalization re-users.
We studied personalization sharing mostly from the perspective of people who had extensive experience with sharing personalizations. Although that focus allowed us to collect rich data about the sharing ecosystems, it did not fully capture the perspectives of those who do not share but only reuse customizations. All of our sharers were also re-users, and so we do capture some of the re-user perspective here. Future studies with re-users would provide additional insights into their motivations for re-use, their motivations for reporting problems with customizations, how they come to trust shared customizations, and how the difficulty of the language for authoring customizations affects reusing practices. Re-users were not identifiable in the ecosystems we studied. Sharing ecosystems would likely benefit from identifying re-users, since that could influence other re-users' decisions about whether or not to use a personalization; knowing that some of your friends are using a personalization might be more influential than knowing that it has been downloaded many times. In addition, such information could enrich recommendations of personalizations. Studying personalization sharing from the re-users' perspective could provide insights into how to improve the experience of re-using personalizations. One of the benefits for users of personalizable software is that they control the changes to the software (rather than the system making the adaptations). Using shared personalizations created by somebody else helps re-users personalize without having the skill or the time required to author a personalization. However, whether re-users experience feeling in control remains unclear.
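The idea that a few identifiable friends using a personalization might weigh more than many anonymous downloads could be prototyped as a simple ranking heuristic. The sketch below is purely illustrative of that design idea; the function name and weighting are our own invention, not a mechanism from any of the studied ecosystems:

```python
import math

def adoption_score(downloads: int, friends_using: int, friend_weight: float = 3.0) -> float:
    """Rank personalizations by log-dampened download count plus a bonus for
    each friend known to use it: a few friends can outweigh many strangers."""
    return math.log1p(downloads) + friend_weight * friends_using

# 10,000 anonymous downloads vs. a niche personalization three friends use:
popular = adoption_score(downloads=10_000, friends_using=0)  # ~9.2
niche = adoption_score(downloads=50, friends_using=3)        # ~12.9
assert niche > popular
```

Dampening the download count with a logarithm keeps raw popularity from dominating, while the per-friend bonus encodes the social signal that the ecosystems we studied did not yet expose.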
Additional research directions that would further contribute to our understanding of how to design personalization sharing ecosystems include: (1) tracking particular shared personalizations from multiple ecosystems over time, to see how they each evolve from the point of publishing; and (2) uncovering how success is defined, and the factors that make some shared personalizations more successful than others. Finally, as people are spending more time on their mobile devices using various apps, future work needs to design for authoring and sharing of advanced personalization on smaller displays. We need to understand what types of advanced personalization are needed on mobile devices in a given domain, and what mechanisms would work well for authoring personalizations on mobile devices. Given the rapid advances in personal assistants on mobile devices (e.g., Siri on Apple devices), it would be interesting to see how these assistants could help in authoring advanced personalizations, or even provide the desired personalized services outright (saving users from authoring the personalizations).

6.3 Concluding comments

The question I received most often when discussing my dissertation research with others is: why would people want to personalize when they can use a well-designed tool? Part of the answer is that people's unique needs are rarely fully anticipated at design time. Non-personalizable tools either force everyone to adapt their behaviors to certain prescribed behaviors, which is disempowering (Nardi & O'Day, 1999), or they become too complex if they try to support all the PTM needs that could be anticipated; such complexity makes the system hard to use, and thus effectiveness drops (Tainter, 1990) as users need to spend more time finding their desired features.
This dissertation has provided empirical evidence of individual differences in the personal task management domain and has shown that some people, those with a strong tendency toward DIYing, personalize general-purpose tools to make them work as their PTM tool, most often because they have not found a single PTM tool that can support all their needs. Thus, despite the plethora of PTM tools available, finding one that has fully anticipated a user's needs in advance seems elusive. Further, in the personalization sharing ecosystems we studied, we saw that people personalized because they were able to, i.e., they were programmers, or because programmers with similar needs had already authored and shared personalizations that addressed their needs. We believe that many users of interactive technologies need to personalize and would be willing to do so; they will personalize if either they are given accessible mechanisms to author their desired personalizations or they can re-use personalizations created and shared by others. This dissertation has moved this argument one step further by providing insights into the design of personalization sharing ecosystems and by showing that even people with no programming background are able to author advanced personalizations when given accessible mechanisms to do so. It is important to note that personalization solutions, including ours, only tilt the balance towards using the technology in a more empowered and enabled way rather than being constrained by it. Although the goal with our ScriPer personalization mechanism was to extend the range of behaviors that a PTM tool can support, our mechanism still has constraints: more complex personalizations—in which users have to add new building blocks to the system and define what the building blocks are supposed to do—will still be left to programmers, at least with our current design.
Therefore, any personalizable tool should let its users know when they are hitting the limits of possible personalizations; this can be done by allowing users to report their unmet needs and to get feedback on whether or not their needs can be supported given the tool's personalization mechanism. What we have accomplished in this dissertation provides a possible answer to the question of how to enable advanced personalization in other domains. We recommend studying individual differences with a heterogeneous sample of users to learn the types of advanced personalizations that need to be supported. Such a study would reveal user needs in that domain, which can then be used to identify the building blocks of the domain. Our high-level categories of building blocks could be used as a starting point. Then, a mechanism for combining the building blocks should be designed; ScriPer provides an example. Finally, a sharing ecosystem should be enabled around the personalizable tool to support sharing of personalizations. While taking all these steps took us several years, this was mainly due to the rigorous processes involved in academic publishing; practitioners in industry would be able to take these steps more efficiently. If the goal is to make an existing software tool more personalizable, an additional source for identifying the building blocks of the software would be the software's current features. We recommend decomposing those features into a set of basic building blocks such that users would be able to construct the features, as well as some additional features, through the use of the basic building blocks. Finally, we reflect on the question of whether we should aim to support more people in authoring advanced personalizations, or whether we are better off supporting personalization sharing, such that the very few people who author personalizations can share them for others to re-use.
In other words, was the research in Chapter 4 needed, given the rich customization sharing ecosystems we saw in Chapter 5? Past research on customization sharing has shown that only a small percentage of people customize, and that most people prefer to ask others about a customization or to modify an existing customization file (Mackay, 1990a). Based on anecdotal evidence, this proportion of sharers to re-users still holds today. Looking at such a proportion, it might be tempting to conclude that the vast majority of people will only ever be interested in re-using a customization rather than developing one, and hence that support for sharing customizations is far more important than support for authoring them. However, there is a chicken-and-egg issue at play: the proportion of sharers to re-users reflects the fact that authoring advanced personalizations/customizations has been limited to people with programming expertise. By developing mechanisms for authoring personalizations that are accessible to non-programmers, we are likely to change that proportion, empowering more people to author advanced personalizations that better fit their unique needs.

Bibliography

Alfred - Productivity App for Mac OS X. (2016, August 2). Retrieved August 2, 2016, from https://www.alfredapp.com/ Allen, D. (2001). Getting things done: The art of stress-free productivity. Penguin Group USA. Bälter, O. (1997). Strategies for organising email. In H. Thimbleby, B. O’Conaill, & P. J. Thomas (Eds.), People and Computers XII (pp. 21–38). Springer London. Bellotti, V., Dalal, B., Good, N., Flynn, P., Bobrow, D. G., & Ducheneaut, N. (2004). What a to-do: studies of task management towards the design of a personal task list manager. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 735–742). New York, NY, USA: ACM. Bellotti, V., Ducheneaut, N., Howard, M., & Smith, I. (2002). Taskmaster: recasting email as task management.
Position paper for CSCW 2002 workshop Re-designing Email for the 21st Century. New Orleans, Louisiana, USA. Bellotti, V., Ducheneaut, N., Howard, M., & Smith, I. (2003). Taking email to task: the design and evaluation of a task management centered email tool. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 345–352). New York, NY, USA: ACM. Bellotti, V., Ducheneaut, N., Howard, M., Smith, I., & Grinter, R. E. (2005). Quality versus quantity: E-mail-centric task management and its relation with overload. Human–Computer Interaction, 20(1-2), 89–138. Bentley, R., & Dourish, P. (1995). Medium versus mechanism: supporting collaboration through customisation. In Proceedings of the fourth conference on European Conference on Computer-Supported Cooperative Work (pp. 133–148). Norwell, MA, USA: Kluwer Academic Publishers. Bergman, O., Beyth-Marom, R., & Nachmias, R. (2006). The Project Fragmentation Problem in Personal Information Management. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 271–274). New York, NY, USA: ACM. Bernstein, A., & Kaufmann, E. (2006). GINO–a guided input natural language ontology editor. In The Semantic Web-ISWC 2006 (pp. 144–157). Springer. Blandford, A. E., & Green, T. R. G. (2001). Group and individual time management tools: what you get is not what you need. Personal and Ubiquitous Computing, 5(4), 213–230. Boardman, R., & Sasse, M. A. (2004). “Stuff goes into the computer and doesn’t come out”: a cross-tool study of personal information management. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 583–590). Vienna, Austria: ACM. Bond, W. (2015). About - Package Control. Retrieved May 6, 2016, from https://packagecontrol.io/about Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. Bruce, H., Wenning, A., Jones, E., Vinson, J., & Jones, W. (2011).
Seeking an ideal solution to the management of personal information collections. Information Research, 16(1), 14. Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the “planning fallacy”: Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67(3), 366. Cecchinato, M. E., Cox, A. L., & Bird, J. (2015). Working 9-5?: Professional Differences in Email and Boundary Management Practices. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 3989–3998). New York, NY, USA: ACM. Cheliotis, G., & Yew, J. (2009). An Analysis of the Social Structure of Remix Culture. In Proceedings of the Fourth International Conference on Communities and Technologies (pp. 165–174). New York, NY, USA: ACM. Corbin, J., & Strauss, A. (2008). Basics of qualitative research (3rd ed.). London: Sage. Costabile, M. F., Fogli, D., Mussio, P., & Piccinno, A. (2004). Software environments for end-user development and tailoring. Psychnology, 2, 99–122. Covey, S. R., & Emmerling, J. (1991). The seven habits of highly effective people. Covey Leadership Center. Cypher, A., Dontcheva, M., Lau, T., & Nichols, J. (2010). No Code Required: Giving Users Tools to Transform the Web. Morgan Kaufmann. DiGiano, C., & Eisenberg, M. (1995). Self-disclosing design tools: a gentle introduction to end-user programming. In Proceedings of the 1st conference on Designing interactive systems: processes, practices, methods, & techniques (pp. 189–197). ACM. Draxler, S., & Stevens, G. (2011). Supporting the Collaborative Appropriation of an Open Software Ecosystem. Computer Supported Cooperative Work (CSCW), 20(4), 403–448. Draxler, S., Stevens, G., Stein, M., Boden, A., & Randall, D. (2012). Supporting the Social Context of Technology Appropriation: On a Synthesis of Sharing Tools and Tool Knowledge. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2835–2844). New York, NY, USA: ACM.
Ducheneaut, N., & Bellotti, V. (2001). E-mail as habitat: an exploration of embedded personal information management. Interactions, 8(5), 30–38. Dyck, J., Pinelle, D., Brown, B., & Gutwin, C. (2003). Learning from Games: HCI Design Innovations in Entertainment Software. In Graphics Interface (Vol. 2003, pp. 237-246). Einstein, G. O., & McDaniel, M. A. (1990). Normal aging and prospective memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(4), 717. Fischer, G., & Herrmann, T. (2011). Socio-technical systems: a meta-design perspective. International Journal of Sociotechnology and Knowledge Development (IJSKD), 3(1), 1–33. Fischer, G., McCall, R., & Morch, A. (1989). JANUS: Integrating Hypertext with a Knowledge-based Design Environment. In Proceedings of the Second Annual ACM Conference on Hypertext (pp. 105–117). New York, NY, USA: ACM. Fischer, G., & Scharff, E. (2000). Meta-design: design for designers. In Proceedings of the 3rd conference on Designing interactive systems: processes, practices, methods, and techniques (pp. 396–405). ACM. Fisher, D., Brush, A. J., Gleave, E., & Smith, M. A. (2006). Revisiting Whittaker & Sidner’s “Email Overload” Ten Years Later. In Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work (pp. 309–312). New York, NY, USA: ACM. Fitzgerald, B. (2006). The transformation of open source software. MIS Quarterly, 587–598. Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51(4), 327. Forster, M. (2006). Do it tomorrow and other secrets of time management. Hodder & Stoughton. Franklin, A. (2011, December 5). Which is the best simple task management tool for individuals? - Quora [Q&A]. Retrieved July 18, 2016, from https://www.quora.com/Which-is-the-best-simple-task-management-tool-for-individuals#!n=12 Fuchs, N. E., Kaljurand, K., & Kuhn, T. (2008). Attempto Controlled English for knowledge representation. In Reasoning Web (pp. 104–124).
Springer. Funk, A., Tablan, V., Bontcheva, K., Cunningham, H., Davis, B., & Handschuh, S. (2007). Clone: Controlled language for ontology editing. Springer. Gantt, M., & Nardi, B. A. (1992). Gardeners and Gurus: Patterns of Cooperation Among CAD Users. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 107–117). New York, NY, USA: ACM. Gwizdka, J., & Chignell, M. (2004). Individual differences and task-based user interface evaluation: a case study of pending tasks in email. Interacting with Computers, 16(4), 769–797. Haraty, M., Tam, D., Haddad, S., McGrenere, J., & Tang, C. (2012). Individual differences in personal task management: a field study in an academic setting. In Proceedings of the 2012 Graphics Interface Conference (pp. 35–44). Toronto, Ont., Canada: Canadian Information Processing Society. Hars, A., & Ou, S. (2002). Working for Free? Motivations for Participating in Open-Source Projects. International Journal of Electronic Commerce, 6(3), 25-39. Haruvy, E., Wu, F., & Chakravarty, S. (2003). Incentives for developers’ contributions and product performance metrics in open source development. Working paper. Available from: http://www.iimahd.ernet.in/publications/data/2005-03-04sujoy.pdf [accessed 13 August 2010]. Henderson, A., & Kyng, M. (1991). There’s no place like home: Continuing design in use. In Design at Work: Cooperative Design of Computer Systems (pp. 219–240). Hillsdale, NJ: Lawrence Erlbaum Associates. Hornbæk, K., Sander, S. S., Bargas-Avila, J. A., & Grue Simonsen, J. (2014). Is Once Enough?: On the Extent and Content of Replications in Human-computer Interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 3523–3532). New York, NY, USA: ACM. Jones, W. (2007). Personal information management. Annual Review of Information Science and Technology, 41(1), 453–504. Jones, W., Bruce, H., Foxley, A., & Munat, C. F. (2006).
Planning personal projects and organizing personal information. Proceedings of the American Society for Information Science and Technology, 43(1), 1–24. Jones, W., Dumais, S., & Bruce, H. (2002). Once found, what then? A study of “keeping” behaviors in the personal use of Web information. Proceedings of the American Society for Information Science and Technology, 39(1), 391–402. Jones, W., Munat, C. F., Bruce, H., & Foxley, A. (2005). The universal labeler: Plan the project and let your information follow. Proceedings of the American Society for Information Science and Technology, 42(1). Kahler, H. (2001). More Than WORDs - Collaborative Tailoring of a Word Processor. Journal of Universal Computer Science, 7(8), 826–847. Kirsh, D. (1995). The intelligent use of space. Artificial Intelligence, 73(1-2), 31–68. Krämer, J.-P. (2010). PIM-Mail: consolidating task and email management. In Proceedings of the 28th international conference extended abstracts on Human factors in computing systems (pp. 4411–4416). New York, NY, USA: ACM. Kraut, R. E., Fish, R. S., Root, R. W., & Chalfonte, B. L. (1990). Informal communication in organizations: Form, function, and technology. In Human reactions to technology: Claremont symposium on applied social psychology (pp. 145–199). Citeseer. Kraut, R., Egido, C., & Galegher, J. (1988). Patterns of contact and communication in scientific research collaboration. In Proceedings of the 1988 ACM conference on Computer-supported cooperative work (pp. 1–12). ACM. Kwasnik, B. (1989, June). How a personal document's intended use or purpose affects its classification in an office. In ACM SIGIR Forum (Vol. 23, No. SI, pp. 207-210). ACM. Lafreniere, B., Bunt, A., Lount, M., Krynicki, F., & Terry, M. A. (2011). AdaptableGIMP: designing a socially-adaptable interface. In Proceedings of the 24th annual ACM symposium adjunct on User interface software and technology (pp. 89–90). New York, NY, USA: ACM. Lakhani, K. R., & Von Hippel, E.
(2003). How open source software works: “free” user-to-user assistance. Research Policy, 32(6), 923–943. Lakhani, K. R., & Wolf, R. G. (2003). Why Hackers Do What They Do: Understanding Motivation and Effort in Free/Open Source Software Projects (SSRN Scholarly Paper No. ID 443040). Rochester, NY: Social Science Research Network. Lansdale, M. W. (1988). The psychology of personal information management. Applied Ergonomics, 19(1), 55–66. Lemke, A. C., & Fischer, G. (1990). A cooperative problem solving system for user interface. In AAAI (Vol. 90, pp. 479–484). Lerner, J., & Tirole, J. (2000). The simple economics of open source. National Bureau of Economic Research. Leshed, G., Haber, E. M., Matthews, T., & Lau, T. (2008). CoScripter: automating & sharing how-to knowledge in the enterprise. In Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems (pp. 1719–1728). New York, NY, USA: ACM. Leshed, G., & Sengers, P. (2011). I lie to myself that I have freedom in my own schedule: productivity tools and experiences of busyness. In Proceedings of the 2011 annual conference on Human factors in computing systems (pp. 905–914). ACM. Levine, R. (2005). A geography of busyness. Social Research: An International Quarterly, 72(2), 355–370. Lieberman, H., Paternò, F., & Wulf, V. (2006). End user development (Vol. 9). Springer. Little, G., Miller, R., Chou, V., Bernstein, M., Lau, T., & Cypher, A. (2010). Sloppy programming. Morgan Kaufmann. Li, W., Matejka, J., Grossman, T., Konstan, J. A., & Fitzmaurice, G. (2011). Design and evaluation of a command recommendation system for software applications. ACM Transactions on Computer-Human Interaction (TOCHI), 18(2), 6. Mackay, W. E. (1988). More than just a communication system: diversity in the use of electronic mail. In Proceedings of the 1988 ACM conference on Computer-supported cooperative work (pp. 344–353). New York, NY, USA: ACM. Mackay, W. E. (1990a).
Patterns of sharing customizable software. In Proceedings of the 1990 ACM conference on Computer-supported cooperative work (pp. 209–221). New York, NY, USA: ACM. Mackay, W. E. (1990b). Users and customizable software: A co-adaptive phenomenon. PhD thesis. Mackay, W. E. (1991). Triggers and barriers to customizing software. In Proceedings of the SIGCHI conference on Human factors in computing systems: Reaching through technology (pp. 153–160). New York, NY, USA: ACM. MacLean, A., Carter, K., Lövstrand, L., & Moran, T. (1990). User-tailorable systems: pressing the issues with buttons. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 175–182). New York, NY, USA: ACM. Malone, T. W. (1983). How do people organize their desks?: Implications for the design of office information systems. ACM Transactions on Information Systems (TOIS), 1(1), 99-112. Marlow, J., & Dabbish, L. (2013). Activity traces and signals in software developer recruitment and hiring. In Proceedings of the 2013 conference on Computer supported cooperative work (pp. 145–156). ACM. Marlow, J., Dabbish, L., & Herbsleb, J. (2013). Impression formation in online peer production: activity traces and personal profiles in github. In Proceedings of the 2013 conference on Computer supported cooperative work (pp. 117–128). ACM. Marshall, C. C., & Bly, S. (2005). Saving and using encountered information: implications for electronic periodicals. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 111-120). ACM. McGrenere, J., Baecker, R. M., & Booth, K. S. (2002). An evaluation of a multiple interface design solution for bloated software. In Proceedings of the SIGCHI conference on Human factors in computing systems: Changing our world, changing ourselves (pp. 164–170). New York, NY, USA: ACM. Miller, R. C., Chou, V. H., Bernstein, M., Little, G., Van Kleek, M., & Karger, D. (2008, October).
Inky: a sloppy command line for the web with rich visual feedback. In Proceedings of the 21st annual ACM symposium on User interface software and technology (pp. 131-140). ACM. Mockus, A., Fielding, R. T., & Herbsleb, J. D. (2002). Two case studies of open source software development: Apache and Mozilla. ACM Transactions on Software Engineering and Methodology (TOSEM), 11(3), 309–346. Moran, T. (2002). Everyday Adaptive Design (keynote). Presented at the Designing Interactive Systems (DIS). Mørch, A. (1995). Application units: Basic building blocks of tailorable applications. In Human-Computer Interaction (pp. 45–62). Springer. Murphy-Hill, E., & Murphy, G. C. (2011). Peer interaction effectively, yet infrequently, enables programmers to discover new tools. In Proceedings of the ACM 2011 conference on Computer supported cooperative work (pp. 405–414). New York, NY, USA: ACM. Myers, B. A., Pane, J. F., & Ko, A. (2004). Natural programming languages and environments. Communications of the ACM, 47(9), 47-52. Nardi, B. A., & O’Day, V. (1999). Information ecologies: Using technology with heart. MIT Press. Oehlberg, L., Willett, W., & Mackay, W. E. (2015). Patterns of physical design remixing in online maker communities. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 639–648). ACM. Olmsted-Hawala, E., Bergstrom, J. C. R., & Rogers, W. A. (2013). Age-Related differences in search strategy and performance when using a data-rich web site. In Universal Access in Human-Computer Interaction. User and Context Diversity (pp. 201–210). Springer. Oppermann, R., & Simm, H. (1994). Adaptability: User-initiated individualization. Adaptive User Support–Ergonomic Design of Manually and Automatically Adaptable Software. Hillsdale, New Jersey. Payne, S. J. (1993). Understanding calendar use. Human–Computer Interaction, 8(2), 83–100. Postigo, H. (2007). Of mods and modders chasing down the value of fan-based digital game modifications.
Games and Culture, 2(4), 300–313. Raymond, E. (1999). The cathedral and the bazaar. Knowledge, Technology & Policy, 12(3), 23-49. Remember The Milk - Forums / Ideas. (2014). Retrieved March 30, 2016, from http://www.rememberthemilk.com/forums/ideas/ Rivera-Pelayo, V., Zacharias, V., Müller, L., & Braun, S. (2012). Applying Quantified Self Approaches to Support Reflective Learning. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (pp. 111–114). New York, NY, USA: ACM. Scott, J. (2011, June 21). How many Firefox users have add-ons installed? 85%! Retrieved June 19, 2016, from https://blog.mozilla.org/addons/2011/06/21/firefox-4-add-on-users/ Sengers, P., Boehner, K., David, S., & Kaye, J. “Jofish.” (2005). Reflective design. In Proceedings of the 4th decennial conference on Critical computing: between sense and sensibility (pp. 49–58). Aarhus, Denmark: ACM. Siu, N., Iverson, L., & Tang, A. (2006). Going with the flow: email awareness and task management. In Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work (pp. 441–450). Banff, Alberta, Canada: ACM. Sotamaa, O. (2010). When the game is not enough: Motivations and practices among computer game modding culture. Games and Culture. Tainter, J. (1990). The collapse of complex societies. Cambridge University Press. Ur, B., McManus, E., Pak Yong Ho, M., & Littman, M. L. (2014). Practical Trigger-action Programming in the Smart Home. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 803–812). New York, NY, USA: ACM. Ur, B., Pak Yong Ho, M., Brawner, S., Lee, J., Mennicken, S., Picard, N., … Littman, M. L. (2016). Trigger-Action Programming in the Wild: An Analysis of 200,000 IFTTT Recipes. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 3227–3231). New York, NY, USA: ACM. Van Kleek, M. G., Styke, W., & Karger, D. (2011, May).
Finders/keepers: a longitudinal study of people managing information scraps in a micro-note tool. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2907-2916). ACM. Van Kleek, M., Moore, B., Karger, D. R., André, P., & schraefel, m. c. (2010). Atomate It! End-user Context-sensitive Automation Using Heterogeneous Information Sources on the Web. In Proceedings of the 19th International Conference on World Wide Web (pp. 951–960). New York, NY, USA: ACM. Venolia, G. D., Dabbish, L., Cadiz, J. J., & Gupta, A. (2001). Supporting email workflow. Microsoft Research, 2088, 2001. Whittaker, S. (2005). Supporting collaborative task management in e-mail. Human-Computer Interaction, 20(1), 49–88. Whittaker, S., Bellotti, V., & Gwizdka, J. (2006). Email in personal information management. Communications of the ACM, 49(1), 68–73. Whittaker, S., Frohlich, D., & Daly-Jones, O. (1994). Informal Workplace Communication: What is It Like and How Might We Support It? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 131–137). New York, NY, USA: ACM. Whittaker, S., Jones, Q., Nardi, B., Creech, M., Terveen, L., Isaacs, E., & Hainsworth, J. (2004). ContactMap: Organizing Communication in a Social Desktop. ACM Transactions on Computer-Human Interaction (TOCHI), 11(4), 445–471. Whittaker, S., & Sidner, C. (1996). Email overload: exploring personal information management of email. In Proceedings of the SIGCHI conference on Human factors in computing systems: common ground (pp. 276–283). Whittaker, S., Swanson, J., Kucan, J., & Sidner, C. (1997). TeleNotes: Managing Lightweight Interactions in the Desktop. ACM Transactions on Computer-Human Interaction (TOCHI), 4(2), 137–168. Wilson, M. L., Chi, E. H., Reeves, S., & Coyle, D. (2014). RepliCHI: The Workshop II. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems (pp. 33–36). New York, NY, USA: ACM. Winner, L. (1978).
Autonomous technology: Technics-out-of-control as a theme in political thought. MIT Press.

Appendices

Interview Script (Study One in Chapter 2)

Instructions to the interviewer are in blue. General Questions So we’re just going to start off the interview with a few general questions to get familiar with how you feel about your task organization capabilities.  Following this will be an exercise where you will demo the tools you use to us, and then we will finish off with some final questions to fill in the gaps.  1. How organized would you consider yourself in regards to handling your everyday tasks? [1=not very well organized, 2=somewhat organized, 3=about average, 4=organized most of the time, 5=very well organized]  2. What does it mean to be organized to you?  3. Do you feel as though you have a difficult time keeping track of your tasks?  4. Of your overall set of tasks in a day, what percentage of them are you likely to get done? (e.g., 100% = all tasks, 50% = half the tasks...)  5. What kind of task lists do you normally have? (e.g., work-related, school-related task lists) Observation  1. Can you show us the tools you are using for handling your everyday tasks?  2. Can you tell us a little bit about the various tasks you are dealing with? [If they didn’t start to talk about some of their tasks that they had entered, we will probe them to tell us the story behind some of their tasks: how did they record it? for what reason?...] [If there was no task from the categories of meeting, event, deadline, …, ask: How do you handle your meetings/events/or deadlines?]  3. What about stuff like payments, grocery shopping, or other routine tasks?  4. What about the tasks that don’t have a time/date associated with them?  5. Is there any other kind of task that you normally keep track of?  6. What kind of reminder mechanism are you using to get notified of your tasks?
For each tool that they are showing, ask the following after they are done with their own explanations:  1. What do you like about this tool?  2. What do you dislike about this tool?  3. How long have you been using this tool?  4. How would you improve it? [Ask the following only if not answered]  1. How many tasks do you typically have in your task list (for a day)?  2. How often do you create tasks in your task manager? a. At specific times of the day (e.g., every morning) b. Whenever I find a free time during the day c. Never d. Other (please specify)  3. Do you record your tasks directly into your primary task manager right from the start? Or do you ever record them in a temporary location first to allow for some organization before you record them in your primary task manager? (e.g., writing them down first on a piece of paper before entering them into the tool itself)  4. Do you record each task as it comes in? Or do you wait until you have several task items before you record them in your task manager? (e.g., just trying to remember tasks throughout the day and recording them at the end of the day) Viewing tasks (activity): Observe the participants as they view their tasks to answer the following questions. Ask questions when needed.  1. Do you like to view all of your tasks in one view? Or do you like to only view a subset of your tasks at once? a. If they view all tasks in one view: i. Do you rely on specific task attributes to organize your view in order to differentiate the tasks from one another? (e.g., using colour or category names to group tasks, or a sequential view of tasks sorted by date or priority) b. If they view only a subset at once: i. How is this subset determined? (e.g., category? only tasks for today, or this week?)  2. How do you view your tasks... (depends on what they use) a. on your computer b. on cell phone c. on your piece of paper  3. How often do you view your tasks? a. Specific times of the day (e.g., every morning) b.
Whenever I find a free time during the day c. Never d. Other (please specify) Managing tasks (activity):  [Ask some general task managing questions if not answered]  1. Do you often modify your tasks between the time of recording the task in your task manager and completing the task?  If so, what do you modify?  2. Do you often find yourself reorganizing your tasks before they get completed? (i.e., moving tasks around between categories)  If so, what do you reorganize?  3. Do you keep completed task items around for reference?  Or do you permanently clear them from your task list? a. If they keep them: i. Why? b. If they clear them: i. How often do you cross off tasks from your task list? 1. Immediately after the task is complete 2. After a group of tasks is complete 3. Never 4. Other (please specify)  4. What do you do when you have a task to record, but at that very moment, you do not have time to record the task? a. What if you had the time, but none of your tools were available?  5. Are there any other tools that you stopped using? a. If so: i. Why? (e.g., was it due to any changes in your job or in your tasks, or did the way you manage your tasks change?) ii. What did you dislike about these tools that caused you to stop using them?  6. How much time on average in a day do you spend managing your task list(s)?  7. Out of that time, what proportion of it would you say is used for adding details to those tasks, viewing them, and crossing them off?

Survey Questions (Study Two in Chapter 2)

<<<<<<<<<Questions 1-13 : Figure 3’s Section-1 (generic questions)>>>>>>> 1)  Have you ever used any tool such as a calendar, a paper planner, a piece of paper, or a dedicated task list application for managing and keeping track of your tasks?  2)  What is your gender?  3)  What is your occupation? For Q4-9, please indicate the extent to which you agree/disagree with the following statements: 4)  I consider myself a busy person.
5)  It is EASY for me to find time for personal things (e.g. going to gym, seeing friends).  6)  I am very organized in regard to managing my everyday tasks.  7)  I am satisfied with the way I manage/keep track of my tasks. 8)  I have always been interested in finding new ways to improve my task management.  9)  I rely on my memory for remembering MOST of my tasks.  10)  Which of the following do you CURRENTLY use for managing your tasks? (Check all that apply)  Plain text files (e.g. Notepad, TextEdit)  Word processor files (e.g. Word document, Google document)  Digital Note-taking tools (e.g. Microsoft OneNote, Evernote, Google Notebook)  Email  Paper planner  Other paper (e.g. sticky notes, pieces of paper, physical paper notebook)  Other (please specify)  Please answer q12 and q13 considering the following definition of "dedicated task list application": Dedicated task list application: “An electronic application that is used solely for the purpose of task management. Note that this does NOT include general purpose electronic applications such as email, wiki, calendar, or a word document that you might use for managing your tasks.” 12)  Which of the following "dedicated task list applications" have you used IN THE PAST and stopped using?  AbstractSpoon  Errands  Google Tasks  GTD TiddlyWiki  Nirvana  OmniFocus …  13)  Which of the above "dedicated task list applications" do you CURRENTLY use most frequently? <<<<<<<<<End of Section 1>>>>>>> <<<<<<<<Questions 14-32 : Figure 3’s Section-2 (Adopters’ only section)>>>>>>> Please answer the following questions for the task list application that you are CURRENTLY using (specified in the previous question).  14)  Which form of this application do you use? (Check all that apply) Desktop  Web-based  Mobile  Other (please specify) 15)  How did you find out about this application? (Check all that apply)  Searching the Internet  Word of mouth (e.g.
from a friend)  A time/task management workshop  Book  Advertisement  It was installed on my computer  It was integrated into an application that I use for other purposes  Other (please specify) 16)  Approximately what percentage of the functionality provided by this application do you use?  17)  Which of the following best describe(s) how you HAVE BECOME aware of your tool’s functionality? (Check all that apply)  Coming across the functionality by accident  Searching the help documentation for a specific functionality  Looking for a specific functionality (e.g. browsing menus)  Getting recommendations from other users who use the same tool  Other (please specify)  18)  Which of the following best describe(s) how you WOULD LIKE to become aware of your tool’s functionality? (Check all that apply)  19)  Name one to three features that you LIKE the most about this application:  20)  Name one to three features that you DISLIKE the most about this application:  21)  What are the typical types of tasks that you record into your tool? (Check all that apply) Things to read  Administrative  Project/course deliverable  Scheduled events  Other  22)  What do you do with a recorded task in your tool once it is completed? Cross off  Delete  Archive  Add detail of how it was done  Do nothing  Depends on the type of tasks (please explain in the "additional comments" field)  23)  Which of the following do you use when making or modifying tasks recorded in your tool? color  symbols (e.g. star, arrows, circle)  sketching  Other (please specify)  24)  Which of the following pictures is most similar to the arrangement of tasks recorded in your system?  25)  If the arrangement of tasks in your tool is slightly different from the chosen one in the previous question, please describe the difference:  26)  Which of the following best describes how you chose the arrangement of tasks in your tool?
 I came up with this arrangement by myself  This was the DEFAULT arrangement supported by the tool I use  This arrangement was built in the tool I use, but it was not the DEFAULT arrangement  Other (please specify)  27)  How often do you change the arrangement of your tasks in your tool?  28)  Which of the following attributes determines the spatial location of a task in your tool? (Check all that apply) Task category  Task importance  Task urgency  Task's due date  Task's recording date  29)  I manage my tasks in an ad hoc way. (I don't have a consistent method for recording my tasks) (Strongly Agree…Strongly Disagree)  30)  I type my tasks into different parts of the page based on their attributes (e.g. category or priority). (Strongly Agree…Strongly Disagree)  31)  Do you keep your tool open all the time so that you can check it regularly?  32)  How often do you revisit your tasks in your tool? <<<<<<<<<End of Section 2>>>>>>> <<<<<<<<Questions 33-47 : Figure 3’s Section-3 (List makers’ only section)>>>>>>> Terminology "Tasks-page": A physical or digital page/note on which you write/type/enter your tasks. The following are examples of digital and physical tasks-pages.  33)  Do you have a "tasks-page" where you write/type your tasks on/into?  34)  What are the typical types of tasks that you record into your "tasks-page"? (Check all that apply)  35)  What do you do with a task on your "tasks-page" once it has been completed? (Check all that apply)  36)  Which of the following describes your "tasks-page"?  Digital document (e.g. a Word document, a text file)  Physical (e.g. a piece of paper, sticky notes, notebook, paper planner)  Digital calendar (e.g. Google calendar, iCal)  Other (please specify)  37)  Which of the following do you use when making or modifying your "tasks-page"?  38)  Which of the following pictures is most similar to the arrangement of tasks on your "tasks-page"?
39) If the arrangement of tasks on your "tasks-page" is slightly different from the chosen one in the previous question, please describe the difference:

40) Which of the following best describes how you chose the arrangement of tasks on your "tasks-page"?
 I came up with this arrangement by myself
 This was the DEFAULT arrangement supported by the tool I use
 This arrangement was built into the tool I use, but it was not the DEFAULT arrangement
 Other (please specify)

41) How often do you change the arrangement of your tasks on your "tasks-page"?

42) Which of the following attributes determines the spatial location of a task on your "tasks-page"? (Check all that apply)
 Task category
 Task importance
 Task urgency
 Task's due date
 Task's recording date

For the following two questions, please indicate the extent to which you agree/disagree with the following statements:

43) I manage my tasks in an ad hoc way. (I don't have a consistent method for recording my tasks.)

44) I write/type my tasks on/into different parts of my "tasks-page" based on their attributes (e.g., category or priority).

45) Do you keep your "tasks-page" visible all the time so that you can check it regularly?

46) How often do you revisit your "tasks-page"?

47) Would you like to have the option of setting reminders for some of your tasks in your "tasks-page"?

<<<<<<<<<End of Section 3>>>>>>>

<<<<<<<<Questions 48-58: Figure 3's Section-4 (Other generic questions)>>>>>>>

48) Do you keep email messages in your inbox to remind yourself of the tasks that need to be done?

49) I would like to have the option of easily transferring the email messages that act as reminders of my tasks to my "tasks-page."

50) Do you keep web pages open in your web browser as reminders of the tasks that need to be done?

51) Approximately how many web pages do you usually keep as reminders of your tasks?

52) On average, how long do you keep web pages open as reminders of your tasks?
Please indicate the extent to which you agree/disagree with the following statements.

53) I would like to have the option of having links in my "tasks-page" to these web pages.

54) If I had links to the web pages which act as to-dos in my "tasks-page," I would not keep the web pages open.

55) I would like to have an overview of all my tasks in one place.

Thank you very much for your participation!

<<<<<<<<<End of Section 4>>>>>>>

Interview Script for Survey Follow-Up Study (Chapter 3)

Back in February 2012, we asked you about the tools you use to manage your tasks. You mentioned that you were using [reported PTM tools].

1. Has anything changed regarding the tools you use, the way you use them, or, more generally, have you made any changes to how you manage your tasks since then?

2. What do you think has led to that change?

[A question related to their response to the survey question about changes in their PTM]

3. You also mentioned some changes that you had made to your PTM before. For example, you reported you had made the following changes to the way you manage your tasks: [changes and reasons reported by that participant]. Can you elaborate more on that change or its reason?

Screenshots of ScriPer (Chapter 4)

The following series of screenshots illustrates the use of ScriPer for authoring an advanced personalization, described in task T5P of our controlled evaluation.

T5P: Imagine that there are other times when you'd like to define quiet hours: you tell the system a time period in which you don't want to receive any reminders, and the system postpones sending the reminders to the end of that period. In this situation, you decide to create a button called 'mute reminders' such that when you click on it, the system asks you to enter the time period and then changes the reminders that are supposed to be sent out within that period so that they will be sent out at the end of that period.
Step 1: The user needs to go to the Personalization Mode by clicking on the 'Personalization' button in the top right corner.

Step 2: The Personalization Mode adds a gray overlay to the main interface, and all the regular PTM-related interactions are disabled. To add a new button, the user clicks on the plus button next to the other buttons on the top left.

Step 3: Clicking on the plus button opens up a textbox where the user can type a name for the new button.

Step 4: "Mute reminders" is typed as the name of the button, which can then be added to the tool by clicking on the 'add' button.

Step 5: The "Mute reminders" button is created. The user needs to click the button to define its behavior.

Step 6: Clicking on "Mute reminders" opens up ScriPer in a modal window, which was named 'Feature Construction Window' so that participants could easily refer to it.

Step 7: 'Ask me for' is selected. ScriPer is suggesting the next set of building blocks, which are data types.

Step 8: Upon selecting the Time data type, ScriPer provides a preview of the system asking the user for a specific time.

Step 9: The user has named the time that the system will later ask for 'start', and has clicked on the 'Ask for more data' button to make the system ask for another time, which she has called 'end'.

Step 10: The first part of the personalization script is completed, thus ScriPer shows the 'save' button.

Step 11: The first part of the personalization is saved and displayed as [custom command-1].

Step 12: For the second part of the personalization, 'Change' has been selected. ScriPer is suggesting all the objects' attributes.

Step 13: The user has typed the first two characters of 'reminder', which filters the list of ScriPer's suggestions.

Step 14: Reminders' time has been selected. ScriPer is suggesting the possible values for the time data type.
Step 15: 'end', which is the name of the variable the user had defined in the first part of the personalization, has been selected. ScriPer is suggesting the objects that have reminders' time as their attribute.

Step 16: "reminders' that" has been selected. ScriPer is suggesting the attributes of reminders' time.

Step 17: 'time' has been selected. ScriPer is suggesting the possible values for time.

Step 18: 'Between' has been selected. ScriPer is suggesting the possible values for a time for the first parameter of between.

Step 19: 'start' has been selected. ScriPer has added an 'and' to the script and is suggesting the possible values for a time for the second parameter of between.

Step 20: 'end' has been selected. ScriPer does not have any further suggestions, since the personalization is complete. Thus, it is showing the 'Save' button.

Materials of the Controlled User Study (Chapter 4)

E.1 Pre-questionnaire

a) Email address (we will contact you to confirm your registration using this email address)

b) Gender

c) Please rate your proficiency at coding (computer programming):
 No to little knowledge / experience
 Some knowledge / experience
 I'm proficient
 Other:

d) Please briefly explain your experience with coding.

E.2 Interview script

1. What did you think of changing the mode in cases where you had to go to the personalization mode? Did you face any challenges in figuring out where to start for those tasks?

2. In some of the tasks, you had to create a button first and then click on it to define its behavior. Did this make sense to you?
 (Did the ordering of the steps make sense?)
 (How would you prefer to do it?)

3. For some of the tasks, you had to start from the "self-disclosing panel" to go to the feature construction panel.
 What did you think of the "self-disclosing panel"?
 Did you face any challenges in figuring out that you should start from there?
 Was the role of the self-disclosing panel clear to you?
 What do you think of having this always on...

4. Was the "feature construction panel" clear and easy to use?
 What challenges did you face when using it?
 Did the order of the suggested blocks match with how you thought of the feature?

5. Do you have any comments on your overall experience using the system?

6. Have you ever felt that an everyday application you use is missing an important feature that you'd like to see added?
 Can you give an example?
 Can you try the feature construction panel to write how you would like to tell the system about your new feature?

7. Which of the following do you CURRENTLY use for managing your tasks?
 Plain text files (e.g., Notepad, TextEdit)
 Word processor files (e.g., Word document, Google document)
 Digital note-taking tools (e.g., Microsoft OneNote, Evernote, Google Notebook)
 Email
 Electronic calendar (e.g., Google Calendar)
 Paper planner
 Other paper (e.g., sticky notes, pieces of paper, physical paper notebook)
 None
 Other:

8. Do you currently use any to-do list electronic application?
 Yes (Please name it)
 No

E.3 Post-questionnaire

a) Participant ID (entered by the experimenter)

b) Gender
 Female
 Male
 Other:

c) Age

d) Which of the following do you CURRENTLY use for managing your tasks? (Check all that apply)
 Plain text files (e.g., Notepad, TextEdit)
 Word processor files (e.g., Word document, Google document)
 Digital note-taking tools (e.g., Microsoft OneNote, Evernote, Google Notebook)
 Email
 Electronic calendar (e.g., Google Calendar)
 Paper planner
 Other paper (e.g., sticky notes, pieces of paper, physical paper notebook)
 None
 Other:

e) Do you currently use any to-do list application? (Required)
 Yes
 No

f) If Yes to the above question, please enter the name of the application you use:

g) How often do you feel an application you use is missing an important feature that you would like to see added?
 Never
 Rarely
 Sometimes
 Many times
 Other:

h) The "feature construction panel" could be appropriate for adding a new feature to other everyday applications I use.
 1 2 3 4 5 (Strongly disagree … Strongly agree)

i) If the "feature construction panel" and "self-disclosing panel" were available in the everyday applications you use, how likely would you be to use them to add a new feature?
 1 2 3 4 5 (Very unlikely … Very likely)
 Please explain your response to the above question.

j) How difficult was it to start the tasks that involved going to personalization mode and creating a new button?
 1 2 3 4 5 (Very easy … Very difficult)

k) How difficult was it to start the tasks that involved use of the "self-disclosing panel"?
 1 2 3 4 5 (Very easy … Very difficult)

l) How difficult was it to relate changes happening in the self-disclosing panel to your actions?
 1 2 3 4 5 (Very easy … Very difficult)

m) To what extent did the order of the suggested blocks match with how you wanted to tell the system about the new feature?
 Extremely
 Very much
 Moderately
 Little
 Very little
 Don't know / no opinion

n) The steps in performing the personalization tasks were clear.
 1 2 3 4 5 (Strongly disagree … Strongly agree)

Interview Script for Sharing Personalization Study (Chapter 5)

The interview questions were personalized to each participant based on their online personalization sharing activities. Below are the generic questions, which were then personalized.

A. Background questions (What are the characteristics of people who share their customizations with others?)
 1. Age (age group)
 2. Job
 3. Technical/programming skills (How do you rate your programming skills? Are you a programmer?)
 4. Country
 5. Gender (if unclear)

B. Motivation for sharing (making your customization available to others)
You have created x customizations.
 1. What was your motivation for creating customization X, Y? [get them to focus on something concrete]
 2.
What motivated you to share your customizations on [x: a sharing platform]?

 3. Do you use any other methods--other than posting them on [x]--to share your customizations with others?
    a. What are they? How do they compare with posting on [x]? How do you decide which method to use?

 4. [if commenting was NOT supported]
    a. Are you interested in getting feedback from others on your customizations?
    b. Other sites support commenting… Do you have an opinion about [x] not supporting commenting? Is it a good thing or a bad thing? Why?

 5. [if commenting was supported]
    a. What type of comments do you usually receive when you post your customization?
       i. (Thanks, asking for clarifications, reporting bugs, alternative customizations?)
       ii. Are there any other types of comments you hope to receive?
       iii. Do you find the comments useful? In what way? (e.g., Do they influence your willingness to post or the frequency of your posts?)
    b. To what extent are the comments you receive from people whom you know vs. people you don't know?
       i. How familiar are you with them? Does familiarity play a role in commenting or responding to comments?

 6. [if submission of incomplete customizations (work in progress) was supported (e.g., Rainmeter)]
    a. Have you ever posted or considered posting your in-progress customization in the WIP category?
    b. [If yes] Can you describe a recent example to me? Why did you post it when it wasn't yet complete? What are the benefits of doing so?

 7. [if created customizations are adaptable]
    a. Has anybody ever modified or extended a customization that you shared? Can you describe it to me?

C. Ease of creating, sharing, and reusing customizations

 1. For the last customization that you shared, can you remember how long it took you to create it?
 2. Is this typical of the customizations that you create?
 3. How do you rate the ease of sharing customizations on [X]?
 4. Do you think it's easy for others to use your customizations?

D.
Motivation for using others' customizations

 1. Have you also used other people's customizations? Why? Why not?
 2. [if no] skip the rest
 3. Can you give an example?
 4. How did you come to find that customization?
    a. Do you have a need, and you search to see if anyone has created a customization that addresses that need?
    b. Or do you browse through the customizations and find something that interests you?
 5. If you had to categorize yourself as a sharer of customizations or a user of others' customizations, which would it be?
 6. How did you assess the trustworthiness of the customizations that you have reused? (Do you have concerns that the file might be malware?)
    a. Do comments and ratings affect your decision about using a customization?
 7. [if commenting was supported]
    a. Have you ever posted any comments on others' customizations?
       i. Why/why not?
       ii. [if yes] What type?
 8. How easy have you found it to use others' customizations?
    a. (Do you end up asking questions from the creators?)

E. Payment
 1. Do you have a PayPal account? For the very small honorarium that we would like to give. Another option is an Amazon gift card.
 2. Do I have the right email?
Codebook for the Analysis of Changes and Reasons in Survey (Chapter 3)

added a new tool
added a dedicated PTM tool
added a non-PTM tool
added a notepad/notebook
added a paper calendar
added a paper planner
added a paper to-do list
added a wall calendar
added a whiteboard
added a word document
added digital calendar
added digital note-taking app
added excel spreadsheets
added index cards
added sticky notes
adopting a PTM behavior (non-tool related)
making list
recording tasks
strategy: associating objects
strategy: breaking down the tasks
strategy: checking off finished tasks
strategy: creating a new task list
strategy: customized post-completion strategy
strategy: drawing attention / visual reminder
strategy: keeping tasks in multiple formats
strategy: note taking
strategy: prioritizing
strategy: scheduling
strategy: simplifying
strategy: talking about to-do items
strategy: tracking due dates
strategy: tracking work hours
strategy: when to record
awareness of abilities
aging
experiencing memory loss
awareness of needs
not needed any more
the need to be more organized
change: consolidating tools
change: feature
change in layout
daily planner to monthly
include less detail
use archiving
use of color-coding
use of prioritization
use of reminders
change: goal oriented/intentional
change: gradual
change: involving other people
change: media
digital to online
digital-to-paper
memory to paper
memory-to-digital
online to paper
paper to online
paper-to-digital
paper to whiteboard
online to digital
change: non-reflective
change: not goal oriented/unintentional
change: possibly reflective
change: reflective
change: rely less on a PTM behavior
change: satisfactory
change: stopped a PTM behavior
prioritizing
change: stopped using a tool
stopped using a dedicated tool
stopped using a paper calendar (planner)
stopped using digital calendar
stopped using general purpose tools (notebook)
change: syncing devices
change: time
change: tool
digital calendar to paper planner
digital calendar to wall calendar
email to digital calendar
email to text file
general purpose tool to calendar
pad of paper to notebook
paper calendar/planner to plain paper
paper planner to digital calendar
paper planner to e-planner
paper planner to wall calendar
paper planner to whiteboard
paper to notebook
text file to google calendar
wall calendar to plain text file
change: trying a tool for a limited period
trying a dedicated tool
change: unsatisfactory
change: usage
change: task specific
incoherent change
incoherent reason
reason: benefits of the change
centralized
easier maintenance of the new behavior
feeling in control
reason: features of the new tool
automatic reminders
no overhead
pleasure in using something tactile
syncing to other tools used
ubiquitous access
visibility
reason: external factor
reason: buying a new device
reason: family need
reason: job
reason: ongoing cost
reason: social interaction
reason: task's nature
reason: use of an app/device
reason: getting busier
reason: getting less busy
reason: involving other people
creating shared awareness
keeping track of others' activities
sharing to-do lists
reason: personal experience
reason: problem
forgetting things
problem with the prev behavior
feeling overwhelmed
caused stress and anxiety
confusing to have two separate task lists
procrastinating
problem with the prev tool
lacking a feature
cluttered
difficult to add or change
difficult to carry
difficult to keep it updated
difficult to prioritize
difficult to retrieve
difficult to use
lack of enough space for writing
losing paper
not always handy
redundant with prev tools
synchronization issue
tedious data entry
the overhead of thinking about another place to enter/retrieve
unmet needs
