@prefix vivo: . @prefix edm: . @prefix ns0: . @prefix dcterms: . @prefix skos: . vivo:departmentOrSchool "Science, Faculty of"@en, "Computer Science, Department of"@en ; edm:dataProvider "DSpace"@en ; ns0:degreeCampus "UBCV"@en ; dcterms:creator "Zheng, Qixing"@en ; dcterms:issued "2010-01-08T01:48:23Z"@en, "2006"@en ; vivo:relatedDegree "Master of Science - MSc"@en ; ns0:degreeGrantor "University of British Columbia"@en ; dcterms:description """Most co-authoring tools support basic annotations, such as edits and comments anchored at specific locations in the document. However, they do not support higher-level communication about a document such as commenting on the tone of a document, giving more explanation about a group of basic annotations, or having a document-related discussion. Such higher-level communication gets separated from the document, often in the body of email messages. This causes unnecessary overhead in the write-review-edit workflow inherent in co-authoring. To address the problem, we first established user-centered requirements for annotation support. We conducted a small field investigation of email exchanges including document attachments, among three small groups of academics (3 to 5 people each). We categorized the higher-level communication from the email and developed a set of eleven requirements to support document annotations. We next developed document-embedded structured annotations called "bundles" that incorporate higher-level communication into a unified annotation model meeting the set of requirements. We also designed and implemented a high-fidelity prototype called the "Bundle Editor" that illustrates our structured annotation model. Finally, we conducted a usability study with 20 participants to evaluate the annotation reviewing stage of co-authoring. The study showed that the annotation bundles in our high-fidelity prototype reduced reviewing time and increased accuracy, compared to a system that supports only edits and comments."""@en ; edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/17720?expand=metadata"@en ; skos:note "Structured Annotations to Support Collaborative Writing Workflow, by Qixing Zheng. A.Sc., Gainesville College, 1999; B.Sc., The University of Georgia, 2001. A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in The Faculty of Graduate Studies (Computer Science), The University of British Columbia, December 2005. © Qixing Zheng 2005. Abstract. Most co-authoring tools support basic annotations, such as edits and comments anchored at specific locations in the document. However, they do not support higher-level communication about a document such as commenting on the tone of a document, giving more explanation about a group of basic annotations, or having a document-related discussion. Such higher-level communication gets separated from the document, often in the body of email messages. This causes unnecessary overhead in the write-review-edit workflow inherent in co-authoring. To address the problem, we first established user-centered requirements for annotation support.
We conducted a small field investigation of email exchanges including document attachments, among three small groups of academics (3 to 5 people each). We categorized the higher-level communication from the email and developed a set of eleven requirements to support document annotations. We next developed document-embedded structured annotations called \"bundles\" that incorporate higher-level communication into a unified annotation model meeting the set of requirements. We also designed and implemented a high-fidelity prototype called the \"Bundle Editor\" that illustrates our structured annotation model. Finally, we conducted a usability study with 20 participants to evaluate the annotation reviewing stage of co-authoring. The study showed that the annotation bundles in our high-fidelity prototype reduced reviewing time and increased accuracy, compared to a system that supports only edits and comments. Contents: Abstract; Contents; List of Tables; List of Figures; Acknowledgements; 1 Introduction; 1.1 Research Motivation; 1.2 Research Contributions; 1.3 Overview of the Thesis; 2 Related Work; 2.1 Collaborative Writing; 2.2 Annotations; 2.2.1 Annotation Definition; 2.2.2 Annotation Model; 2.3 Existing Co-authoring Systems; 2.3.1 Research Systems; 2.3.2 Three Commercial Systems; 3 Requirements-Gathering through Field Investigation; 3.1 A Small Field Investigation; 3.2 Requirements for Structured Annotations; 4 Structured Annotation Model; 4.1 Model Elements; 4.2 Identifying Types of Annotations; 4.3 Linking Bundle Creation to the Co-authoring Process; 5 A Prototype of Structured Annotations: The Bundle Editor; 5.1 Major Interface Components; 5.2 Functional Description of the Bundle Editor; 5.2.1 Basic Functionality; 5.2.2 The Four Primary Ways of Creating Bundles; 5.2.3 Working with Bundles; 5.3 Iterative Design and Low- to Medium-Fidelity Prototypes; 5.4 Implementation; 5.5 Pilot Testing; 6 Evaluation of Structured Annotations; 6.1 Methodology; 6.1.1 Two Systems; 6.1.2 Tasks; 6.1.3 Measures; 6.1.4 Experimental Design; 6.1.5 Participants; 6.1.6 Procedure; 6.1.7 Hypotheses; 6.2 Results; 6.2.1 Testing Hypotheses; 6.2.2 Other Effects; 6.2.3 Self-reported Measures; 6.2.4 Other Feedback; 6.2.5 Summary of Results; 6.3 Discussions; 6.3.1 Bundle Concept Is Intuitive; 6.3.2 Bundles Reduce Navigation Time; 6.3.3 Bundles Improve Accuracy; 6.3.4 Users Group Annotations; 6.3.5 Scalability of Bundles; 6.3.6 Cost/Benefit Tradeoff; 6.3.7 Bundles Provide a More Pleasant User Experience;
7 Conclusion and Future Work; 7.1 Conclusions; 7.2 Future Work; Bibliography; A Usability Study Questionnaire; B Usability Study Documents; C Usability Study Task Sets; C.1 Task Instructions in Black Hole Document; C.2 Task Instructions in Music and Consumer Document; C.3 Task Instructions in the Practice Document. List of Tables: 3.1 Evaluating requirements against current co-authoring systems; 6.1 Comparison of the Bundle System and the Simple System; 6.2 Speed measures across five tasks. Df = (1,16). N=20; 6.3 Accuracy measures across five tasks. Df = (1,16). N=20. List of Figures: 4.1 Annotation Model; 5.1 The Bundle Editor; 5.2 Bundle Creation; 5.3 Navigating Bundles; 5.4 Comment Display in the Document; 5.5 Text Display Scheme; 6.1 Bundle System; 6.2 Simple System; 6.3 Task Instruction Illustration; 6.4 Navigation and Decision Time; 6.5 Navigation Time Graph; 6.6 Non-task Related Annotations Graph; 6.7 Collaborative Reviewing Methods; 6.8 Collaborative Reviewing Activities; B.1 Original docB; B.2 Annotated docB; B.3 Original docM; B.4 Annotated docM; B.5 Annotated Practice Document. Acknowledgements. First I would like to thank my supervisors, Dr. Joanna McGrenere and Dr. Kellogg Booth, for their guidance and support throughout the entire thesis project process. Joanna was always very helpful in helping me organize my thoughts and ideas about the project and providing detailed and thorough feedback. Kelly always had novel ideas to solve research problems, which inspired me greatly. I would also like to thank Dr. Barry Po for the many hours he devoted to helping me with analyzing study results as well as providing valuable feedback at different stages of this research. I am also especially grateful to Dr. Steven Wolfman for his insightful comments on the project and for being the second reader of this thesis. Finally, I would like to thank many friends that helped and supported me in the project. In particular, I thank Tristram Southey for his never-ending support and encouragement. Financial support for this research was provided by NSERC, the Natural Sciences and Engineering Research Council of Canada, through its Discovery Grants program, and its Research Networks Grants program funding for NECTAR, the Network for Effective Collaboration Technology through Advanced Research. Facilities and research infrastructure were provided by the Canada Foundation for Innovation. The user study was conducted under Certificate B05-0494 that was issued by the University of British Columbia Behavioural Research Ethics Board. QIXING ZHENG, UNIVERSITY OF BRITISH COLUMBIA. To my parents - Zheng, Dashui and Li, Weiwei. To my love - Tristram Southey. Chapter 1: Introduction. Co-authoring academic papers, books, business reports, and even web pages is common practice [39]. Word processors and other tools provide some support for collaborative authoring, but not as effectively as we might desire. Much of the effort in collaborative writing is spent reviewing and editing drafts [32]. Typical workflow involves co-authors annotating drafts and passing them back and forth. Basic annotations are edits (insertions and deletions) and comments on specific parts of the document. However, co-authors also communicate at a higher level about a document, for example, by suggesting changes to the document's tone, clarifying previous annotations, or responding to other co-authors' document-related questions.
This higher-level communication is not currently well supported by collaborative authoring tools. We use the term \"co-authoring\" to refer to this entire writing-reviewing-editing cycle. While the purpose of annotations ranges from strictly personal (fine-grain highlighting to aid memory) to more communal (comments or questions for co-authors) [23], Neuwirth et al. suggest that the most important purpose of shared annotations is to enable fine-grained exchanges among co-authors of a document [30]. We present a novel framework for co-authoring that fully integrates all annotations (basic edits and comments as well as higher-level communication) into a document, and we introduce structured annotations that explicitly support workflow management within the co-authoring cycle. 1.1 Research Motivation. Let us first consider the following scenario: Jen, John, and Mary are collaborating on a conference paper using Microsoft Word 2003 (MS Word). Jen reviews the first draft. She turns on the \"Track Changes\" (Footnote 1) and \"Comment\" features in MS Word, then makes her changes and adds her comments to the document. She then saves the revised document and sends it to John and Mary as an email attachment. In the email, Jen summarizes the changes she made in the document. For instance, she advises that most of her changes are in the first four sections, and that whoever works on the document next needs to spend more time on the Results and Conclusion sections. Jen also points out an important global change she made in the document, replacing the word \"Intrusive\" with \"Obtrusive\" in some but not all instances. At the end of the email message, Jen includes some questions for John and Mary to address, such as what the title of the document should be. After John receives Jen's email, he notifies everyone that he will be the next person to edit the document. Like Jen, John reviews the annotated document in MS Word using the Track Changes and Comment functions. When he finishes, he saves the document as his revised version and sends it to Jen and Mary as an email attachment. He also describes in his email message a number of the changes he has made and recommends that Jen and Mary review particular comments first. Finally, it is Mary's turn to review the document. She too performs the same annotate-and-email steps. (Footnote 1: With the Track Changes feature turned on, each insertion, deletion, or formatting change made in the document is tracked. When reviewing each tracked change, one can either accept or reject it.) As described in the scenario, co-authors often make basic annotations using their word processors and then send the revised document to co-authors via email as attachments, pathnames in a shared file system, or URLs on the web. This is usually done asynchronously, one author at a time. The annotate-and-email sequence is repeated until the document is completed. Higher-level communication, such as summarizing changes and more general document-related discussions, often takes place outside the document, usually in the bodies of emails used to circulate the drafts. This approach is problematic because it requires co-authors to maintain collaboration artifacts in different places (word processor files and emails) with no formal association between the two. This unnecessarily complicates workflow. Valuable information can be buried and easily forgotten or misplaced [40].
The number of emails can grow rapidly when co-authors rely on email for document-related discussions such as deciding on a title. Associating the correct emails with the correct version of the document quickly becomes overwhelming. Even if the appropriate emails are located, depending on the nature of the information communicated, it can be difficult to navigate between email and document content. For example, in the above scenario Jen explains in her email why the word \"intrusive\" is replaced with \"obtrusive\" in some but not all cases. Not only does Jen need to make the basic edits, she must provide a separate comment giving her rationale for the edits as well as navigation descriptions so other co-authors can find the edits. The navigation descriptions could be general (\"whenever we describe haptic signals\") or very precise (\"one is in the second sentence of paragraph 5 on page 3\") to help other co-authors find changes. More precise descriptions require more effort from the originator, but make it easier for co-authors to locate the changes. However, no matter how precise the descriptions are in the email, co-authors still need to spend effort to find the relevant annotations. Moreover, this workload does not increase linearly as annotations are added, because co-authors often need to read annotations more than once to ensure they have found the right set described in the email. Moreover, there is no easy way to describe semantic groupings of annotations using current tools, other than plain English text descriptions, so significant communication overhead exists [40] and the co-authoring workflow suffers. 1.2 Research Contributions. The goal of our research is to use structured annotations to uniformly support all annotation activities and facilitate workflow management within a co-authoring cycle. The focus of our research is small, distributed groups of co-authors collaborating asynchronously on editing and reviewing documents embedded with a large number of annotations. The research makes the following contributions: 1. Integrated all annotations fully within the document. 2. Developed a set of eleven requirements for supporting document annotations. 3. Created a comprehensive structured annotation model that satisfies the set of requirements. 4. Designed and implemented a high-fidelity prototype incorporating the annotation model. 5. Conducted a user study and showed that structured annotations in our high-fidelity prototype reduced reviewing time and increased accuracy, compared to a system that only supports edits and comments. 1.3 Overview of the Thesis. Chapter 2 discusses related work and provides background for the research from three areas: general research in collaborative writing, research on shared annotation, and a survey of existing co-authoring tools. Chapter 3 describes the requirements gathering for document annotations that was performed through a small field investigation, which resulted in a set of eleven requirements for supporting annotations both inside and outside the document. Chapter 4 introduces document-embedded structured annotations called \"bundles,\" which are part of a unified annotation model that meets the requirements identified in Chapter 3. Chapter 5 describes a high-fidelity prototype for structured annotations called the \"Bundle Editor.\" In Chapter 6, we discuss the usability study with 20 subjects.
The study evaluated the annotation reviewing stage of co-authoring and showed that the annotation bundles in our high-fidelity prototype reduced time and increased accuracy, compared to a system that supports only edits and comments. Chapter 7 summarizes the main results in the thesis and discusses several areas for future research. Substantial portions of this thesis will be published in the 2006 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems [36]. Chapter 2: Related Work. In this chapter, we first review the literature on collaborative writing and describe related research issues. We then provide a more focused discussion of research on annotation and survey existing co-authoring systems. 2.1 Collaborative Writing. Collaborative writing, or group writing, is any writing done in collaboration with one or more other persons [14]. Common stages in collaborative writing are planning, writing, reviewing and editing [27]. Baecker et al. [2] summarize the activities carried out in each stage of collaborative writing as follows. Planning. Sketch out the main idea of the document, and gather component pieces such as references. Plan how the document will be written and define the roles of each author. Produce an outline of the document. Writing. Translate the ideas generated and outlined in the previous stage into text, and edit portions of the text while writing. Reviewing and editing. Make changes and generate comments about the written text to make the document more coherent. Ensure that there are no grammar errors in the document and that all formatting requirements have been satisfied. There are two modes of collaborative writing. One is called \"synchronous writing,\" which is a tightly-coupled collaboration among group members. In synchronous writing, collaborators write at the same time. The other mode of collaborative writing is called \"asynchronous writing,\" in which co-authors usually access and modify shared documents at different times, with only one person working on the document at a time. Asynchronous writing is often done in the document editing and reviewing stages [39]. Typically, the asynchronous writing process is sequential, so the document is passed from one co-author to another in turn. In our research, we focus on the editing and reviewing stages of asynchronous writing, which is when most collaboration occurs [32]. Collaborative writing is a broad research area with many interesting and challenging research issues, some of which are outlined below. Most of these issues fall in the general area of Computer Supported Collaborative Work (CSCW). Synchronous vs. asynchronous communications: how to effectively support synchronous and asynchronous communication and integrate them in collaboration. For example, Chandler [9] discussed some of the characteristics of asynchronous collaboration and applied them to a case study involving a team composing a mission statement. The focus of Anchored Conversations research [10] is on tightly coupled synchronous collaborative work between distributed group members. Its main contribution is to anchor text chats into documents so that collaborators can have conversations within the existing work context. Jackson and Grossman [18] described an integrated synchronous and asynchronous collaboration system that solved the traditional workgroup barriers of time and space.
Rhyne and Wolf [37] argued that the binary distinction of synchronous and asynchronous communication was unnecessary and harmful, and presented a model that included both synchronous and asynchronous collaboration software as submodels. Version control and consistency maintenance: various document control methods and consistency management algorithms, including problems of merging two versions of a document. For example, Whitehead [41] talked about two application-layer network protocols, WebDAV and DeltaV, which provide capabilities for remote collaborative authoring, metadata management, and version control for the Web. Dourish and Bellotti [12] explored application semantics for consistency management in a collaboration toolkit. Munson and Dewan [26] described a flexible object merging framework that allows definition of a merge policy based on the particular application being used and the context of the collaborative activity. Neuwirth et al. [29] introduced a software system, flexible diff, that finds and reports differences (i.e., \"diffs\") between versions of a text. Group awareness, notification, and roles: awareness and notification of changes to document content, of group progress, and of group members' current activities. Huang and Mynatt [17] identified many potential benefits of an awareness system that displays information within a small, co-located group in which the members already possess some awareness of one another's activities. Mendoza-Chapa et al. [24] presented a comparative analysis of workspace and conversational awareness support in collaborative writing systems. Neuwirth et al. [31] explored the interactions among co-authors in collaborative writing. These interactions are influenced by the presence, knowledge, and actions of other co-authors. Brush and Borning [4] introduced a lightweight group awareness technique called \"Today\" messages, which are short daily status emails that keep group members aware of work progress and reduce the need for face-to-face meetings. Group awareness is directly related to the roles members of a collaborative writing group play. Dourish [13] talked about \"different mechanisms, informational, role restrictive, and shared feedback, that current CSCW systems use to support group awareness.\" Jaeger and Prakash [19] described the requirements of role-based access control for collaborative systems. Shared annotations: exploring annotation models, interface design for annotations, and robust annotation positioning in evolving documents. Weng and Gennari [40] presented an activity-oriented annotation model that resembles the rich functionality of physical annotations for an enhanced collaborative writing process. Wojahn et al. [42] compared three types of interface design for annotations: split-screen interface, interlinear interface, and aligned interface. Brush [5] took a first step toward examining, from a user's perspective, what an annotation system should do when a document changes. A good overall reference in this area is Brush's Ph.D. dissertation, \"Annotating Digital Documents for Asynchronous Collaboration\" [3]. Workflow management: as noted in Chapter 1, there are workflow problems in the reviewing and editing stages. Workflow problems also exist in other stages of collaborative writing and in systems that support collaborative work in general. This research area focuses on finding better ways to support collaborative workflow and reduce the workload.
Allen [1] first introduced workflow in collaborative work. Woods et al. [43] described different characteristics of information overload. Phelps [35] introduced the use of wizards to better facilitate workflow in collaboration. These issues are not isolated. They usually overlap. Our research investigates these and related problems in shared annotation and workflow management. 2.2 Annotations. In the editing and reviewing stages of collaborative writing, annotations are added to a document so that co-authors can exchange ideas about the document [30]. 2.2.1 Annotation Definition. Annotations have evolved from paper-based to digital. The term \"annotation\" itself carries many different meanings. Marshall [23] classified paper-based annotations into four categories, depending on their content types (whether they are explicit or implicit to another reader) and locations (whether the annotation's anchor is a point or a range). For example, a scribbled note at the end of a paragraph is an explicit annotation that might apply to the entire paragraph (a range) or to the point between paragraphs, whereas highlighted text and circled words are each examples of implicit range annotations. Similar to Marshall's definition, Brush et al. [6] defined digital annotations as markings made on a document at a particular place, with each annotation having two components: an anchor and content. Fish et al. defined annotations to be hypertext nodes [15] that are linked to the base document. Ovsianokov et al. [34] proposed the idea of \"clumps,\" which are comments that can anchor at multiple places in the document. None of these definitions extend beyond simple editing (insert, delete, or replace) and comment annotations. 2.2.2 Annotation Model. Just as there is no standard definition for annotation, there is no standardized annotation model. In particular, there is no agreed-upon convention for structuring annotations. Previous research [8, 20, 22, 28] has identified various attributes of annotations, some of which may not apply to certain types of annotations: • Class: insert, delete, comment, question, reply, etc. • Type: text, graphics, voice, etc. • Title: highlights what the annotation is about • Context: the surrounding text where an annotation is located • ID: unique identification number for an annotation • Timestamp: when an annotation is created • Annotator/creator: identifies the creator of an annotation • Anchor: the concrete location of the portion of a document to which an annotation refers • Status: reviewing status of an annotation, e.g., new, read, accepted, or rejected • Priority: an indication of an annotation's importance. Weng and Gennari developed an eleven-attribute annotation model [40] that uses annotation status to support awareness of in-progress reviewing and revision activity. Their model has three major differences compared to previous annotation data models such as the Resource Description Framework (RDF) [25], which is used to describe a website's metadata. First, the status information of an annotation supports progress tracking and provides cross-role feedback among reviewers and authors. Second, annotations have extended activity-oriented properties such as rating, category of problems, response deadline, etc.
Finally, Weng and Gennari's model allows for rich contextual information for an annotation, including both a versioned text anchor in the document and contextual threaded discussions for the annotation. Their model is the only one we are aware of that allows annotations to be anchored to the entire document; most models assume that annotations will be anchored at a particular location within the document. Ovsianokov's [34] model is the only one that allows anchors to have multiple locations. 2.3 Existing Co-authoring Systems. Various tools support collaborative authoring. Brush [3] reviewed some of these annotation systems, focusing on issues such as online discussion in educational settings, notification strategies, and annotation re-positioning techniques in an evolving document. We review systems from the point of view of how well they support collaborative authoring workflow. 2.3.1 Research Systems. Co-authoring systems, or more specifically annotation systems, fit within the broader research area of collaborative writing. The classic collaborative writing systems such as PREP [30], Quilt [15], and SASSE [2] all support basic annotations, but do not support annotation grouping. In contrast, the recent Anchored Conversations system [10] allows text chats to be anchored into documents so that co-authors can have conversations within their work context. Although this is a real-time conversation tool rather than a shared annotation tool, it is an attempt to integrate higher-level communication within the document. 2.3.2 Three Commercial Systems. Noel and Robert studied 42 users in May 2001 [32] and found that most individuals used word processors and e-mail as their main co-authoring tools. Eighty-three percent of the subjects used Microsoft Word 2003 (MS Word). MS Word integrates edit and comment annotations into the document and assigns annotation attributes automatically. Annotations can be filtered by author or by type (formatting changes, comments, insertions, or deletions). All annotations are listed in a reviewing pane below the document pane in the main window. Annotations are ordered by their positions in the document. MS Word incorporates edits into the document once they are accepted by one co-author, meaning other co-authors might not know of these edits once the document is saved. Word has a Web Discussion function for collaboration, but Cadiz et al. note that it is limited in terms of where annotations can be anchored [7]. In contrast to Word, annotations in Adobe Acrobat Professional 7.0 (Acrobat) do not alter the original document because they are not incorporated into the document; incorporation must be done manually after reading the annotations. Status indicators and more sophisticated filtering by annotation type, reviewer, or status are provided. The reviewing pane in Acrobat uses a threaded display, not simple document order, so replies to an annotation are listed indented and below the original annotation. Recently, numerous web-based collaborative authoring tools have been developed [3]. XMetal Reviewer, a new product by Blast Radius Inc. [44], is such a system. Designed for reviewing XML documents, it combines many of the advantages of MS Word and Acrobat. Basic annotations are integrated within the document, and global comments appear at the top of the document. Insertions and deletions can be incorporated into the document rather than kept as annotations, but this can always be reversed.
This makes annotations persistent, unlike in MS Word where accepted changes lose their identities as annotations once they are accepted. XMetal Reviewer facilitates discussion by letting co-authors reply to each other's annotations in real time and in context to reduce miscommunication. Annotations can be filtered by type, author, or status. XMetal is server-based to support collaboration among a large group of people, which could be a drawback for small groups that want a lightweight solution. In all three systems, annotations can only be grouped using system-defined filters such as filter-by-author or filter-by-status. Because comments about a specific aspect of a document may be scattered throughout the document, it would be useful to be able to gather them together. In a similar vein, there is only a partial record of the co-authors' annotating processes. Some systems keep track of editing sessions but do not otherwise capture ordering or relationships between individual annotations. This limitation was identified by Weng and Gennari [40], who noted that \"[a]nnotations should be activity oriented.\" Chapter 3: Requirements-Gathering through Field Investigation. In this chapter, we describe the requirements gathering phase of our research on document annotation. A small field investigation was conducted to better understand the nature of annotation activities during co-authoring and to validate our own experiences with co-authoring. Eleven design requirements for supporting document annotations were developed based on our literature review and field investigation. We evaluated three current co-authoring systems against these requirements. 3.1 A Small Field Investigation. We analyzed the email exchanges and document attachments of three small groups of academics (3 to 5 people in each group). We examined both the email content and the annotations in the attachments to find the relationships between the two. Each group had co-authored a conference paper, approximately 8 pages in length. They had all finished the conference paper by the time we sent out an email request for collecting collaborative writing data. After consulting with their group members, one author from each writing group voluntarily forwarded all relevant email exchanges including document attachments to the author of this thesis (Footnote 1). We analyzed a total of 158 email exchanges across the three groups (52 emails per group on average). (Footnote 1: The request was made after the groups had completed their co-authoring activities, so some messages may not have been captured because not all messages were sent to every co-author, and even those that were may not have been saved by the co-author who forwarded the messages. We believe we obtained most of the relevant messages. The difficulty inherent in this collection process underscores one of our assumptions, which is the unreliability of email as an archival record of collaborative annotation activity.) While many of the emails included document attachments (Microsoft Word or LaTeX files), our analysis focused on the text content of the email and its relationship to the document. Below we categorize the most frequently occurring content, and provide the percentage of the 158 emails to which each category and sub-category apply. Note that these are not exclusive categories. Most emails fall into more than one category, so the percentages do not sum to 100. To-do item(s) describe what remains to be done, or should be done next (89%). The ordering of the items implicitly prioritizes the work, and sometimes co-authors give explicit direction on priorities. These often include collaborators' available times to work on the paper. Summaries of edits that a co-author has made to the document (92%) often appear together with to-do lists. Co-authors often summarize edits about issues that arise at multiple places in the document (78%), such as global word replacements or spelling changes throughout a document. Discussions about the document often include parts of the text copied into an email to provide context (64%). These include two subcategories: questions are sometimes directed at a particular co-author (53%); general comments (41%) pertain to the entire document (comments on the tone of the document or suggestions about document structure). Comments-on-comments are comments about one or more previous comments. These most often concern comments that have not yet been addressed (31%) or advice to co-authors on how to process the referred-to comments (34%). Information expressed as text embedded in email constitutes what we referred to at the outset of this thesis as \"higher-level communication.\" Co-authors devote a lot of effort to describing how annotations relate to each other because text is inefficient for expressing annotation location, type, or context, especially when an issue arises at multiple places in the document. Currently, co-authors must describe associated annotations by writing comments (either internal to the document or externally in email). There is no way to directly annotate multiple annotations using electronic tools. Recognizing this limitation, we developed a list of requirements to build an annotation model that would unify all document-related communication by adding structure to annotations. 3.2 Requirements for Structured Annotations. We derived eleven design requirements for annotation systems that reflect co-authoring workflow. The first seven requirements are based on our literature review of annotation models and on current annotation systems. The last four requirements address communication that currently occurs outside the document, which was identified in our field investigation. R1. Support basic annotations such as edits and comments with specific anchors.
This allows co-authors to exchange fine-grained information within the document context. All the current annotation systems we examined support this requirement. R2. Provide an easy way to incorporate changes specified in annotations into the document. This saves co-authors the effort required to manually incorporate changes after reading the annotations. R3. Preserve the original annotations in case co-authors want to refer back to them later. This avoids the loss of annotation history that happens in many systems when the document is saved after changes are accepted. R4. Support both a separate annotation list view as well as the ability to view annotations integrated within a document. Most current annotation systems support dual views. R5. Monitor annotation reviewing status to help co-authors keep track of the reviewing process. This allows co-authors to quickly identify an annotation's reviewing progress and apply proper reviewing effort to it accordingly. R6. Support document-related discussion with annotation reply functions and threaded display of annotations. This encourages co-authors to reply to each other's comments in the document and also helps them see the relationships between the reply annotation and earlier annotations. R7. Support flexible and uniform filtering. This allows co-authors to review more focused and smaller sets of annotations by treating annotations as objects and annotation attributes as fields, so uniform filtering according to one or more attributes can be applied to any set of annotations to retrieve a smaller set of annotations. R8. Allow annotations to be directed to specific co-authors. We found from the field investigation that co-authors often direct document-related comments, especially questions, to specific co-authors in email messages. R9. Support general comments that anchor to the entire document, or to single or disconnected sets of points or ranges within a document. Co-authors often include general comments in emails because there is no designated location in the document for general comments. R10. Allow users to prioritize annotations. Quite often, in an email message, co-authors advise other co-authors which annotations should be reviewed first or given priority. R11. Support annotation of groups of annotations. The most frequently occurring items in the email content we collected were summaries of edits, to-do lists, and comments-on-comments, all of which are examples of annotations of groups of annotations. We evaluated the three systems discussed previously in the Related Work chapter (Microsoft Word 2003, Adobe Acrobat Professional 7.0, and XMetal
Reviewer) against these requirements. The results are summarized in Table 3.1, which suggests that current tools fail to support some of the requirements. Table 3.1: Evaluating requirements against current co-authoring systems (MS Word / Acrobat / XMetal): R1 basic anchors: Yes / Yes / Yes; R2 incorporated: Limited / Limited / Yes; R3 reversible edits: Limited / No / Yes; R4 dual views: Yes / Yes / Yes; R5 status: Limited / Yes / Yes; R6 discussions: Yes / Yes / Yes; R7 filtering: OR only / OR only / OR only; R8 specify receiver(s): No / No / Yes; R9 general comments: No / No / Yes; R10 prioritization: No / Limited / No; R11 grouping: No / No / No. Of the three systems, XMetal best meets the requirements. However, it and the other systems fail to support two important requirements, which relate to the workflow overhead described in Chapter 1. None of the systems supports annotating groups of annotations (R11). Email categories such as summaries of edits and comments-on-comments identified in the field investigation are instances of grouped annotations. For example, summaries of edits are basically comments on a group of edits in the document. Using current co-authoring systems, there is no way to annotate the edits directly, so co-authors have to rely on additional media to exchange this information. The second requirement that all three systems fail to fully support is the need for flexible filtering (R7). All three systems only support OR filtering, which means co-authors can only filter document-embedded annotations on one attribute (e.g., author or status) at a time. Flexible filtering should at least allow co-authors to do AND filtering, such as finding Jen's (creator attribute) unread (status attribute) comments (class attribute), in just one filter operation. Chapter 4: Structured Annotation Model. In this chapter, we present a comprehensive annotation model that introduces both mandatory and optional attributes of an annotation. We then discuss existing and new types of annotations applying the model. Finally, we link structured annotations with workflow. This forms the theoretical framework underlying the high-fidelity prototype described in Chapter 5. 4.1 Model Elements. Using the requirements listed in Section 3.2 as a guide, we constructed a comprehensive model of annotations that encompasses the behaviors we observed in the field investigation. In the model, every annotation has a set of attributes. Depending on the purpose of the annotation, some of the attributes can be empty. Mandatory attributes include the creator of the annotation, a timestamp, a reviewing status (unread/read and accepted/rejected), and an anchor/reference (the annotation's location and range relative to the document content or related to other annotations). Multiple non-contiguous ranges are permitted as the anchor for a single annotation. As a special case, the anchor can be the entire document. An annotation can also refer to one or more previous annotations, which is indicated by the optional substructure attribute discussed later in this section. All the mandatory attributes have default values. For example, the default values for creator and timestamp are the current machine name and the current time.
A newly created annotation has a default status of \"unread,\" and will become \"read\" when the annotation is actively selected by a co-author. The default value for the anchor is calculated depending on where the annotation is placed. It can be a single point in the document, a range within the document, or a set of points and ranges. Optional attributes include the name of the annotation (a short text string), a list of recipients (those co-authors who can view the annotation; see Footnote 1), a free-form text note, a modification (insertion, deletion, or replacement of text), a priority, and substructure (a list of other annotations to which the annotation refers, in effect providing additional \"anchors\" to these annotations; see Footnote 2). Each annotation must have at least one of the name, note, modification, or substructure attributes in addition to having the four mandatory attributes. The name, note, modification, and substructure attributes are null (not present) by default. The default value for recipients is the list of authors (see Footnote 3). Three key annotation attributes in our model are anchor, note, and substructure. These make integrating high-level communications into documents possible, hopefully eliminating the need to use auxiliary channels such as email. We classify annotations into two categories: single annotations that have no substructure, and bundled annotations that have substructure. The latter are called \"bundles.\" The most distinctive feature of our annotation model is the addition of structure to annotations. The anchor for an annotation captures how it relates to the document, and the substructure captures how it relates to other annotations in the document. (Footnote 1: We leave for future work an exploration of the mechanisms for specifying which co-authors have access to annotations and how this might affect workflow. Footnote 2: These could include links to \"permanent\" external objects using URLs, or to attachments associated with the document itself, but the current model employs only simple links to previous annotations in the document itself. Footnote 3: Again, future work could extend this attribute from simple recipient lists to more generalized lists of recipients that include primary recipients, secondary recipients, and anonymous recipients, mimicking the To:, Cc:, and Bcc: fields in email, and permission or capabilities to reply, modify, or delete the annotation.) Figure 4.1: A Venn diagram illustrating how different types of annotations fit into the annotation model. We can annotate a group of annotations by creating a bundle that has a set of previous annotations as its substructure. A bundle refers to each of the annotations in its substructure and may have a note attached with it that further defines the relationship between the annotation and its substructure. Our definition distinguishes bundles as those annotations that have non-null substructure, but we will sometimes use the term \"bundle\" to refer generically to any type of annotation in our model. This will be true when we introduce the Bundle Editor, which handles both single annotations and bundled annotations. 4.2 Identifying Types of Annotations. Within the annotation model there are some special cases that correspond to common annotation types in traditional systems. A simple edit has an anchor to a range of text and the modification attribute.
A comment has only the note attribute with an anchor into specific document content, and a general comment is a comment whose anchor is the entire document. These are all single annotations (Figure 4.1). A number of interesting new types of annotation also arise from our model. A meta-comment is a comment that has substructure, indicating the list of annotations to which it refers, and these in turn may have anchors into the document. Meta-comments can have their own document anchors (in which case they are not \"pure\" meta-comments). The nesting of substructure can have as many levels as desired, leading to the notion of inherited anchors by which a meta-comment (or any annotation) is recursively associated with the anchors of its substructures (Footnote 4: Annotations can only refer to previous annotations, so we do not have to worry about infinite recursion because the substructure relationship is acyclic.). A reply is a special meta-comment that refers to just a single previous annotation. Another special type of bundle is a worklist. An example would be a bundle having the name \"Check spelling\" with comment text that says \"I am not sure how we spell some of the people's names in our report. Please make sure I have them right.\" The recipient list would indicate who is to do the spell-check, and the anchor would indicate all of the places in the document where the names in question appear. 4.3 Linking Bundle Creation to the Co-authoring Process. The spell-checking bundle just described could be created manually, but we envision it being created automatically as a side effect of running a document processor's spell-checking command. Realizing that the misspelled words are all names of people, a user could indicate that the selected words form a new bundle by clicking on a bundle button in the spell checker's dialogue box that creates a new bundle whose substructure is the set of edits made by the spell checker. The bundle would recursively have a multi-location anchor. The user could manually add name and comment attributes by typing text into the appropriate fields in the spell-check dialogue box. Recipients would be selected from a list of co-authors. In a similar manner, a document processor's \"find\" command could produce a bundle of all instances of a pattern, and the \"replace\" command could produce a bundle of all of its replacement edits.
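To make the model just described concrete, the following is a minimal sketch of an annotation/bundle data structure with the mandatory and optional attributes, inherited anchors, and a worklist-style bundle. It is an illustration only, not the thesis's implementation; the class and field names (Annotation, anchors, substructure, inherited_anchors, and so on) are assumptions introduced here.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Annotation:
    # Mandatory attributes; all have defaults, as in the model.
    creator: str = "current-machine"
    timestamp: datetime = field(default_factory=datetime.now)
    status: str = "unread"                               # unread/read, accepted/rejected
    anchors: List[tuple] = field(default_factory=list)   # (start, end) ranges; empty = whole document
    # Optional attributes; at least one must be present.
    name: Optional[str] = None
    note: Optional[str] = None
    modification: Optional[str] = None                   # insertion, deletion, or replacement text
    priority: Optional[int] = None
    recipients: Optional[List[str]] = None                # defaults to the list of co-authors
    substructure: List["Annotation"] = field(default_factory=list)  # non-empty means this is a bundle

    def is_bundle(self) -> bool:
        return bool(self.substructure)

    def inherited_anchors(self) -> List[tuple]:
        # Own anchors plus, recursively, the anchors of the substructure.
        # Safe because substructure may only reference earlier annotations (acyclic).
        result = list(self.anchors)
        for child in self.substructure:
            result.extend(child.inherited_anchors())
        return result

# A worklist-style bundle grouping two spelling edits and directed to specific recipients.
edit1 = Annotation(creator="Jen", anchors=[(120, 128)], modification="Smith")
edit2 = Annotation(creator="Jen", anchors=[(480, 488)], modification="Smith")
spelling = Annotation(creator="Jen", name="Check spelling",
                      note="Please make sure the names are right.",
                      recipients=["John", "Mary"], substructure=[edit1, edit2])
assert spelling.is_bundle() and len(spelling.inherited_anchors()) == 2
```

In this sketch, a plain edit or comment is simply an Annotation with an empty substructure, while a bundle is any Annotation whose substructure is non-empty, mirroring the single/bundled distinction in the model.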
Moreover, any annotation could be assigned to multiple categories because the bundling substructure has no restrictions other than the require-ment that it be acyclic. Adding optional user-defined attributes may still be necessary and would be an easy extension to our model. They would be similar to the user-defined attributes available in qualitative and video analysis systems such as Noldus Ob-server [33], or the common subclassing (derivation) operation in object-oriented languages where new types extend old types by adding additional attributes (with user-defined default values) and behavior. 25 Chapter 5 A Prototype of Structured Annotations: The Bundle Editor In this chapter, we present the Bundle Editor, which is a high-fidelity proto-type implementing the structured annotation model described in the previous chapter. We first focus on the major interface components in the Bundle Editor and then describe its functionality. Finally, we step back to briefly describe the iterative design and prototyping, implementation and pilot testing for the system to motivate the design choices that were made. 5.1 Major Interface Components Based on our annotation model, we implemented a high-fidelity prototype called the \"Bundle Editor,\" which has a number of functions designed to support structured annotations (Figure 5.1). The main component of the system is a two-pane window with an upper d o c u m e n t pane and a lower r e v i e w i n g p a n e (similar to MS Word, Acrobat, and XMetal). The document pane displays the annotated document. The reviewing pane is a multi-tabbed pane where each tab pane shows different annotation information. There are two permanent, default tabs at the bottom of the reviewing pane. The first tab is \"All Annotations,\" which contains all single annotations (inserts, deletes, replacements, and comments). General comments (i.e., com-Chapter 5. A Prototype of Structured Annotations: The Bundle Editor 26 jtSuccDssfufh/loaded. Title:-\"Please Hold\" Not AlwaysMusicTo.Your.EarsfUmversitv Of. Cincinnati Researcher Finds Nearly all 01 us:knowwhat it watts like.to be put on \" o i vho iamMSJanus iea lho j ^ customer service*-number.-and.you.can'expect Urhear.at leastajew.bare.ofiinmpjdje^^^ ques*jonis;.Do you;hang up:or.do you^keep holding^ That may depend onyiHir-gendere-and.whattype ofmusic:is;playing; according to;research.repprted^ attheiSOcie^ofiConsurns^psychology.confBrence.iKsllaris/whQ lias'studiedthe.effects;Qf.'rnusic;on consumers foryears.iearnedwithSsgma'Research ManagementGroup*of Cincinnati^evaluatethe-effects'or,,on-holu!miisic\".fbr accompany that operatesi&n-a^ustomerservlce line. During'a^reviou^s'tudy/^ types-of of^aki^usitiriusicat hold with7H''ofthexomnany's\\cii^ rockand the; company's.xurreni format oftadult;alt^ inc^ude^'ihdlviduaiconsumers/'smair busiriess'and Iarge^business;s«cfei's*egrri3iitsv Participantswere-askedio imagine calling•aicustomer.assistanee line .and being:placed on hold:They were:then exposed to\\mus^&aU4ddon-hold.music? via headsets:and-asked to.estimate, how long.it:pfayed;OtherLre_aj:uonsiap^ bythe researchers:; Service-p'roYiders. of course, dontwant.you to have.to wait on hold, but it you etodid. they wantitto.be a pleasant experience, for all. of-you.- However. Kellaris'.conclusions may. ho Wsome distressing •news:for companies;'No matter, whatmusicof.the.. four-types.of musicwasplayed. 
the time spent \"on hold'.was generally overesttmated.--The;actual.-waitm* in the stuciy.was .•fi:mtmit&a.-htri:th^ ® -a >E a i s mw\\m |GljJ 14110:0,2 • General Comment: Please review allthe annota... [Unread]: |?I|J 1*10:37 General Comment Some or my comments havens lUnread]' K I '«47,4e= Replaced: was with Is ILJniead] •X?:J...12:48:16 'Replaced: on4iolri music with musical hold.\" [Unread]. •-r?. Jen Lv 12:49:05.' Inserted: to? lUnrradl' p T l John | - J 13:36:34 Comment: Do we. need the.eleyator. music analogy?'' [Unread] (a) M Msry '14:06:03: 'Ihihn; 13:55:49 auto* 15:40 24: \"music hold*' vsV'Vn-hold music'VNqta; I efta... Comparative and Superlative-Note: I've cone;.. VerbTense Corrections V Comments for dohn-Note: Please reply ASAP! Spelling Erirts^ Nqte: I have corrected the sp... ail other annotations [Unread] [Unread]. [Unread]' [Unread] |0nread];. [Unread] .^-AirAnnotations |;f All Bundles.\" (b) Figure 5.1: (a) The Bundle Editor with document and reviewing panes. Anno-tations are embedded in the document pane and are color coded according to. author. The reviewing pane is a multi-tabbed pane where each tab pane shows different annotation information. The first tab in the reviewing pane is \"All Annotations\", which contains inserts, deletes, replacements, and comments in the document order. General comments are always placed at the top of the list, (b) The second tab in the reviewing pane, called \"All Bundles\", lists all previously created bundles. The last bundle listed in the tab is named \"all other annotations\", and is automatically maintained by the system. Chapter 5. A Prototype of Structured Annotations: The Bundle Editor 27 ments that pertain to the entire document) appear at the top of the list, and the rest of the annotations appear in the order in which their anchors occur in the document. The second permanent tab is \"All Bundles.\" It lists all bundles, i.e., the annotations that have substructure. The last bundle listed in this tab is named \"all other annotations.\" It is maintained automatically by the system and contains all the single annotations that do not belong to any bundle. 5.2 Functional Description of the Bundle Editor We provide mechanisms for grouping annotations into bundles, annotating pre-vious annotations, filtering to select annotations, and sorting annotations. We describe each of these functionalities, with links to the specific annotation re-quirements identified in Section 3.1 shown in parentheses. We then present different ways of creating a bundle and describe how to interact with bundles using the Bundle Editor. 5.2.1 Bas ic Funct ional i ty The Bundle Editor has all of the basic functionality that a typical document editor has, such as insert, delete, and comment ( R l : basic anchors). It also has specific functions to create a bundle ( R l l : grouping), as shown in Figure 5.2. Bundles are stored with the document and are linked to various places in the document or to other annotations. Co-authors can add and remove annotations to and from bundles. Any annotation can be in more than one bundle and bundles can be in other bundles. For example, in Figure 5.3, there are two bundles within the \"Verb Tense Corrections\" bundle. One is called \"Jen's Verb Tense Corrections,\" and another is called \"Mary's Verb Tense Corrections.\" Co-authors can annotate a group of annotations by including a note in the appropriate bundle and directing the bundle to a particular set of co-authors (R8: specify receiver(s)). 
For instance, in Figure 5.2, Mary creates the bundle "Comparative and Superlative" and attaches a note explaining what changes she made on comparative and superlative forms. The bundle is directed to Jen and John.

Figure 5.2: A new bundle called "Comparative and Superlative" is being created. A co-author, in this case "Mary," has added relevant annotations into the bundle using the "Bundle Add" button. She also writes a note with the bundle and specifies the receivers to be Jen and John. A legend for different interface components is also included in the figure: the "New Bundle" button creates a new bundle, "Bundle Add" adds annotations to a bundle, "Bundle Minus" removes annotations from a bundle, and "Bundle Decompose" deletes a bundle and disassociates its substructure. The comment icon allows users to insert a comment with a specified anchor or a general comment. Both "Accept" and "Reject" can be used to accept or reject one or more selected annotations.
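The Figure 5.2 scenario could be expressed in terms of the sketched classes above; the anchor offsets and note wording here are invented for illustration.

    # Illustrative only: hypothetical anchors and text, reusing the sketch classes.
    r1 = Annotation("Mary", "replace", anchor=(2866, 2874), text="shortest -> shorter")
    r2 = Annotation("Mary", "replace", anchor=(3010, 3017), text="longest -> longer")
    r3 = Annotation("Mary", "replace", anchor=(3105, 3109), text="more -> most")

    comp_sup = Bundle("Mary", name="Comparative and Superlative",
                      note="I've corrected the use of the comparative form where the "
                           "superlative form should be used, and vice versa. Please review.",
                      receivers=("Jen", "John"))
    for ann in (r1, r2, r3):
        comp_sup.add(ann)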
Figure 5.3: The user highlights the "Jen's Verb Tense Corrections" bundle in the reviewing pane, which highlights all of its sub-annotations' recursive anchors in the document pane. The "Verb Tense Corrections" bundle contains two sub-bundles. One was created by Jen and one was created by Mary.

The Bundle Editor has functions for replying to annotations, which encourages discussion (R6: discussions), and it allows co-authors to make general comments to each other without leaving the document (R9: general comments).

The filtering function in the Bundle Editor is more flexible than the filtering functions in existing tools (R7: filtering). It allows co-authors to select annotations based on multiple attributes such as "all of Jennifer's and Brad's comments." The filter result is a new bundle that is a subset of the annotations in the current tab in the reviewing pane to which the filter was applied. The result can either replace the bundle in the reviewing pane or appear in a new tab in the reviewing pane. (For permanent tabs, filters always produce their results in a new tab.) The sort function in the system allows co-authors to sort annotations within a tab of the reviewing pane according to time, location in the document, author, recipient, or any other user-defined or built-in attribute.

Reviewing progress can be tracked by assigning a status to individual annotations (R5: status). Depending on the co-authors' reviewing activities, the system assigns annotation status automatically, so an "unread" annotation becomes "read" by a co-author when it has been selected. Co-authors can always override a system-assigned status by right-clicking on the annotation in either the document pane or the reviewing pane. When a bundle's status is set, users can choose whether the status will propagate to all the annotations in its substructure.

5.2.2 The Four Primary Ways of Creating Bundles

Bundles can be created manually while annotating the document. For example, if Jennifer finds recurring problems in a document, she can create a bundle by explicitly selecting all relevant annotations so she can deal with them all at once.

Temporary or working bundles are created by filtering and other operations. They can be saved as permanent bundles with a single click.
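A working bundle of this kind can be thought of as the result of a predicate evaluated over annotation attributes, with the result itself being a bundle that can be shown in a new tab, saved, or reused later. A minimal sketch, reusing the hypothetical classes from the earlier sketch (function and parameter names are assumptions, not the Bundle Editor's actual API):

    # Illustrative multi-attribute filter and sort; not the thesis implementation.
    def filter_annotations(annotations, authors=None, kinds=None, creator="system"):
        """Return a working Bundle of the annotations matching every given attribute."""
        result = Bundle(creator, name="Filter result")
        for ann in annotations:
            if authors is not None and ann.author not in authors:
                continue
            if kinds is not None and ann.kind not in kinds:
                continue
            result.add(ann)
        return result

    def sort_annotations(annotations, attribute="author"):
        """Sort annotations within a tab by a built-in or user-defined attribute."""
        return sorted(annotations, key=lambda a: getattr(a, attribute))

    # e.g. a working bundle of all of Brad's comments, kept for later reviewing:
    # brads_comments = filter_annotations(all_annotations, authors={"Brad"}, kinds={"comment"})
    # brads_comments.name = "Brad's comments"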
For example, Jennifer might want to look at the comments made by Brad. She can create a working bundle by filtering on "Brad" and "comment" and save the result as a bundle for later reviewing.

Working bundles can also be created by normal editing commands, such as "Find/Replace." Brad may want to replace all occurrences of "Jennifer" with "Angelina" and then save the results as a bundle so that other co-authors can manipulate all of the annotations in a single operation, such as setting the status to "reject" or changing the replacement field to some other name should he later change his mind.

A bundle is created automatically at the end of every reviewing session. Once Jennifer finishes her session, all of her new annotations from that session form a bundle that other co-authors can review, unless she elects not to save it. This mechanism generalizes the "Track Changes" functionality in current editors and provides a uniform way to capture reviewing history.

A flexible ability to group or bundle annotations is lacking in other co-authoring systems. Bundles provide explicit representations of user-defined workflow, and they integrate normal editing with other annotation activity using a range of implicit to explicit bundle creation.

5.2.3 Working with Bundles

Various techniques help users maintain a mental model of a document and its annotations. In order to capture the structure of annotations, we employ a threaded list of annotations in the reviewing panel (R6: discussions). Users can expand or collapse any bundle to view or hide the annotations belonging to it. Once a bundle is expanded (i.e., its substructure is showing), "Next" and "Previous" buttons can be used to traverse the annotations within the bundle. A right-click on any annotation within the document or the reviewing panel gives users the option to view the bundles to which it belongs (R11: grouping). Users can select multiple bundles at a time and perform operations (such as setting the reviewing status) on all of the selected annotations. If a bundle is selected, the anchors for all of its sub-annotations will be highlighted in the document (shown in Figure 5.3). Users can have several bundles active at one time, each in separate tabs of the reviewing pane, and switch between them. Each tab can be sorted according to author, date, document order, or various other attributes. Co-authors can prioritize annotations in a bundle using drag-and-drop techniques (R10: prioritization). For example, users can move a bundle up and down in the list of annotations in the reviewing pane. Annotations can also be moved between bundles.

5.3 Iterative Design and Low- to Medium-Fidelity Prototypes

Before we formalized our design in the high-fidelity Bundle Editor prototype, we iterated through a series of paper prototypes and medium-fidelity prototypes using Microsoft PowerPoint. Many design alternatives center around two interface components: the way comments are displayed in the document and the way the single and bundled annotations are organized and displayed in the reviewing pane.

For displaying comments in the document, one design alternative was to use a comment icon similar to the note icon in Acrobat, where the icon is anchored at a user-specified location. However, using icons does not allow us to encode structural information.
For example, if a comment explains why a certain edit was made (i.e., a meta-comment), there is no easy way to visually link the comment icon with the edit. Also, the icon design only allows comments to be anchored at a point and not on a range of text. Several rounds of brainstorming and feedback from potential users resulted in a design that displays the comment's location and range as a colored background on the text. If the comment is anchored at a single point then it is displayed as a triangle; otherwise it is displayed as a colored background spanning the anchored range of text.

Figure 6.1: The Bundle System used in the usability study. It was created by modifying the Bundle Editor.
The "New Bundle", "Bundle Add", "Bundle Minus", and "Bundle Decompose" buttons were removed. Participants were not able to create new annotations or manually exit the system. There was a task control button (e.g., the "End Task" button shown in the figure) located at the bottom right of the screen for participants to start or end a task during the experiment.

Figure 6.2: The Simple System used in the usability study. Compared to the Bundle System's interface, the Simple System only had a single-pane reviewing pane to display document-embedded annotations. General comments and higher-level annotations were displayed in a separate simulated email window, to the right of the system interface. Similar to the Bundle System, there was a task control button for the Simple System. The two systems were otherwise the same.
Comparison of the two systems:
• Interface components: document panel with a multi-tabbed reviewing panel (Bundle System) vs. document panel with a single-pane reviewing panel (Simple System).
• Basic annotations (excluding general comments): embedded in the document and listed or grouped in the reviewing panel (Bundle System) vs. embedded in the document and listed in the reviewing panel (Simple System).
• General comments: listed at the top of the "All Annotations" tab in the reviewing panel (Bundle System) vs. shown in the simulated email window (Simple System).
• Groups of related annotations: listed in the "All Bundles" tab in the reviewing panel (Bundle System) vs. shown in the simulated email window (Simple System).
• Filtering functions: AND and OR filtering on all or a subset of annotations (Bundle System) vs. OR filtering on all annotations (Simple System).
Table 6.1: Comparison of the Bundle System and the Simple System

6.1.2 Tasks

There were six representative tasks to complete for each document, which were designed to gauge the strengths and weaknesses of the Bundle System. The annotations for all tasks were present from the outset. We controlled for the number, type, and authorship of annotations in the documents: 52 basic annotations (8 insertions, 5 deletions, 25 replacements, and 14 comments); Jennifer, John, and Mary made 15, 15, and 25 annotations, respectively. In addition, we controlled for reviewing difficulty with respect to the amount of context participants needed to review in order to accept/reject an annotation; 36 annotations could be processed by reading a single sentence, 10 annotations required reading two sentences, and 6 required reading a full paragraph. Both documents with annotations embedded are included in Appendix B. Among the 52 annotations, 32 were related to the selected tasks, while 20 served as "distractors" as the participants were performing their tasks.

Figure 6.3: Task 5 instruction in docM. The task background explains how the phrases "musical hold" and "on-hold music" should be used in the document, followed by the specific instructions for the task.

In this section, we describe each task in terms of instructions, presentation, relevant annotations, and expectations. For each task, a task instruction screen was shown first. Some tasks also had task background to inform or refresh participants on basic English grammar or specific words used in the document, as shown in Figure 6.3. For each document, the same task instructions were given for both the Simple System and the Bundle System. Because the documents differed in content, some tasks were adjusted slightly to fit the document content, but always with the goal of making them equivalent.
The full set of task instructions shown to the participants is included as Appendix C.

Task 1: Location Pointers.
Instructions.
• docB: review annotations on quantifying words (e.g., at least, at most).
• docM: review annotations on comparative and superlative forms of adjectives.
Presentation.
• Bundle System: a bundle with a note attached containing all relevant annotations.
• Simple System: an email message containing location pointers for relevant annotations.
Relevant annotations. 5 task-relevant annotations from 1 co-author distributed in each document. 3 were designed to be accepted and 2 were designed to be rejected according to the document context.
Expectations. Better performance for both speed and accuracy in the Bundle System.

Task 2: Localized Annotations.
Instructions. Review all annotations in a specified paragraph.
Presentation.
• Bundle System: a general comment describes which paragraph to review. No relevant bundle created.
• Simple System: an email message describes which paragraph to review.
Relevant annotations. 5 task-relevant localized annotations from multiple co-authors. 4 were designed to be accepted and 1 was designed to be rejected according to the document context.
Expectations. Similar performance in both systems.

Task 3: Spelling Edits.
Instructions. Review spelling edits in the document.
Presentation.
• Bundle System: a bundle with a note attached containing all relevant annotations.
• Simple System: an email message describing relevant annotations.
Relevant annotations. 6 task-relevant annotations from 1 co-author distributed in the document. 4 were designed to be accepted and 2 were designed to be rejected according to the document context.
Expectations. Better performance for speed in the Bundle System. Similar performance for accuracy in both systems because spelling edits are easy to review.

Task 4: Multiple Co-authors' Annotations.
Instructions. Review all verb tense edits in the document.
Presentation.
• Bundle System: a bundle with two bundles (created by 2 co-authors) in its substructure. Each sub-bundle contains task-relevant annotations and comments.
• Simple System: two email messages (from 2 co-authors) are shown (one is a reply to the other) describing the relevant annotations.
Relevant annotations. 8 task-relevant annotations from 2 co-authors (4 annotations from each co-author) distributed in the document. 6 were designed to be accepted and 2 were designed to be rejected according to the document context.
Expectations. Better performance for both speed and accuracy in the Bundle System.

Task 5: Global Replacements.
Instructions.
• docB: review all the replacements between "grow" and "growth."
• docM: review all the replacements between "on-hold music" and "musical hold."
Presentation. A writing tip explaining how to use each word in the document was provided before participants started the task.
• Bundle System: a bundle with a note attached containing relevant annotations.
• Simple System: an email message describing the relevant annotations.
Relevant annotations. 5 task-relevant annotations from 1 co-author distributed in the document. 3 were designed to be accepted and 2 were designed to be rejected according to the document context.
Expectations. Better performance for speed in the Bundle System.
Similar performance for accuracy in both systems because these replacement edits are easy to identify.

Task 6: Unaddressed Comments.
Instructions. Review a co-author's comments that have not been accepted or rejected.
Presentation.
• Bundle System: a general comment describes which co-author's comments to review. No relevant bundle created.
• Simple System: an email message describes which co-author's comments to review.
Relevant annotations. 3 task-relevant comments from 1 co-author distributed in the document. 2 were designed to be accepted and 1 was designed to be rejected according to the document context.
Expectations. Filtering functions are likely to be used in both systems. Better performance for both speed and accuracy in the Bundle System because of multi-attribute filtering.

Each task discussed above was representative of tasks we saw in our field investigation, where authors connected higher-level communication in email with lower-level document-embedded annotations. For example, tasks 1, 3, and 5 all represent the task of reviewing summaries of edits. Task 2 represents reviewing a general comment. Task 4 represents reviewing to-do items, and task 6 represents reviewing comments-on-comments. The difficulty in each task lay primarily in finding and navigating to the right set of annotations to review, which was our main focus. Some individual annotations in our study were fairly cosmetic (e.g., word replacements, spelling edits) because subjects' understanding of the document was limited (they were not authors). This minimized individual differences in reviewing skills and comprehension by making it straightforward to decide accept/reject, so we could measure time spent navigating to the relevant changes. This is discussed in the next section.

6.1.3 Measures

Our main dependent variables were speed and accuracy. Speed consisted of total completion time per task, which was the aggregate of navigation time and decision time. Navigation time was calculated by adding three types of time segments: initial navigation time, between selection navigation time, and final navigation time. See Figure 6.4 for details on how these time segments were measured. Decision time was calculated by adding the time segments between selecting an annotation and accepting or rejecting the annotation. During each task, a participant could be either navigating in the annotated document or deciding whether to accept or reject a particular annotation.

Accuracy was assessed with three measures: the number of task-relevant annotations reviewed (accepted/rejected), the number of task-relevant annotations reviewed correctly, and the number of non-task-relevant annotations reviewed.

Figure 6.4: An example of how the navigation and decision time were measured. Total task completion time is the sum of navigation time and decision time. Navigation time is composed of initial navigation time, between annotation navigation time, and final navigation time. All time from selecting an annotation to accepting or rejecting the same annotation is measured as decision time.
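A minimal sketch of how this decomposition could be computed from a timestamped event log is shown below; the event names ("start", "select", "accept", "reject", "end") are assumptions for illustration and do not reflect the study's actual logging format.

    # Illustrative decomposition of total task time into navigation and decision time.
    def decompose_times(events):
        """events: chronological list of (timestamp_in_seconds, action) tuples,
        beginning with the "start task" event and ending with the "end task" event."""
        navigation = decision = 0.0
        for (t0, prev_action), (t1, action) in zip(events, events[1:]):
            segment = t1 - t0
            # Time from selecting an annotation until accepting/rejecting it counts as
            # decision time; every other segment (start->select, accept->select,
            # select->select, last action->end) counts as navigation time.
            if prev_action == "select" and action in ("accept", "reject"):
                decision += segment
            else:
                navigation += segment
        return navigation, decision

    # Example log (invented values): total completion time = navigation + decision.
    log = [(0.0, "start"), (7.5, "select"), (12.0, "accept"), (15.5, "select"),
           (22.0, "reject"), (30.0, "end")]
    nav, dec = decompose_times(log)   # nav = 19.0, dec = 11.0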
We also recorded the number of times the filtering function was used. Self-reported measures captured through questionnaires included ease of finding annotations, ease of completing tasks, confidence in performing tasks, ease of use, ease of learning, and overall system preference.

6.1.4 Experimental Design

The experiment was a within-subjects 2x6 (system type x task) factorial design. Document type was a within-subjects control variable, and both order of presentation for system and order of presentation for document were between-subjects controls. A within-subjects design was chosen for its increased power and because it allowed us to collect comparative comments on the two systems. To minimize learning effects, we counterbalanced the order of presentation for both system type and document, resulting in four configurations. The tasks were presented in the same order to each participant.

6.1.5 Participants

A total of 20 people (8 females) participated. They were undergraduate and graduate students recruited through online mailing lists and newsgroups. They were paid $20 for their time. All spoke English as their native language. Seventeen used a word processor (mainly Microsoft Word) every 2-3 days, and 3 did so once a week. All felt very confident about using their word processor, although 5 had never used any annotation functions. They had all been involved in collaborative authoring: 6 participants fewer than 5 times, 7 participants between 5 and 10 times, and 7 participants more than 10 times.

6.1.6 Procedure

The experiment was designed for a single two-hour session. A questionnaire was administered to obtain information on past computer and writing experience. Participants were then shown a training video on general concepts such as collaborative authoring and how to use the first system, followed by a practice session of six reviewing tasks using the first system. For each task, a participant first read the task instruction screen and then clicked on the "Start Task" button. The system loaded and the data logging and timing functions started. After the participant finished a task, s/he clicked "End Task" and the next task instruction appeared. The practice tasks were similar to the experimental tasks described previously, but in a different order and on a practice document different from either of the test documents.

Participants were next asked to read the original version of the task document (i.e., with no annotations), after which they had to perform the six tasks in the order they were given. A second questionnaire was administered to collect feedback on the first system. Participants were given a 5-minute break and then were shown a video on how to use the second system, followed by six practice tasks using the same practice document and then the six experiment tasks for the second document. A final questionnaire solicited feedback on the second system and asked the participants to directly compare the two systems. A short de-briefing was conducted with some of the participants based on their questionnaire data.

6.1.7 Hypotheses

Our main hypotheses were as follows:

H1. The Bundle System will reduce the time participants spend navigating to relevant annotations. Some tasks (as identified above) will be more affected than others.

H2.
Participants will perform more accurately in the Bundle System than in the Simple System. Some tasks (as identified above) will be more affected than others.

6.2 Results

Here we report on both the quantitative data captured through software logging and the self-reported data from our questionnaires. Before testing our hypotheses, we checked to make sure that there was no effect of document. Investigation of an interaction effect between document and task on total time (F(4,64) = 4.706, p = .002, η² = .227) revealed that task 1 was more difficult in docB than in docM. Our goal had been to create two documents that were as equal in difficulty as possible, and so we removed task 1 from our remaining analysis and focused exclusively on tasks 2 through 6. (This is a potential confound because the six tasks were always done in the same order. Subjects might therefore have experienced an asymmetric transfer effect from task 1 to the other tasks, but we think this was minimal at best because the six tasks were relatively independent of each other.)

To test our hypotheses we ran 2 systems x 2 order of systems x 2 order of documents x 5 tasks ANOVAs for our speed and accuracy measures. System and task were within-subjects factors, and orders of system and document presentation were both between-subjects factors. For our secondary analysis, a series of two-tailed t-tests was used to investigate performance differences between the two systems for each of the tasks. Along with statistical significance, we report partial eta-squared (η²), a measure of effect size, which is often more informative than statistical significance in applied human-computer interaction research [21]. This value is usually interpreted as .01 being a small effect size, .06 a medium effect size, and .14 a large effect size [11].

6.2.1 Testing Hypotheses

Total navigation time (across all 5 tasks) was significantly less in the Bundle System (p < .001). Participants' decision time, however, did not differ between the two systems (p = .336). The large navigation time effect was sufficient to influence the total completion time, which was also significantly lower in the Bundle System (p < .001). The means are given in Table 6.2.

Speed (mean, in sec)   Bundle   Simple   F      Sig.      η²
Navigation             39.3     58.3     40.1   < 0.001   0.715
Decision               60.8     64.5     0.98   0.336     0.058
Completion             100.2    122.8    22.9   < 0.001   0.589
Table 6.2: Speed measures across five tasks. Df = (1,16). N = 20.

As hypothesized in H1, and as Figure 6.5 shows, some tasks required less navigation time than others. There was an interaction between task and system (F(4,64) = 16.09, p < .001, η² = .354). T-tests revealed that tasks 3, 4, and 5 were all significantly faster in the Bundle System (all df = 19, p < .001). There were no differences detected for tasks 2 and 6.

Consistent with hypothesis H2, accuracy was also significantly better with the Bundle System. Across all 5 tasks, participants reviewed more task-relevant annotations (p < .001), they correctly processed more task-relevant annotations (p = .018), and they made fewer identification errors, meaning they reviewed fewer non-task-relevant annotations (p < .001) in the bundle condition. Means for these errors are shown in Table 6.3.
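As a side note on the effect sizes, partial eta-squared for these effects can be recovered from the F statistic and its degrees of freedom, since eta_p^2 = (F * df_effect) / (F * df_effect + df_error). The short check below against the Table 6.2 values is illustrative only and is not part of the original analysis.

    # Quick illustrative check of the reported effect sizes (df = (1, 16)).
    def partial_eta_squared(F, df_effect, df_error):
        return (F * df_effect) / (F * df_effect + df_error)

    for label, F in [("Navigation", 40.1), ("Decision", 0.98), ("Completion", 22.9)]:
        print(label, round(partial_eta_squared(F, 1, 16), 3))
    # -> Navigation 0.715, Decision 0.058, Completion 0.589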
There was an interaction between task and the number of non-task-relevant annotations reviewed (F(1,16) = 21.93, p < .001, η² = .578), prompting us to investigate which tasks were affected differently by the two systems. A series of five two-tailed t-tests showed that there were significantly more non-task-relevant annotations reviewed in the Simple System for task 4 and task 6 (both df = 19, p < 0.001). These differences are apparent in Figure 6.6.

Figure 6.5: Line graph for mean navigation times per task in the two systems. N = 20.

Accuracy (mean # of annotations)                  Bundle   Simple   F       Sig.      η²
Task-relevant annotations reviewed                5.25     5.01     19.53   < 0.001   0.550
Task-relevant annotations reviewed correctly      4.84     4.61     7.05    0.018     0.306
Non-task-relevant annotations reviewed (errors)   0.05     0.65     59.02   < 0.001   0.787
Table 6.3: Accuracy measures across five tasks. Df = (1,16). N = 20.

Figure 6.6: Line graph for mean number of non-task-relevant annotations reviewed. N = 20.

6.2.2 Other Effects

In addition to the main effect of system type, we also found a main effect of task across all measures. This was expected because we designed each task to match a particular type of annotation activity; some activities are inherently more difficult and time consuming than others. We found a number of multi-way interactions involving task and the system and document presentation orders. Systematic investigation of each of these interactions revealed no clear interpretation. Not surprisingly, participants used the filtering functions more in the Simple System than in the Bundle System (F(1,16) = 39.42, p < 0.001, η² = 0.711).

6.2.3 Self-reported Measures

We ran the Wilcoxon Signed-Rank Test on the questionnaire data. Consistent with our navigation and accuracy findings, analysis of the self-reported measures showed that with the Bundle System participants found it easier to find annotations (p = 0.002), easier to complete tasks (p = 0.012), and were more confident in their answers (p = 0.014). They also had an overall preference for the Bundle System (p = 0.003). But there was no significant difference in the ease of learning (p = 0.667) or ease of use (p = 0.26) between the two systems. When asked which of the two systems they would prefer to continue using, 18 out of the 20 participants (90%) chose the Bundle System.

6.2.4 Other Feedback

We asked our participants how they currently review documents with their co-authors. Among the 20 participants, the most popular reviewing method is writing email messages to co-authors (18/20) that include suggested changes and comments about the document. The next most popular methods are directly editing the document using a word processor (16/20), and printing out the document and marking it up using a pen (15/20). These are followed by using annotation functions in existing word processors such as "Track Changes" in MS Word 2003 (12/20) and using online newsgroups like Yahoo Groups (10/20). Participants usually use multiple reviewing methods (e.g., direct editing + email). The reviewing methods and their frequencies of use are shown in Figure 6.7.
Figure 6.7: Comparing different collaborative reviewing methods and their frequency of use. N = 20.

After using each of the systems, participants were also asked to estimate whether they spent more time finding annotations of interest or deciding whether to accept or reject annotations. They could also indicate that they spent roughly the same amount of time on the two activities. As we expected, participants felt they spent more time deciding whether to accept or reject annotations (13/20) in the Bundle System than finding annotations of interest (3/20). The remaining 4 participants felt they spent roughly the same amount of time on the two activities. In the Simple System, opinions were almost evenly split (8 chose finding annotations, 5 chose accepting/rejecting annotations, and 6 chose roughly the same). The results are summarized in Figure 6.8.

Figure 6.8: Participants' perceived length for reviewing activities.

Participants provided free-form comments at the end of the questionnaire about what they liked and disliked about each system. For the Simple System, although it was not actually integrated with the system, most participants indicated that they liked the email window, which provided them with more information to complete tasks. Interestingly, many participants who used the Simple System first indicated they liked the filtering function; however, of those participants who had first been exposed to the Bundle System, almost all disliked the comparatively limited filtering functions in the Simple System. For the Bundle System, participants noted the time saved using bundles and were surprised by how easy it was to learn to use bundles. They also liked the flexible filtering provided in the Bundle System. One suggestion for improvement in the Bundle System was to increase the size of the reviewing pane. Participants felt the current reviewing pane was small and required too much scrolling.

6.2.5 Summary of Results

To summarize, the Bundle System allowed participants to navigate among annotations significantly faster for tasks 3, 4, and 5. Participants were also significantly more accurate with the Bundle System; for example, they reviewed significantly fewer non-task-relevant annotations for tasks 4 and 6. Overall, 90% of participants preferred the Bundle System.

6.3 Discussions

There are a number of interesting findings from the study, which we discuss in the remainder of this chapter.

6.3.1 Bundle Concept Is Intuitive

All participants developed the strategy of using a bundle list as their guide for completing tasks. They searched first for an existing bundle related to the current task description before directly searching for annotations in the document. Based on their interaction sequences with the prototype and their feedback, it was clear that the bundle concept, and its fit within the task workflow, was intuitive.

6.3.2 Bundles Reduce Navigation Time

Once participants found a relevant bundle, locating each annotation in the document was a single click away. By contrast, in the Simple System, most of the navigation time was spent searching through the document for the next relevant annotation, which was time consuming.
Bundling reduced the navigation time for tasks 3, 4, and 5. All three tasks had annotations distributed throughout the document that were not amenable to basic filtering. For task 6, filtering was a good strategy in both systems. Even though the Bundle System had the advantage of filtering on both the comment and author attributes, it was easy in the Simple System to filter on author and then identify the comments. So it was not surprising that task 6 did not show a difference. As one would hope, there was no difference in navigation time for tasks that were localized within the document (task 2).

6.3.3 Bundles Improve Accuracy

Once the correct bundle was found, users were guaranteed to find the task-relevant set of annotations. This minimized the number of extra annotations reviewed and allowed users to concentrate on reviewing the actual annotations. The biggest difference was found in task 4, where 39 extra annotations were reviewed across all participants in the Simple System, and none extra were reviewed in the Bundle System. The cause of this was users mistakenly identifying annotations as verb tense changes; for example, in docB replacing "grow" with "growth" was treated as a verb tense change. This was quite surprising, given that all our participants were native English speakers. But it shows that bundling can overcome even basic misunderstandings of the English language.

6.3.4 Users Group Annotations

Participants filtered significantly more often in the Simple System than in the Bundle System. They did so to reduce the number of annotations under consideration for a task. Participants were effectively creating their own temporary task-based annotation groups. Not only might there be a cost to having the reviewer do the grouping (see the discussion on cost below), but current systems do not allow users to store filter results for subsequent use. Bundling supports the easy creation and reuse of annotation groups formed through filtering.

6.3.5 Scalability of Bundles

Our target context for bundles is sophisticated documents that are heavily annotated. We chose simpler documents for our experiment in order to keep the tasks manageable. We speculate, however, that a comparison between the Simple System and the Bundle System for sophisticated documents would be even more dramatic. As a document increases in length, causing relevant annotations to be spread further apart, navigation time will increase without bundles.

6.3.6 Cost/Benefit Tradeoff

Our experiment only evaluated the annotation reviewing stage of authoring. Bundles shift some of the effort that is traditionally spent on annotation reviewing to annotation creation. At first glance this might appear to be a zero-sum game, with effort only being shifted within the authoring workflow. We argue that authors are currently communicating a large amount of information through email, and that manually creating bundles should be more efficient than incurring overhead through the inefficiencies of email. Automatically generated bundles should clearly be faster than email communication. A tradeoff to explore, however, will be between the value of bundles and the increased overall complexity they bring to the annotation system. Evaluating bundle creation, and the impact of bundles on the complete co-authoring workflow, is an obvious next step in our work.
6.3.7 Bundles Provide a More Pleasant User Experience

When participants were asked which system they preferred, 90% stated that it was the Bundle System. The elements of the Simple System they liked the most were the email message and the filtering function. We note that the experimental design provided a single email message per task, with clear instructions, which underestimates the workload in real situations where users need to locate the relevant email, and possibly an entire email thread describing the task. The two participants who favored the Simple System were both experienced Microsoft Word users, but neither had used the annotation functions. They were excited by the functionality in the Simple System, and they found the Bundle System to be complex and confusing. However, they both recognized the potential advantages of bundles and thought that after becoming accustomed to basic annotation functions, they might desire more complex ones.

Chapter 7
Conclusion and Future Work

7.1 Conclusions

In this thesis, we have presented a structured annotation model, which includes annotation groups called bundles. Bundles are designed to improve co-authoring workflow by fully integrating annotations (both basic and higher-level annotations) with the document. We have implemented a preliminary prototype called the Bundle Editor and compared it to a system that offers only basic annotation functions. Our study focused on annotation reviewing and showed that structured annotations can reduce the time it takes to navigate between task-relevant annotations and can improve reviewing accuracy.

7.2 Future Work

Ultimately, we would like to work towards a lightweight and robust annotation tool that can be integrated into existing word processors (e.g., Microsoft Word) or online reviewing systems (e.g., XMetal Reviewer) to support co-authoring. We summarize some of the potential future work below.

Evaluation of Bundle Creation

Now that there are confirmed benefits at the reviewing stage, our next step will be to investigate the usability of bundle creation and, more generally, how bundles support the full co-authoring workflow. We mentioned in Section 5.2.2 that there are four ways to create bundles. More research is needed to explore the practical situations for using each bundle creation method as well as more accurate estimates of the effort required.

Supporting Version Control and Synchronous Co-authoring

Broader issues of how bundles can support collaborative writing in general still remain. These include investigating how bundles might be extended to support version control and synchronous co-authoring, which are both classic problems in the collaborative writing literature. Our initial intuition is that bundles provide more organized annotations, which can reduce the workload co-authors experience during each reviewing and editing cycle. Thus, co-authors can complete the document with fewer reviewing cycles, resulting in fewer versions. Although our structured annotation model does not directly target the problem of version control, we hypothesize that it can minimize it. More research is required to measure the effect of bundles on version control.

In our research, we investigated asynchronous collaboration because it is more common during the reviewing and editing stages of collaborative writing. However, structured annotations may also be extended to synchronous co-authoring environments.
For example, in a synchronous setting, co-authors could choose to send a bundle back and forth through instant messaging, or bundles could update themselves automatically for all co-authors to see real-time changes. Synchronous conversations between co-authors could be saved as bundles for later retrieval by other co-authors. Many potential synchronous uses for bundles remain to be explored.

Enhancing Annotation Structures in the Document

Another area for further research is the display of bundled annotations in the document. The reviewing pane clearly shows which bundles have been created and their substructures. However, when the user selects (i.e., highlights) an annotation in the document pane, there is no visual cue suggesting which bundle or bundles the annotation belongs to or whether there are other similar annotations (i.e., belonging to the same bundle) nearby in the document. Currently, users need to explicitly right-click on the annotation and choose the option to view the bundle(s) it belongs to. Visualization techniques need to be applied carefully to display the relationships among annotations in the document while at the same time not overloading the document pane.

Including Free Text Search

In our usability study, some participants stated a need for a free text search on annotations in the Bundle Editor. They pointed out that when there are a large number of annotations, including bundles, it would be easier if they could type the bundle name into a search box and locate the bundle quickly in the reviewing pane. This shows another advantage of having structured annotations: if annotations were distributed in both the document and in emails, it would be nearly impossible to implement one search mechanism to search across two different applications. Structured annotations make it possible to conduct a single search on all document-related annotations (i.e., single or bundled annotations).

Structured Annotations for Rich Annotation Types

Another future research area is how to apply structured annotations to rich annotation types such as image, audio, and video annotations. The attributes we defined in the annotation model (see Section 4.1) will need to be modified accordingly. For example, the note attribute might be an image or audio file instead of just a simple string of text.

Bibliography

[1] Allen, R. (2005). Workflow: An introduction. Workflow Management Coalition. http://www.wfmc.org/information/Workflow-An_Introduction.pdf.

[2] Baecker, R. M., Nastos, D., Posner, I. R., and Mawby, K. L. (1993). The user-centered iterative design of collaborative writing software. In CHI '93: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 399-405, New York, NY, USA. ACM Press.

[3] Brush, A. B. (2002). Annotating Digital Documents for Asynchronous Collaboration. PhD thesis, University of Washington.

[4] Brush, A. J. and Borning, A. (2003). 'Today' messages: lightweight group awareness via email. In CHI '03: extended abstracts on Human factors in computing systems, pages 920-921, New York, NY, USA. ACM Press.

[5] Brush, A. J. B., Bargeron, D., Grudin, J., and Gupta, A. (2002). Notification for shared annotation of digital documents. In CHI '02: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 89-96, New York, NY, USA. ACM Press.

[6] Brush, A. J. B., Bargeron, D., Gupta, A., and Cadiz, J. J. (2001). Robust annotation positioning in digital documents.
In CHI '01: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 285-292, New York, NY, USA. ACM Press.

[7] Cadiz, J. J., Gupta, A., and Grudin, J. (2000). Using web annotations for asynchronous collaboration around documents. In CSCW '00: Proceedings of the 2000 ACM conference on Computer supported cooperative work, pages 309-318, New York, NY, USA. ACM Press.

[8] Catlin, T., Bush, P., and Yankelovich, N. (1989). InterNote: extending a hypermedia framework to support annotative collaboration. In HYPERTEXT '89: Proceedings of the second annual ACM conference on Hypertext, pages 365-378, New York, NY, USA. ACM Press.

[9] Chandler, H. E. (2001). The complexity of online groups: a case study of asynchronous collaboration. ACM Journal of Computer Documentation, 25(1):17-24.

[10] Churchill, E. F., Trevor, J., Bly, S., Nelson, L., and Cubranic, D. (2000). Anchored conversations: chatting in the context of a document. In CHI '00: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 454-461, New York, NY, USA. ACM Press.

[11] Cohen, A. L., Cash, D., Muller, M. J., and Culberson, C. (1999). Writing apart and designing together. In CHI '99: CHI '99 extended abstracts on Human factors in computing systems, pages 198-199, New York, NY, USA. ACM Press.

[12] Dourish, P. (1996). Consistency guarantees: exploiting application semantics for consistency management in a collaboration toolkit. In CSCW '96: Proceedings of the 1996 ACM conference on Computer supported cooperative work, pages 268-277, New York, NY, USA. ACM Press.

[13] Dourish, P. and Bellotti, V. (1992). Awareness and coordination in shared workspaces. In CSCW '92: Proceedings of the 1992 ACM conference on Computer-supported cooperative work, pages 107-114, New York, NY, USA. ACM Press.

[14] Ede, L. and Lunsford, A. (1990). Singular Texts/Plural Authors: Perspectives on Collaborative Writing. Carbondale: Southern Illinois UP.

[15] Fish, R. S., Kraut, R. E., and Leland, M. D. P. (1988). Quilt: a collaborative tool for cooperative writing. In Conference Sponsored by ACM SIGOIS and IEEECS TC-OA on Office information systems, pages 30-37, New York, NY, USA. ACM Press.

[16] Flesch-Kincaid Readability Test. http://en.wikipedia.org/wiki/Flesch-Kincaid_Readability_Test.

[17] Huang, E. M. and Mynatt, E. D. (2003). Semi-public displays for small, co-located groups. In CHI '03: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 49-56, New York, NY, USA. ACM Press.

[18] Jackson, L. S. and Grossman, E. (1999). Integration of synchronous and asynchronous collaboration activities. ACM Computing Surveys, 31(2):12.

[19] Jaeger, T. and Prakash, A. (1996). Requirements of role-based access control for collaborative systems. In RBAC '95: Proceedings of the first ACM Workshop on Role-based access control, page 16, New York, NY, USA. ACM Press.

[20] Kahan, J. and Koivunen, M.-R. (2001). Annotea: an open RDF infrastructure for shared web annotations. In WWW '01: Proceedings of the 10th international conference on World Wide Web, pages 623-632, New York, NY, USA. ACM Press.

[21] Landauer, T. (1997). Handbook of Human-Computer Interaction, chapter 9: Behavioral research methods in human-computer interaction, pages 203-227. Amsterdam: Elsevier Science B.V.

[22] Margolis, M. and Resnick, D. Third Voice: Vox Populi Vox Dei? http://firstmonday.org/issues/issue4_10/margolis/index.html.

[23] Marshall, C. C.
(1997). Annotation: from paper books to the digital library. In DL '97: Proceedings of the second ACM international conference on Digital libraries, pages 131-140, New York, NY, USA. ACM Press.

[24] Mendoza-Chapa, S., Salcedo, M. R., and Oktaba, H. (2000). Group awareness in collaborative writing systems. In Proceedings of the 6th International workshop on groupware, pages 112-118.

[25] Miller, E. (1998). An introduction to the resource description framework. http://www.dlib.org/dlib/may98/miller/05miller.html.

[26] Munson, J. P. and Dewan, P. (1994). A flexible object merging framework. In CSCW '94: Proceedings of the 1994 ACM conference on Computer supported cooperative work, pages 231-242, New York, NY, USA. ACM Press.

[27] Neuwirth, C. M. (2000). Computer support for collaborative writing: a human-computer interaction perspective. In Third annual collaborative editing workshop.

[28] Neuwirth, C. M., Chandhok, R., Charney, D., Wojahn, P., and Kim, L. (1994a). Distributed collaborative writing: a comparison of spoken and written modalities for reviewing and revising documents. In CHI '94: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 51-57, New York, NY, USA. ACM Press.

[29] Neuwirth, C. M., Chandhok, R., Kaufer, D. S., Erion, P., Morris, J., and Miller, D. (1992). Flexible diff-ing in a collaborative writing system. In CSCW '92: Proceedings of the 1992 ACM conference on Computer-supported cooperative work, pages 147-154, New York, NY, USA. ACM Press.

[30] Neuwirth, C. M., Kaufer, D. S., Chandhok, R., and Morris, J. H. (1990). Issues in the design of computer support for co-authoring and commenting. In CSCW '90: Proceedings of the 1990 ACM conference on Computer-supported cooperative work, pages 183-195, New York, NY, USA. ACM Press.

[31] Neuwirth, C. M., Kaufer, D. S., Chandhok, R., and Morris, J. H. (1994b). Computer support for distributed collaborative writing: defining parameters of interaction. In CSCW '94: Proceedings of the 1994 ACM conference on Computer supported cooperative work, pages 145-152, New York, NY, USA. ACM Press.

[32] Noel, S. and Robert, J.-M. (2004). Empirical study on collaborative writing: What do co-authors do, use, and like? Computer Supported Cooperative Work, 13(1):63-89.

[33] Noldus Observer. http://www.noldus.com/site/doc200401012.

[34] Ovsiannikov, I. A., Arbib, M. A., and McNeill, T. H. (1999). Annotation technology. Int. J. Hum.-Comput. Stud., 50(4):329-362.

[35] Phelps, L. (1997). Active documentation: wizards as a medium for meeting user needs. In SIGDOC '97: Proceedings of the 15th annual international conference on Computer documentation, pages 207-210, New York, NY, USA. ACM Press.

[36] Zheng, Q., McGrenere, J., and Booth, K. (2006). Co-authoring with structured annotations. In CHI '06: Proceedings of the SIGCHI conference on Human factors in computing systems, New York, NY, USA. ACM Press.

[37] Rhyne, J. R. and Wolf, C. G. (1992). Tools for supporting the collaborative process. In UIST '92: Proceedings of the 5th annual ACM symposium on User interface software and technology, pages 161-170, New York, NY, USA. ACM Press.

[38] ScienceDaily. http://www.sciencedaily.com/.

[39] Sharples, M., Goodlet, J., Beck, E., Wood, C., Easterbrook, S., and Plowman, L. (1993). Research issues in the study of computer supported collaborative writing, chapter 2, pages 9-28. Springer-Verlag.

[40] Weng, C. and Gennari, J. (2004).
Asynchronous collaborative writing through annotations (note). In Proceedings of the ACM Conference on Computer supported cooperative work (CSCW '04), pages 564-573, Chicago, IL. [41] Whitehead, E. J. J. (2001). WebDAV and DeltaV: collaborative authoring, versioning, and configuration management for the web. In HYPERTEXT '01: Proceedings of the twelfth ACM conference on Hypertext and hypermedia, pages 259-260, New York, NY, USA. ACM Press. [42] Wojahn, P. G., Neuwirth, C. M., and Bullock, B. (1998). Effects of interfaces for annotation on communication in a collaborative task. In CHI '98: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 456-463, New York, NY, USA. ACM Press/Addison-Wesley Publishing Co. [43] Woods, D. D., Patterson, E. S., Roth, E. M., and Christoffersen, K. (2002). Can we ever escape from data overload? A cognitive systems diagnosis. In Cognition, Technology and Work, volume 4, pages 22-36. [44] XMetal 4.6 Reviewer. http://www.xmetal.com/en.us/products/xmetaLreviewer/index.x. Appendix A Usability Study Questionnaire Using Structured Annotations in Collaborative Writing Study Questionnaire Form Instructions Please try to respond to all of the items listed below. For those items that are not applicable, specify N/A. Part 1: Past Computer and Writing Experience (To be completed before the study) 1. Which word processor do you currently use for writing documents (e.g. essays, reports, letters, conference papers, journal articles, etc.)? 2. How often do you use the word processor? • Once a month • Once a week • Every 2-3 days • Every day 3. How confident do you feel about using the word processor? (1 = Not at all confident, 5 = Very confident) □ □ □ □ □ 4. Do you use the annotation functions in the word processor? (e.g. Track Changes and Commenting functions in Microsoft Word) • Yes • No, please specify why: 5. Have you previously written or reviewed documents with other people? • None • Less than 5 times • Between 5 and 10 times • More than 10 times Word processor used in collaborative writing: (Continue on the next page) 5. How do you and your co-authors review a collaborative document? (Check all the items that apply, and indicate the frequency of use for each.) • Print out the document, mark on the document using a pen, and then hand back the marked document to co-author(s). (□ weekly □ monthly □ annually) • Directly edit the document using a word processor. (□ weekly □ monthly □ annually) • Use the annotation function in the word processor to edit the document and add comments. (□ weekly □ monthly □ annually) • Write an email message that includes suggested changes and comments about the document to other co-authors. (□ weekly □ monthly □ annually) • Use online communication groupware (e.g., Yahoo! Groups) to discuss the changes to the document. (□ weekly □ monthly □ annually) • Other, please specify: (□ weekly □ monthly □ annually) Part 2: (To be completed after completing tasks using the first system) 1. It was easy to learn to use this system. strongly disagree □ □ □ □ □ □ strongly agree 2. Navigating through annotations using this system was easy. strongly disagree □ □ □ □ □ □ strongly agree 3. Completing the given tasks using the system was easy. strongly disagree □ □ □ □ □ □ strongly agree 4. Finding annotations of interest using this system was easy.
strongly disagree □ □ □ □ □ □ strongly agree 6. Overall, I was satisfied with how easy it was to use this system. strongly disagree □ □ □ □ □ □ strongly agree 7. I was confident about my answers to the tasks. strongly disagree □ □ □ □ □ □ strongly agree 8. I would like to use this system for my co-writing activities. strongly disagree □ □ □ □ □ □ strongly agree 9. I enjoyed using this system. strongly disagree □ □ □ □ □ □ strongly agree Questions: 1. What was the most difficult task for you to complete using the system? 2. Overall, on which of the following two activities do you feel you spent the most time? • Finding the annotations of interest in the system • Determining whether to accept or reject annotations • I spent roughly the same amount of time on the above two activities 3. What particular aspect(s) of this system did you like? 4. What particular aspect(s) of this system did you dislike? Part 3: (To be completed after completing tasks using the second system) 1. It was easy to learn to use this system. strongly disagree □ □ □ □ □ □ strongly agree 2. Navigating through annotations using this system was easy. strongly disagree □ □ □ □ □ □ strongly agree 3. Completing the given tasks using the system was easy. strongly disagree □ □ □ □ □ □ strongly agree 4. Finding annotations of interest using this system was easy. strongly disagree □ □ □ □ □ □ strongly agree 6. Overall, I was satisfied with how easy it was to use this system. strongly disagree □ □ □ □ □ □ strongly agree 7. I was confident about my answers to the tasks. strongly disagree □ □ □ □ □ □ strongly agree 8. I would like to use this system for my co-writing activities. strongly disagree □ □ □ □ □ □ strongly agree 9. I enjoyed using this system. strongly disagree □ □ □ □ □ □ strongly agree Questions: 1. What was the most difficult task for you to complete using the system? 2. Overall, on which of the following two activities do you feel you spent the most time? • Finding the annotations of interest in the system • Determining whether to accept or reject annotations • I spent roughly the same time on the above two activities 3. What particular aspect(s) of this system did you like? 4. What particular aspect(s) of this system did you dislike? 5. If you could choose only one of the systems to continue using, which would it be? • First System • Second System 6. Any additional comments? Appendix B Usability Study Documents B.1 Original black hole document (docB) with no annotations B.2 Black hole document with annotations B.3 Original music and consumer document (docM) with no annotations B.4 Music and consumer document with annotations B.5 Weather and mood document (practice document) with annotations Title: NASA Observatory Confirms Black Hole Limits The very largest black holes reach a certain point and then growth no more. That's according to the survey of back holes made by NASA's Chandra X-ray Observatroy. Scientists also discover that previously hidden black holes are well below their weight limit. The new results corroborate recent theretical work about how black holes and galaxies grow.
The biggest black holes, those with at least 100 million times heavier the mass of the sun, eat voraciously during the early universe. Nearly all of them ran out of \"food\" billions of years ago and went onto a forced starvation diet. On the other hand, black holes 10 to 100 million solar masses follow a more controlled eating plan. Because they took smaller portions of their meals of gas and dust, they continue growth as of today. \"Our data show some super massive black holes seem to binge, while others prefer to graze,\" said Amy Barger of University of Wisconsin and University of Hawaii. Barger is the lead author of the paper describes the results in the latest issue of the astronomical journal. \"We understand better than ever how super massive black holes grow.\" One revelation is that there is a strong association between the growth of the black holes and the birth of stars. Previously, astronomers do careful studies of the birth of stars in galaxies but didn't know as much about the black holes at their centers. \"These galaxies lose material into their central black holes at the same time they make their stars,\" Barger said. Therefore, whatever mechanism govens star formation in galaxies also governs black hole growth. Astronomers have made an accurate census of both the biggest, black holes in the distance, and the relatively smaller, clamer ones closer by Earth. Now, for the first time, the ones in between the two extreme have been properly counted. Co-author Richard Mushotzky of NASA's Goddard Space Flight Center, Greenbelt, Md. said that they needed to have an accurate head count over time of all growth black holes if they ever hoped to understand black holes' habits. This study relies on the X-ray images obtained, the Chandra Deep Fields North and South, plus a key wider-area survey of an area called the \"Lockman Hole.\" The distances to the X-ray sources were determined by optical spectroscopic follow-up at the Keck 15-meter telescope on Mauna Kea in Hawaii, and show the black holes range from a billion to 12 billion light-years away. The very long-exposure images are crusial to keep observing black holes within a billion light-years away and find the black holes that otherwise would go unnoticed. Chandra found many of the black holes smaller than about 100 million suns are buried under large amounts of dust. This prevents detection of the optical light from the heated material near the black holes. The X-rays were more energentic and able to dig through this dust and gas. However, the black holes show little sign of being obscured by dust or gas. In a form of weight self-control, powerful winds generated by the black hole's feeding frenzy may have cleared out the remaining dust and gas. Figure B.1: Original black hole document (docB) with no annotations.
Figure B.2: Black hole document with annotations. Comments in docB (listed in document order): Jen: I think it's the best survey to date. John: Would readers understand what hidden black holes are? Jen: I think it should be \"millions.\" Mary: We need to capitalize the first character for each word in the journal name. John: Precisely, it should be the birthrate of stars. John: I don't think \"their\" has a clear reference. Mary: Shall we add a reference here for readers to follow? Jen: I don't think it's the first time. Mary: I don't think it's clear that Richard is the co-author of which paper. Mary: They are the deepest X-Ray images ever obtained, right, John? Jen: I think the telescope is only 10-meter long. John: Do we need to explain more about how the long-exposure images are obtained? Jen: We also need to include gas. Mary: John, do we need to explain what optical light is? Title: \"Please Hold\" Not Always Music To Your Ears, University Of Cincinnati Researcher Finds Nearly all of us know what it was like to be put on \"on-hold music.\" Call almost any customer service number, and you can expect hear at least a few bars of insipid elevator music before an operater picks up. The question is: Do you hang up or do you keep holding? That may depend on your genders and what type of music is playing, according to research reported by Dr. marketing James Kellaris at the society of consumer psychology conference. Kellaris, who has studied the effects of music on consumers for more than 12 years, teamed Sigma Research Management Group of Cincinnati to evaluate the effects of \"on-hold music\" for a company that operates on a customer service line. The UC researcher and his collegues tested four types of on-hold music with 71 of the company's clients, 30 of them women, from Indianapolis, Los Angeles.
Light jazz, classical, rock and the company's current format of adult alternative were all tested. The sample include individual consumers, small business and large business segments. Participants were asked to imagine calling a customer assistance line and being placed on hold. They were then exposed to \"musical hold\" via headsets and asked to estimate how long it played. Other reactions and comments were also solicited and quatified by the researchers. Service providers, don't want you to have to wait on hold, but if you do, they want it to be pleasant experience for all of you. But Kellaris' conclusions may hold some distressing news for companies. No matter what music is played, the time spent \"on hold\" was generally overestimated. The actual waiting in the study was 6 minutes, but the average estimate was 7 minutes. He did find some good news for the cleint who hired him. \"The kind of music they're playing now, alternative, is probably their better choice. Two things made it a good chooce. First, it did not produce significantly more positive or negatives reactions in people. Second, males and females were less different in their reactions to this type of music.\" Kellaris' other findings, however, make the state of on-hold music a little less firm: Time spend on hold seemed slightly shorter when light jazz was played, but the effect of music format differed for men and women. Among the males, the wait seemed shorter when classical music was played. Among the females, the wait seemed longest when classical music was played. This may be related to the differences in attention levels. In general, classical music evoked the more positive reactions among males; light jazz evoked the most positive reactions (and shortest waiting time estimates) among females. Rock is the least prefered across both gender groups and produce the longest waiting time estimates. \"The rock music's driving beat kind of aggravates people calling a customer assistance line with a problem,\" said Kellaris. \"The more positive the reaction to the music, the shorter the waiting timeis seemed to be. \"So maybe time does tend to fly when you're having fun, even if you're on musical hold,\" Kellaris quipped. Figure B.3: Original music and consumer document (docM) with no annotations.
Figure B.4: Music and consumer document with annotations. Comments in docM (listed in document order): John: Do we need the elevator music analogy? John: I like we end the paragraph with a question. Mary: Capitalize the first character for each word in the conference name. Jen: I think UC refers to University of Cincinnati. Jen: I though 30 of them are men. Mary: John, I think adult alternative means a mix of contemporary styles. John: Do we need to include more details about the study? Jen: I don't think \"it\" has a clear reference. John: I think it should be 7 minutes and 6 seconds. Mary: \"Polarized\" is a better word choice here. Jen: I don't understand what we mean by \"less firm.\" Jen: I think musical preference is another factor. Jen: I think it should be light jazz. Mary: We meant adult alternative here, right, John?
Figure B.5: Weather and mood document (practice document) with annotations. Comments in the practice document (listed in document order): John: I think \"beneficial\" suits better. Jen: I don't think it's clear here which seasons we are referring to. Jen: We should add \"Dr.\" in front of his name. Mary: Since we did find out the tests in 2000 are the biggest tests, I think we should indicate that here. Jen: We need to include where the participants are from. John: We need to add \"at least\" before 30 minutes to be consistent with the first sentence. Mary: We need to enhance these are indoor activities. Jen: We should use \"affect\" here. Appendix C Usability Study Task Sets Before starting to review each document, participants were asked to imagine themselves in the following scenario: Scenario: Imagine you are Bob, and you are one of the co-authors for the above document. Other co-authors are Jen, John, and Mary. This is your first time reviewing the document after the first draft of the document has been completed. Other co-authors have already reviewed the document and made annotations. Please complete the following tasks. Note: In the following tasks, you will be asked to review groups of annotations. \"Review\" means to accept the annotations that you think are correct and reject the ones that you think are wrong, according to English grammar and/or document context. Some annotations may already have been accepted or rejected by other co-authors. Task instructions are assumed to be the same in both systems unless indicated otherwise. C.1 Task Instructions in the Black Hole Document Task 1 Task Instructions One of the co-authors has made annotations regarding quantifying words. Review these annotations. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 2 in the Simple System Task Instructions You have received an email from John. Review the annotations mentioned in John's email. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start.
Task 2 in the Bundle System Task Instructions You have received a general comment from John. Review the annotations mentioned in John's general comment. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 3 Task Background John ran the system spell checker during his turn reviewing. He corrected the spelling of some words according to the spell checker's suggestions. The changes he made are embedded as edits in the document. Task Instructions Review the spelling edits John made. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 4 Task Instructions Three of the co-authors have made annotations regarding verb tense in the document. Review these annotations. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 5 Task Background \"Grow\" (and its different verb forms according to verb tenses) is a verb, and it should be used when we talk about an action. However, \"growth\" is a noun, and it should be used when we talk about the process of growing. During Mary's previous reviewing session, she discovered there are some misuses of the two words, so she first ran the \"Find/Replace\" function to find all the incorrect uses of \"grow\" and replaced them with \"growth.\" Then, she ran \"Find/Replace\" again to find all the incorrect uses of \"growth\" and replaced them with \"grow.\" Task Instructions Review the annotations regarding \"grow\" and \"growth.\" Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Before Task 6 You will need the following list of facts about the document to complete the next task. Please read them carefully before you proceed. Facts: 1. The study of counting black holes relied on the deepest X-ray images ever obtained. 2. The distances to the X-ray sources were determined by optical spectroscopic follow-up at the Keck 10-meter telescope on Mauna Kea in Hawaii. 3. The biggest black holes ran out of \"food\" billions of years ago. 4. Whatever mechanism governs star formation in galaxies also governs black hole growth. 5. Black holes approximately 10 to 100 million solar masses took smaller portions of their meals of gas and dust. 6. It is the first time that we are able to count the black holes between the biggest, active black holes in distance and smaller, calmer ones closer to Earth. 7. NASA's Chandra X-ray Observatory made the best survey to date of black holes. 8. Previously, astronomers had done careful studies of the birthrate of stars in galaxies. 9. Dr. Amy Barger is from the University of Wisconsin and University of Hawaii. 10. Many of the black holes smaller than about 100 million suns are buried under large amounts of dust and gas. Task 6 in the Simple System Task Instructions Make sure you have a paper copy of the fact sheet before you start this task. You have received an email from Jen. Review the annotations mentioned in Jen's email. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 6 in the Bundle System Task Instructions Make sure you have a paper copy of the fact sheet before you start this task. You have received a general comment from Jen.
Review the annotations mentioned in Jen's general comment. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. C.2 Task Instructions in the Music and Consumer Document Task 1 Task Background Adjectives can express degrees of modification by using their degree of comparison forms, namely positive, comparative, and superlative forms. Here are some simple examples (positive, comparative, superlative): short, shorter, shortest; long, longer, longest; little, less, least; much, more, most; good, better, best. The comparative form of an adjective is usually used when we compare two objects. The superlative form is used when we compare three or more objects. Example: Jerry is shorter than Tom (comparing two objects). Jerry is the shortest among all three cartoon characters (comparing three objects). Task Instructions One of the co-authors has made annotations regarding adjectives' comparative and superlative forms. Review these annotations. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 2 in the Simple System Task Instructions You have received an email from John. Review the annotations mentioned in John's email. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 2 in the Bundle System Task Instructions You have received a general comment from John. Review the annotations mentioned in John's general comment. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 3 Task Background John ran the system spell checker during his turn reviewing. He corrected the spelling of some words according to the spell checker's suggestions. The changes he made are embedded as edits in the document. Task Instructions Review the spelling edits John made. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 4 Task Instructions Three of the co-authors have made annotations regarding verb tense in the document. Review these annotations. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 5 Task Background In this document, \"musical hold\" is used to describe the stage of holding with music playing in the background, and \"on-hold music\" is used to describe the specific type of music discussed in the document. The study described in the document is trying to evaluate the effects of \"on-hold music\" on customers when they are put on \"musical hold\" after calling a company's service line. During Mary's previous reviewing session, she discovered there are some misuses of the two phrases, so she first ran the \"Find/Replace\" function to find all the incorrect uses of \"musical hold\" and replaced them with \"on-hold music.\" Then, she ran \"Find/Replace\" again to find all the incorrect uses of \"on-hold music\" and replaced them with \"musical hold.\"  Task Instructions Review the annotations regarding \"musical hold\" and \"on-hold music.\" Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start.
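As an editorial aside (not part of the materials shown to participants): the sequential Find/Replace procedure described in the Task 5 backgrounds is what seeds the erroneous edits that reviewers must catch. A blanket replace rewrites every occurrence of a term, not just the incorrect ones, and the second pass then rewrites the occurrences the first pass just produced. The following minimal sketch illustrates the pitfall; the example sentence and the use of Python's re.sub to stand in for a word processor's Find/Replace are illustrative assumptions, not taken from the thesis.

    import re

    # Editorial illustration (not from the thesis): why two blanket
    # Find/Replace passes over a swapped word pair corrupt a document.
    # The sentence below is invented; both uses in it are already correct.
    text = 'These black holes grow slowly, so their growth is limited.'

    # Pass 1: replace every whole-word occurrence of grow with growth.
    # Intended only for the incorrect uses, but it hits every match.
    pass1 = re.sub(r'\bgrow\b', 'growth', text)
    # pass1 == 'These black holes growth slowly, so their growth is limited.'

    # Pass 2: replace every whole-word occurrence of growth with grow.
    # This also rewrites the matches that pass 1 just produced and the
    # occurrence that was correct to begin with.
    pass2 = re.sub(r'\bgrowth\b', 'grow', pass1)
    # pass2 == 'These black holes grow slowly, so their grow is limited.'

    print(pass2)

Under this blanket-replacement assumption, every occurrence ends up as the second pass's replacement term, so any spot that genuinely needed the other term is now wrong; this is exactly the class of error the Task 5 annotations ask reviewers to accept or reject.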
Before Task 6 You will need the following list of facts about the document to complete the next task. Please read them carefully before you proceed. Facts: 1. Among the females, the wait seemed longest when classical music was played. 2. The effect of music format differed for men and women may be related to the differences in attention levels and musical preferences. 3. There are 30 female clients in the study described in the paragraph. 4. Males and females were less polarized in their reactions to alternative music. 5. Adult alternative is a mix of contemporary styles. 6. The actual wait in the study was 6 minutes, but the average estimate was 7 minutes and 6 seconds. 7. James Kellaris is from University of Cincinnati. 8. The rock music's driving beat kind of aggravates people calling a customer assistance line with a problem. 9. In the study, participants were asked to imagine calling a customer assistance line and being placed on hold. 10. Classical music evoked the most positive reactions among males. Task 6 in the Simple System Task Instructions Make sure you have a paper copy of the fact sheet before you start this task. You have received an email from Jen. Review the annotations mentioned in Jen's email. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 6 in the Bundle System Task Instructions Make sure you have a paper copy of the fact sheet before you start this task. You have received a general comment from Jen. Review the annotations mentioned in Jen's general comment. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. C.3 Task Instructions in the Practice Document Task 1 Task Background An acronym is a word formed from the initial letters of a series of words. Example: IEEE is an acronym for the Institute of Electrical and Electronics Engineers. Task Instructions One of the co-authors has made annotations regarding the use of acronyms. Review these annotations. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 2 in the Simple System Task Instructions You have received an email from John. Review the annotations mentioned in John's email. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 2 in the Bundle System Task Instructions You have received a general comment from John. Review the annotations mentioned in John's general comment. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 3 Task Background John ran the system spell checker during his turn reviewing. He corrected the spelling of some words according to the spell checker's suggestions. The changes he made are embedded as edits in the document. Task Instructions Review the spelling edits John made. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 4 Task Instructions Three of the co-authors have made annotations regarding verb tense in the document. Review these annotations. Accept the ones that you agree with and reject the ones that you disagree with.
Click \"Start Task\" when you are ready to start. Task 5 Task Background In the document, \"warm\" should be used to describe pleasant weather, whereas \"hot\" should be used to describe the unpleasant summer weather. During Mary's previous reviewing session, she discovered there are some misuses of the two words, so she first ran \"Find/Replace\" function to find all the incor-rect use of \"warm\" and replaced it with \"hot.\" Then, she ran \"Find/Replace\" again to find all the incorrect use of \"hot\" and replaced them with \"warm.\" Task Instructions Review the annotations regarding \"warm\" and \"hot.\" Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Before Task 6 You will need the following list of facts about the document to complete the next task. Please read them carefully before you proceed. Facts: 1. Matthew Keller is a post-doctoral researcher at University of Michigan. 2. A set of three studies conducted by Keller and his colleagues involved more than 600 participants from throughout the United States. 3. For weather to improve mood, subjects needed to spend at least 30 minutes outside in warm, sunny weather. 4. The researchers note that it should not be surprising that weather and sea-sons affect human behavior, given that humans have evolved with seasonal Appendix C. Usability Study Task Sets 96 and weather changes since the dawn of the species. 5. The tests that were conducted in 2000 on whether weather affects mood are the biggest tests so far in the theory. Task 6 in the Simple System Task Instructions Make sure you have a paper copy of the fact sheet before you start this task. You have received an email from Jen. Review the annotations mentioned in Jen's email. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. Task 6 in the Bundle System Task Instructions Make sure you have a paper copy of the fact sheet before you start this task. You have received a general comment from Jen. Review the annotations men-tioned in Jen's general comment. Accept the ones that you agree with and reject the ones that you disagree with. Click \"Start Task\" when you are ready to start. "@en ; edm:hasType "Thesis/Dissertation"@en ; vivo:dateIssued "2006-05"@en ; edm:isShownAt "10.14288/1.0051589"@en ; dcterms:language "eng"@en ; ns0:degreeDiscipline "Computer Science"@en ; edm:provider "Vancouver : University of British Columbia Library"@en ; dcterms:publisher "University of British Columbia"@en ; dcterms:rights "For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use."@en ; ns0:scholarLevel "Graduate"@en ; dcterms:title "Structured annotations to support collaborative writing workflow"@en ; dcterms:type "Text"@en ; ns0:identifierURI "http://hdl.handle.net/2429/17720"@en .