UBC Theses and Dissertations

A study of functional units for information use of scholarly journal articles (Zhang, Lei, 2011)
A STUDY OF FUNCTIONAL UNITS FOR INFORMATION USE OF SCHOLARLY JOURNAL ARTICLES

by

Lei Zhang

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Library, Archival, and Information Studies)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2011

© Lei Zhang, 2011

ABSTRACT

This research aims to enhance reading effectiveness and efficiency by presenting readers with the text in an article that is most relevant to a particular information task, rather than presenting the article in its entirety. It applies the idea of the functional unit, the smallest information unit with a distinct function within the four major components of scholarly journal articles: Introduction, Methods, Results, and Discussion. First, through a review and analysis of the literature, with validation through user surveys, 41 functional units within the four components were identified. Also identified were the relationships between individual functional units and five information use tasks, and, furthermore, the relationships among a set of functional units for a particular task. The functional units were classified into three categories (primary, related, additional related) according to how useful they were for each task. Based on this taxonomy, a prototype journal reading system was designed and implemented. Thirty third- and fourth-year psychology students participated in an experimental study using the prototype system. Content analysis was used to analyze qualitative data collected from retrospective interviews, open-ended questionnaire responses, and screen recordings. A statistical analysis of quantitative data collected via rating scales and the logging of time and highlights was also carried out. Answers to comprehension questions were assessed first by content analysis and then by statistical analysis.
Participants using the prototype system were significantly more satisfied with the information obtained, highlighted more relevant text, and answered the comprehension questions more fully. The use of functional units was effective in enabling people to focus on specific information and to use pieces of relevant information from across the article, but not necessarily to move from more relevant to less relevant information. Participants using the prototype system also felt significantly more efficient in obtaining the information. The use of functional units was efficient in enabling people to read less, or to read selectively. The signaling of functional units seemed to be more effective and efficient for tasks requiring the use of information scattered across articles. This research suggests that information within an article can be organized and presented to benefit readers' information use.

PREFACE

The research involving human subjects presented in this dissertation was carried out with permission from the University of British Columbia Behavioural Research Ethics Board. The ethical approval certificates validated by the Behavioural Research Ethics Board are:

1. Number: H09-00889
   Title: Employing functional units for electronic scholarly journal use - Phase I

2. Number: H09-02100
   Title: Employing functional units for electronic scholarly journal use - Phase II

TABLE OF CONTENTS

Abstract
Preface
Table of Contents
List of Tables
List of Figures
1 Introduction
  1.1 Context of Research
  1.2 Theoretical Framework
  1.3 Research Objectives and Research Questions
  1.4 Significance of Research
  1.5 Operational Definitions
  1.6 Outline of Dissertation
2 Literature Review
  2.1 Overview
  2.2 Genre Theory
    2.2.1 Defining Genre
    2.2.2 Current Genre Studies
    2.2.3 Swales' CARS Model and Move Analysis
    2.2.4 Journal Article Component Use
  2.3 Relevance Theory
    2.3.1 Defining Relevance Theory
    2.3.2 Applications of Relevance Theory in Information Studies
      2.3.2.1 "Relevance" in Information Studies
      2.3.2.2 Impact of Relevance Theory on Information Studies
    2.3.3 Bridging Genre and Relevance
  2.4 Reading
    2.4.1 Semantic Navigation
    2.4.2 Reading Patterns
    2.4.3 Information Use
  2.5 Summary
3 Developing a Functional Unit Taxonomy: Methods
  3.1 Overview
  3.2 Identifying Information Use Tasks and Functional Units
    3.2.1 Identifying Information Use Tasks
    3.2.2 Identifying Functional Units
  3.3 Validation Study
    3.3.1 Validating Functional Units and Information Use Tasks
    3.3.2 Validating Relationships between Functional Units and Information Use Tasks
  3.4 Summary
4 Developing a Functional Unit Taxonomy: Results
  4.1 Overview
  4.2 Results of Survey I
    4.2.1 Information Use Tasks
    4.2.2 Functional Units
  4.3 Results of Survey II
  4.4 Summary
5 Evaluation of the Utility of Functional Units in a Prototype System: Methods
  5.1 Overview
  5.2 System Design
    5.2.1 Design Rationale
    5.2.2 Interface Design
    5.2.3 Content
  5.3 Participants and Recruitment
  5.4 Experimental Tasks
  5.5 Measures
  5.6 Instruments
  5.7 Experimental Procedures
  5.8 Pilot Study
  5.9 Data Analysis
  5.10 Summary
6 Evaluation of the Utility of Functional Units in a Prototype System: Results
  6.1 Overview
  6.2 Task Ease and Topic Familiarity
  6.3 Results of Reading Effectiveness
    6.3.1 Results for RQ 4.1a
      6.3.1.1 Perceptions of Task Completion
      6.3.1.2 Task Performance
        6.3.1.2.1 Relevant Text Highlighted
        6.3.1.2.2 Quality of Answers
    6.3.2 Results for RQ 4.1b
      6.3.2.1 User Evaluations of Interface Functionalities
      6.3.2.2 Use of Functional Units
        6.3.2.2.1 Use of Functional Units in Three Categories
        6.3.2.2.2 Effective Reading
  6.4 Results of Reading Efficiency
    6.4.1 Results for RQ 5.1a
      6.4.1.1 Perceptions of Task Completion
      6.4.1.2 Task Performance
    6.4.2 Results for RQ 5.1b
      6.4.2.1 Amount of Text Explored
      6.4.2.2 Efficient Reading
  6.5 Summary
7 Discussion
  7.1 Overview
  7.2 The Functional Unit Taxonomy
    7.2.1 Summary of Findings
    7.2.2 Discussion of Findings
  7.3 The Utilization of Functional Units
    7.3.1 Summary of Findings
    7.3.2 Discussion of Findings
      7.3.2.1 Task Ease and Topic Familiarity
      7.3.2.2 Effectiveness and Efficiency in Reading Outcomes
      7.3.2.3 Effectiveness in Reading Process
      7.3.2.4 Efficiency in Reading Process
  7.4 Summary
8 Conclusion
  8.1 Overview
  8.2 Contributions of the Study
    8.2.1 Contributions to Document Component Use
    8.2.2 Theoretical Contribution
    8.2.3 Methodological Contribution
  8.3 Design Implications
  8.4 Limitations
  8.5 Future Research
  8.6 Summary
Bibliography
Appendices
  Appendix 1: Representative Move Structures
  Appendix 2: Reading Level of Journal Articles Used in Study
  Appendix 3: Validation Survey Email Advertisement
  Appendix 4: Validation Survey I: Types of Information & Uses
  Appendix 5: Validation Survey II: Relationships between Types of Information and Uses
  Appendix 6: DTD and Sample XML Document
  Appendix 7: User Study Email Advertisement
  Appendix 8: User Study Consent Form
  Appendix 9: Post-task Questionnaire
  Appendix 10: Post-study Questionnaire
  Appendix 11: Answer Key
  Appendix 12: Interview Transcripts

LIST OF TABLES

Table 2.1: Swales' CARS Model
Table 2.2: Summary of studies on journal article component use
Table 3.1: Data source and analysis for Research Questions 1, 2 & 3
Table 3.2: Six information use tasks adapted from Taylor's model
Table 3.3: A preliminary set of functional units
Table 3.4: Inter-coder reliability of functional units identified
Table 4.1: Mean frequency scores of 6 information use tasks
Table 4.2: Mean scores of 52 functional units
Table 4.3: Validated taxonomy of functional units
Table 4.4: Mean scores and significance tests for functional units by tasks
Table 4.5: Ranking scores of functional units on usefulness
Table 4.6: A comparison between functional units with highest rating and ranking scores
Table 4.7: Mean scores of functional units by task "Learn about background"
Table 4.8: Mean scores of functional units by task "Refer to facts"
Table 4.9: Mean scores of functional units by task "Refer to arguments"
Table 4.10: Mean scores of functional units by task "Keeping up"
Table 4.11: Mean scores of functional units by task "Learn how to"
Table 4.12: Task-centered functional unit taxonomy
Table 4.13: 41 functional units with varying usefulness
Table 5.1: Data sources and analyses for Research Question 4
Table 5.2: Data sources and analyses for Research Question 5
Table 5.3: Journal use experience and expertise
Table 5.4: Participant task assignment
Table 5.5: Relations between task types and questions
Table 6.1: User perceptions regarding effectiveness
Table 6.2: Relevant text highlighted
Table 6.3: Quality of answer
Table 6.4: Functional units used - "Refer to facts"
Table 6.5: Functional units used - "Learn how to"
Table 6.6: Functional units used - "Refer to arguments"
Table 6.7: Descriptive statistics of move patterns
Table 6.8: User perceptions regarding efficiency
Table 6.9: Completion time in minutes
Table 6.10: Amount of text explored
Table 7.1: Reading outcome measures showing significant differences
Table 7.2: Significant results of text highlighted in the experimental system
Table 8.1: Contributions made in various studies on document component use

LIST OF FIGURES

Figure 5.1: Baseline interface
Figure 5.2: Experimental interface
Figure 5.3: Welcome screen
Figure 5.4: Instruction window
Figure 6.1: Means of task ease
Figure 6.2: Means of topic familiarity
Figure 6.3: A screenshot of highlights and answer in the experimental system
Figure 6.4: Task completion time by task
Figure 6.5: Completion time over time
Figure 7.1: An illustration of functional units in three categories for "Learn how to" task

1 INTRODUCTION

1.1 CONTEXT OF RESEARCH

Journal reading is an indispensable activity for research, teaching, and learning in a variety of scholarly disciplines. Journal articles (93.9%) dominate as the source of the last substantive piece of information used by science faculty for work, while books or book chapters are used about half of the time, and websites about one third of the time. Journal articles are also the most important source from which science faculty know about the information prior to reading it (Tenopir et al., 2009a).
However, people are overloaded with information, brought about by the growing amount of research and publications readily accessible in a digital world. Furthermore, what is useful or interesting to readers is often a certain part of a journal article rather than the article as a whole (Bishop, 1999; Sandusky & Tenopir, 2008). Research indicates that although the total time spent reading scholarly articles has increased, the time spent on each item read has declined. For a university science faculty member in the United States, the average number of articles read per year increased from 150 in 1977 to 280 in 2005, while the average time spent per article read decreased from 48 minutes in 1977 to 31 minutes in 2005 (Tenopir et al., 2009a). This suggests that there is a need to enhance electronic journal systems to support journal reading within academia. To achieve this goal, this study endeavors to find a way to help readers locate and consume the most relevant information within a scholarly journal article.

The concept of genre, referring to the relatively stable and expectable form and content for communication within a particular community (Breure, 2001), casts new light on information seeking and use from a document-oriented approach, as opposed to the long-standing system-centered or user-centered approaches. Genre can convey the function of a document in a few words. Genre knowledge enables the reader to instantly recognize what the document will look like (form), consequently what it means (content), and ultimately what it is used for (purpose). Genre instances can vary in granularity: an individual web page, multiple web pages, a web site, or part of a web page can all be considered as comprising a genre (Crowston & Williams, 2000; Rosso, 2008; Shepherd & Watters, 1998).
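The reading-time figures cited above (Tenopir et al., 2009a) jointly support the claim that total reading time increased even as per-item time declined, since the growth in articles read outweighs the drop in minutes per article. A quick back-of-the-envelope check, sketched in Python with the figures as quoted (the variable names are mine):

```python
# Figures as quoted from Tenopir et al. (2009a) for a US science faculty member.
articles_1977, minutes_1977 = 150, 48
articles_2005, minutes_2005 = 280, 31

total_1977 = articles_1977 * minutes_1977  # 7200 minutes per year (120 hours)
total_2005 = articles_2005 * minutes_2005  # 8680 minutes per year (~144.7 hours)

# Per-article time fell, yet total annual reading time rose.
assert minutes_2005 < minutes_1977
assert total_2005 > total_1977
```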
Most existing studies on digital genre have focused on the level of whole documents, such as journal articles (Dillon, 2004), web pages such as web newspapers (Vaughan, 1999), or web sites such as academic and corporate sites (Symonenko, 2007). Only a few studies (Dillon, 2004; Vaughan & Dillon, 1998) have taken a more analytical approach by studying the genre of components within journal research articles: Introduction, Methods, Results, and Discussion (IMRD). Very few studies have exploited the genre conventions of article components for the information use of journal articles. In his latest remarks on genre, Dillon (2008) notes that a finer grain of genre attributes can add value to navigation within a document. This demands a shift of focus from helping readers find the relevant article to helping readers navigate, read, comprehend, and use the information within an article. This research responds to Dillon's remarks by focusing on genre at the granular, within-document level.

This research seeks to facilitate the information use of journal research articles by applying the idea of the functional unit. Here, a functional unit is defined as a chunk of information with a distinct communicative function embedded in any of the four major components of a scholarly journal article: Introduction, Methods, Results, and Discussion. For example, in the Introduction component, a piece of text reviewing previous research can be considered one functional unit, while another piece of text indicating a gap in previous research can be considered a different functional unit. The concept of the functional unit is based on Swales' Create a Research Space (CARS) model (1990), which was originally proposed for the writing of research article introductions. As noted by Swales, the overall communicative purpose of the genre is achieved by the move structure, consisting of a series of functionally distinct moves.
Each move is a section of text which performs a specific communicative function, and is in turn realized by a set of steps. Move analysis, led by Swales, has been an established approach in the teaching of English for specific purposes. However, the value of move analysis has not been extended to information studies, nor has it been applied to the navigation and comprehension of digital documents. The essence of genre lies in the form and content expected by both sides, authors and readers; therefore move analysis, which originated in analyzing academic articles for writing, might be usefully employed to guide the reading of scholarly journal articles. The sequential narration of Introduction-Methods-Results-Discussion does not necessarily reflect the chronology of close reading (Berkenkotter & Huckin, 1995). Making salient the functional units in a single component, and reorganizing the related functional units from several components, is a possible way to improve both the reading process and the reading outcome.

This study identifies a set of common functional units within the IMRD components of a journal research article. Next, it examines how these functional units are related to different tasks requiring the use of information in journal articles, and furthermore how they are related to each other for a particular information task. This study then applies the mapping between functional units and information tasks to the design and implementation of an innovative journal reading system. Thus we may help readers fulfill a particular task by presenting them with the text in the article that is most relevant to the task, rather than presenting the article in its entirety.

1.2 THEORETICAL FRAMEWORK

This research seeks to explore how to utilize the functional units within article components to provide the most relevant information for scholarly journal users. This exploration is undertaken from the perspectives of Swales'
Create a Research Space model (1990) and Sperber and Wilson's Relevance-theoretic Comprehension Procedure (1995).

E-documents do not possess the discrete boundaries that paper documents do, but extend their boundaries through the properties of hypertext.  Linguistics has not had the influence on hypertext studies that might be expected, although relating pragmatics to information studies can be traced back to the application of Grice's Cooperative Principle and Austin's Speech Act Theory.  The idea of coherent hypertext structure is built upon discourse comprehension; the concept of digital genre is derived from rhetorical or linguistic genre.  Both originate from linguistics-related areas.  However, as noted by Tosca (2000), "linguistics hasn't had a great impact on the attempts to build a rhetoric of hypertext, which strikes me as strange in a field so concerned with communication between author, (hyper)text and reader".  Esperet (1996) also states that "a comprehensive psycholinguistic model of hypertext usage (and users) is yet to be proposed" (p. 153).  The research achievements in linguistics can thus be expected to contribute to studies on the reading of digital documents.

This research is theoretically based on two bodies of work in linguistics: Swales' genre model and Sperber and Wilson's Relevance Theory.  As stated above, the idea of functional units as used in this study is based on Swales' CARS model.  In the move analysis studies led by Swales, the smallest units of analysis are a "move" or a "step" within a "move".  The functional unit in this study is developed from a "move" or a "step", and thus is the smallest possible unit of information, carrying the least amount of information for use.  Accordingly, a set of related functional units should together contain adequate information for the reader's needs in a given context.  The way that functional units are connected is based on Sperber and Wilson's Relevance-theoretic Comprehension Procedure.
Relevance Theory, proposed by Sperber and Wilson (1995), seeks to account for verbal communication by revealing the cognitive processes involved.  In particular, Sperber and Wilson's Relevance-theoretic Comprehension Procedure outlines the processes involved in comprehending verbal communication: comprehension starts with the recovery of linguistically encoded meaning, and continues with the recovery of the explicit meaning and the implicit meaning.  The audience follows a path of least effort and stops at the first interpretation that satisfies his or her expectations of relevance.  Though Relevance Theory is situated in the field of pragmatics, it aims to lay the foundation for a unified theory of cognitive science and has been applied to thought processes in other contexts.  Therefore, we might conjecture that the cognitive processes involved in comprehending a verbal utterance could also apply to comprehending a written document.  Following Sperber and Wilson's Relevance Theory, the comprehension procedure proceeds as follows: expectations of relevance generated by the most relevant functional unit can be extended to other related functional units within the component, and further extended to more related functional units beyond the component.

On a higher level, genre itself triggers a kind of expectation of relevance.  The main claim of Relevance Theory is that the expectations of relevance raised by an utterance guide the hearer toward the speaker's meaning (Sperber & Wilson, 1995; Wilson & Sperber, 2004).  Likewise, information about text genres may influence expectations of relevance in different ways; for example, business letters create specific expectations as to how they unfold (Unger, 2006).  According to Relevance Theory, out of the many information-based stimuli to which they are exposed, humans pay most attention to information that is seemingly relevant to them.
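The comprehension procedure just described, following a path of least effort and stopping at the first interpretation that satisfies expectations of relevance, can be read as a simple search strategy.  The following sketch is purely illustrative: the function, the interpretation labels, and the numeric effort and effect scores are invented for the example; Relevance Theory itself does not assign numbers to interpretations.

```python
# Illustrative sketch only: Sperber and Wilson's comprehension procedure
# read as a least-effort search.  The "effort" and "effect" scores below
# are invented for this example, not part of the theory's formal apparatus.

def comprehend(interpretations, expected_effect):
    """interpretations: (label, effort, effect) tuples.

    Follow the path of least effort and stop at the first interpretation
    whose cognitive effect satisfies the expectation of relevance.
    """
    for label, _effort, effect in sorted(interpretations, key=lambda i: i[1]):
        if effect >= expected_effect:
            return label          # first satisfying interpretation wins
    return None                   # expectations of relevance not met

candidates = [
    ("implicit meaning", 3, 9),   # richer effect, but more costly to derive
    ("explicit meaning", 1, 5),   # cheap to recover
]
print(comprehend(candidates, expected_effect=4))   # -> explicit meaning
```

Raising the expectation (e.g. `expected_effect=8`) forces the more effortful implicit reading, mirroring the theory's claim that greater expected relevance licenses deeper processing.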
Communication claims one's attention, which implies that the information communicated should be relevant.  Applied to the digital world, a digital document is the author's attempt to communicate meaning, and thus the way the digital document is presented is inferred to be relevant by the reader.  At this point Relevance Theory meets Genre Theory: a genre has anticipated content and form serving its communicative purpose.  Genre Theory approaches from the artifact perspective, involving human communicative behavior, while Relevance Theory approaches from the user perspective, involving the cognitive processing of artifacts.  The two are connected by the mediated artifact, as utilized by information technology.  Based on Swales' genre model and Sperber and Wilson's Relevance Theory, this study aims to explore how the functions of the smallest information units can be utilized to achieve optimal relevance in the reader's interaction with digital documents.

1.3 RESEARCH OBJECTIVES AND RESEARCH QUESTIONS

This dissertation will address the following general research question: how can functional units be employed for the location and consumption of relevant information within scholarly journal articles?  Because of the variations in genres across different domains, it is beyond the capacity of a dissertation to cover all of them.  This research focuses on one genre, empirical research articles, in one specific domain, psychology.  This study targets the psychology domain because adherence to APA (American Psychological Association) style in this domain has resulted in a relatively mature research article genre.  Under investigation are empirical research articles in journals, specifically the components of Introduction, Methods, Results and Discussion, the main body of a standard empirical research article.
This research consists of two phases, each presented in separate chapters as follows:

Phase I: Identify and validate the relationships between functional units and information tasks involved in the use of scholarly journal articles (Chapters 3 & 4)

Phase II: Implement and evaluate the utilization of functional units for particular information tasks involved in the use of scholarly journal articles (Chapters 5 & 6)

Each phase includes several separate studies organized around the general objective of that phase.  In order to fulfill these two objectives, this study addresses five major research questions.  The first, second and third research questions, which develop the conceptual model informing the prototype system design, were completed in Phase I, while the fourth and fifth research questions, which address the practical design, implementation and evaluation of the conceptual model, were completed in Phase II.  The five research questions are designed to approach the research objectives step by step.  The general and subsidiary research questions are as follows:

Research Question 1: What are the most common functional units within psychology journal articles?

Research Question 2: How are functional units related to different tasks requiring use of information in psychology journal articles?
2.1 How are the IMRD components of a journal article related to different information tasks?
2.2 How are the functional units in a component of a journal article related to different information tasks?

Research Question 3: How are functional units related to each other for a particular task requiring use of information in psychology journal articles?
3.1 For a particular information task, which functional unit is first attended to?
3.2 For a particular information task, how is a functional unit related to other functional units of the same component?
3.3 For a particular information task, how is a functional unit related to other functional units of different components?

Research Question 4: Does the signaling of functional units to readers enhance reading effectiveness?
4.1a Does the signaling of functional units help readers to complete tasks more effectively?
4.1b If so, how does the signaling of functional units help readers to complete tasks more effectively?
4.2 Does the impact of functional units on effectiveness vary with reading tasks?

Research Question 5: Does the signaling of functional units to readers enhance reading efficiency?
5.1a Does the signaling of functional units help readers to complete tasks more efficiently?
5.1b If so, how does the signaling of functional units help readers to complete tasks more efficiently?
5.2 Does the impact of functional units on efficiency vary with reading tasks?

The first research question identifies sets of functional units in the case of psychology journal articles.  The second research question examines the expectations of relevance generated by functional units: the relationships of particular information tasks with a component and with the functional units within that component.  The third research question further examines the expectations of relevance generated by functional units: the relationships between the primary related functional unit and other related functional units, in the same component and in different components, for particular information tasks.  In addition to the explanatory power of Relevance Theory, it is of interest to see the ability of Relevance Theory to predict what information is likely to be relevant and what interpretive steps might be involved, which "therefore allows for the manipulation of other people's thoughts" (Yus, 2006, p. 512).  Optimal relevance is determined by the interplay of two variables, effect and effort, which are close to effectiveness and efficiency as criteria for evaluating information systems.
Research Question 4 examines whether and how the signaling of functional units supports reading from the effect aspect, while Research Question 5 examines it from the effort aspect.  Sub-questions 4.1a and 5.1a investigate the reading outcome produced by the signaling of functional units, whereas sub-questions 4.1b and 5.1b investigate the reading process under the signaling of functional units.  Sub-questions 4.2 and 5.2 ask whether these outcomes vary by information task.

To investigate the role of functional units in enhancing journal reading, it is first necessary to identify the functional units within the IMRD components of journal articles.  Functional units play a role in journal reading owing to the distinct function of each information unit, through which readers make use of the text.  Tasks provide a means to investigate the functions of information units, so it is also necessary to identify a set of tasks relevant to using information from journal articles in order to examine the effect of these functional units.  Next, the functional units and their associations with information tasks were validated by psychology journal users.  Then the validated functional unit taxonomy was designed and implemented in a prototype system for testing.

1.4 SIGNIFICANCE OF RESEARCH

The focus of current genre studies has been on using genre at the document level to support information retrieval and navigation.  Given the growing pressure within academia to read more in less time, helping readers find the relevant article or locate information is far from enough.  This research moves a step further to look at genre at the granular, within-document level.  It explores the smallest information units and exploits their functions for journal reading.
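The pipeline described above, a validated mapping from information tasks to categories of functional units driving a prototype reading system, can be pictured with a small data structure.  The following sketch is a minimal illustration only: the task names, unit names, and category labels are invented examples modeled on the primary/related/additional distinction, not the taxonomy validated in this research.

```python
# Illustrative sketch: a task-to-functional-unit mapping of the kind a
# prototype journal reading system could consult.  All task and unit
# names here are invented examples, not the validated taxonomy.

TASK_UNIT_MAP = {
    "assess novelty": {
        "primary": ["indicate a gap"],
        "related": ["review previous research"],
        "additional": ["highlight overall outcome"],
    },
    "judge methodology": {
        "primary": ["describe procedure"],
        "related": ["describe participants", "describe measures"],
        "additional": ["acknowledge limitations"],
    },
}

def units_for_task(task, max_category="related"):
    """Return the functional units to signal for a task, most relevant first."""
    order = ["primary", "related", "additional"]
    categories = order[: order.index(max_category) + 1]
    mapping = TASK_UNIT_MAP[task]
    return [unit for c in categories for unit in mapping[c]]

print(units_for_task("assess novelty"))
# -> ['indicate a gap', 'review previous research']
```

A reading interface could then highlight only these units, widening to the "additional" category when the reader asks for more context.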
By focusing on the internal components of journal articles, this research intends to model the relationships between individual functional units and information use tasks, so as to present readers with the most relevant text in the article rather than the entire article.  The utilization of functional units is expected to support not only navigating, but more importantly, close reading, comprehending, and using the most relevant information within scholarly journal articles.

1.5 OPERATIONAL DEFINITIONS

Document component: A distinguishable section within scholarly journal articles (Introduction, Methods, Results, Discussion), with identifiable structural characteristics and a conventionalized communicative purpose.

Effectiveness: The observable outcomes brought about by task completion.

Efficiency: The time and effort spent on completing a task.

Functional unit: A chunk of information within an individual document component (i.e., Introduction, Methods, Results, Discussion), with a distinct communicative function to fulfill as part of the communicative purpose of that individual component.  For example, "review previous research" in the Introduction is a functional unit, and "highlight overall outcome" in the Discussion is another functional unit.

Information (use) task: The user's activity of using the information from journal articles to achieve the goal of changing his/her state of knowledge.

Reading: A sequence of activities during the whole process of reading, including reading the contents closely, comprehending the contents, and using the contents for a purpose.

1.6 OUTLINE OF DISSERTATION

Following this chapter is a literature review in Chapter 2, covering the theoretical bases and empirical evidence for this research.  Chapter 3 describes the procedures and methods used in the data collection and analysis in Phase I of the study.
In Chapter 4, the study results with respect to functional unit identification and validation in Phase I (RQs 1, 2 & 3) are reported.  Chapter 5 describes the research methods, the procedures, and the instruments adopted in Phase II of the study, as well as the techniques used in analyzing the collected data.  Chapter 6 presents the experimental results from Phase II in terms of effectiveness and efficiency, in regard to two research questions (RQs 4 & 5).  Chapter 7 summarizes and discusses the functional unit taxonomy developed in Phase I and the main findings of the experimental study in Phase II.  Finally, Chapter 8 discusses the major outcomes of this research, and its theoretical and practical implications.  Limitations and future directions are also discussed.

2 LITERATURE REVIEW

2.1 OVERVIEW

This dissertation studies the functions of types of information within scholarly journal articles and their application in supporting information use.  The idea of functional units arises from current genre studies on document components, and the concept of functional units is derived from the move analysis led by Swales (1990, 2004).  How functional units are employed for journal reading is based mainly on Sperber and Wilson's Relevance Theory (1995).  The goal in proposing functional units is to facilitate navigating, close reading, comprehending, and using scientific information in an article.  Therefore the literature pertaining to this study falls mainly in three areas: Genre Theory, Relevance Theory, and reading.

2.2 GENRE THEORY

This section starts with a review of the various definitions of genre, followed by a discussion of genre-related research grouped into three threads.  First, current genre studies in LIS are discussed to show how genre is used to signal the function of information objects for classification, or the structure of information objects for information design.  What is discussed next is Swales'
CARS model and the use of move analysis in analyzing the rhetorical structure of IMRD components in scholarly articles.  Lastly, the existing studies on journal article component use, mainly on subject conception or information retrieval of article components, are discussed.  This review shows that the extant research in information studies has discussed genre with emphasis at the document level rather than at the within-document level.  Genre has been used more as a distinction between documents than between components of a document for information organization and design, though the inner document components deserve more attention in journal reading studies.

2.2.1 Defining Genre

Genre has been referred to as "a distinctive category of discourse of any type, spoken or written, with or without literary aspirations" (Swales, 1990, p. 33).  It can be in the oral form of communication, such as songs, stories, genealogies, poetry, and hymns, or in the literate form of communication, such as lists, e-mails, recipes, newspapers, novels, maps, journals, books, diaries, textbooks, letters, and weblogs, whether print or electronic.  These forms of communication are an integral part of human activity, and therefore a more comprehensive view of genre is that it is about a variety of forms of communication and human activity (Andersen, 2008).  Genre is treated as a pattern of regularities across four dimensions: textual features of texts, composing processes in the production of these texts, reading practices in the interpretation of these texts, and social roles performed by writers and readers connected by these texts (Paré & Smart, 1994).  Definitions of genre vary, but most of them address the essentials of the form of a document, its expected content, its intended communicative purpose, and its social acceptance (Kwasnik & Crowston, 2005).
Perhaps the most commonly cited definition is that of Orlikowski and Yates (1994): "a distinctive type of communicative action, characterized by a socially recognized communicative purpose and common aspects of form" (p. 543).  Literature on genre is profuse, and perspectives on genre are diverse, reflecting different purposes in different disciplines; among them, the North American School has exerted an enormous impact on genre studies (Freedman & Medway, 1994).  Inspired by Miller's "Genre as social action" (1994), the North American school of genre views texts within a broader context of the typified communicative situations and human activities in which the texts are employed.  As such, genre is viewed as "typified rhetorical actions based in recurrent situations" (Miller, 1994, p. 31).  It is more concerned with the action a text is used to accomplish than with its substance or form, that is, with how texts are produced and consumed to create realities of meaning, relation, and knowledge (Bazerman, 2004; Miller, 1994).

Extending Miller's concept to the study of genres for communication in organizations, Orlikowski and Yates (1994) proposed the "genre repertoire" as the set of genres routinely enacted by community members.  Identifying a genre repertoire can therefore tell us something about the established communicative practices of a community, and changes in a genre repertoire over time can reveal changes in the structuring of that community's communicative practices.  Combining activity theory and genre theory, Spinuzzi (2003) conducted four interrelated studies of traffic workers and their use of a database of traffic accidents.  Spinuzzi discussed "genre ecologies" as multiple artifacts jointly mediating activities: "workers continually draw on existing genres to develop local, ad hoc solutions to recurrent problems in their particular workplace.
They take official genres that were designed for broad situations and modify them with unofficial genres to produce solutions tailor-made for their own local situations.  In doing so, they build genre ecologies that collectively mediate their complex activities" (p. 222).  In addition, Bakhtin (1986) argued that genres are evoked by typical situations in speech communication: primary speech genres arise in everyday utterances, and from them derive the secondary speech genres of writing in novels, dramas, literary commentaries, and so on.

The various definitions show that genres are mediated artifacts through which people interact with information.  Like other genre studies, Swales' model involves mediated artifacts, manifested as the multiple "moves" and "steps" of an article.

2.2.2 Current Genre Studies

Digital genre refers to the genre of digital documents, web pages or web sites, characterized by the triplet of content, form and functionality.  Research interest in digital genres lies mainly in their evolution and function (Breure, 2001).  While most of the studies of digital genres described below focus on genre at the level of whole documents, rather than on finer-grained components within documents, their general approach to genre, as a means of supporting relevance assessment, navigation and use of texts, is closely aligned with this research.

Crowston and Williams (1997) identified 48 different web genres from randomly selected web pages (100 in one sample, and 1000 in a second).  The size of the genre repertoire was considered a reflection of the many communities on the Web and the varied uses of the medium.  Web genres were classified by Crowston and Williams (2000) as reproduced genres which migrated from traditional media (60.6%), adapted genres incorporating the linking and interactivity of the web medium (28.6%), and new genres which emerged unique to the web (5.3%).  Shepherd and Watters (1998) conducted similar research using the term "cybergenre"
and identified its subgenres: extant subgenres (from replicated to variant) and novel subgenres (from emergent to spontaneous).  A number of web genres are reproduced on or adapted to the web medium, as shown in these two studies.  Two genres have received much attention in current genre studies: weblogs and personal home pages.  The weblog emerged as a hybrid genre grown from offline genres and other online genres (Herring et al., 2005).  The personal home page has been claimed as the first true digital genre, since it evolved into its own unique standard form (Dillon & Gushrowski, 2000).

Genre-related research in LIS includes studies of knowledge organization, web design, and digital communication (Andersen, 2008).  Genre can be used as a document descriptor to improve web search effectiveness.  Another approach is to apply structural genre conventions to facilitate navigation and comprehension of digital documents.  Genre has also been investigated as social action, connected with the social structure it shapes and is shaped by.  The first two threads are more closely related to this research, as they indicate the functions of information units and the structural relations between these functional units.

The significance of genre in information studies is based on the idea that the visual appearance of a document enables one to be aware of its form, which in turn enables one to be aware of its type of content; thus distinctive and salient structural cues can tell the document's identity (Toms & Campbell, 1999; Toms, 2001).  Genre creates shared expectations of the form and content of communication; thus, if a document is produced conforming to its genre conventions, the form should inform the user about its content before he or she actually reads it.  This is critically important in facilitating the user's recognition of and interaction with a digital document.  For example, the same topic, "database design", can serve purposes either for teaching or for research.
For a teaching purpose we have genres of syllabi, assignments, class notes, etc.; for a research purpose, genres include papers, annotated bibliographies, etc.  Genre indicates the purpose of the document, which is very helpful for users when deciding whether to reject the document or to process it further.  "We suggest that enhancing document representations by incorporating nontopical characteristics of the documents that signal their purpose – that is, their genre – will enrich document (and query) representations in such a way that they resonate more truly with the information need of a user as situated in a particular context" (Crowston & Kwasnik, 2003, p. 348).

Roussinov et al. (2001) proposed a genre-based searching interface that limits searches to specified genres, visualizes the hierarchy of genres discovered in the search results, and collects user feedback on the relevance of the specified genres.  Rosso (2005, 2008) developed an 18-genre palette in the .edu domain recognizable to users; when choosing from this genre palette, users reached over 70% agreement on the genres of web pages.  Rosso then evaluated the usefulness of the genre palette for web search, with the genre of a page described in each search result.  Freund (2008a, 2008b) studied genres by linking them with situational relevance.  Situational relevance in information studies emphasizes the searcher's task situation; workplace task types were matched with corresponding genre types, so that the association between task and genre could serve as a measure of situational relevance.  Freund identified and implemented relationships between task and genre as a filtering component in an IR system for software engineers.  In a few previous studies, genre-annotated search results did not produce significant improvement in participants' relevance judgments or task performance.
This was attributed to complexities in the experimental design, but these studies were valuable explorations for subsequent research in this direction.

Current studies on structural genre conventions (Crowston & Williams, 2000; Dillon, 2000, 2008; Dillon & Vaughan, 1997; Symonenko, 2007; Toms & Campbell, 1999; Toms, 2001; Vaughan & Dillon, 1998, 2006; Vaughan, 1999) show that users' navigation and comprehension can be improved when information design conforms to genre conventions and users identify these genre conventions in the information space.  The central concept of genre, that content and form can inform each other, is contained in Dillon's proposals of "shape" (Dillon & Vaughan, 1997) and the "information model" (Dillon, 1999).  Proposed as an alternative "navigation" metaphor, "the concept of shape assumes that an information space of any size has both spatial and semantic characteristics.  That is, as well as identifying placement and layout, users directly recognize and respond to content and meaning" (Dillon, 2000, p. 523).  This idea was further elaborated as follows (Dillon, 2004, p. 118):

Thus the shape of a document can be a convention to both the writer, so that she conforms to expectations of format, and the reader, so she knows what to expect.  It can be a conveyer of context mainly to the reader so she can infer from, and elaborate on, the information provided, but it might be employed by a skilled writer with the intention of provoking a particular response in the reader.  Finally, it can be a means of mentally representing the contents to both the reader, so she grasps the organization of the text, and the author, so that she can appropriately order this delivery.

Dillon (1999) made a similar statement while addressing the "information model" as one of the four factors in his TIME framework, i.e., Tasks, Information model, Manipulation, and visual Ergonomics.
The information model refers to the user's mental model of the text or information space.  Dillon (2000) further discussed the spatial-semantic model of shape in the context of varying levels of expertise with specific information objects: expert users employed the semantic cues in structural knowledge to make sense of the information space, whereas novice users had to rely on the spatial cues in the visual display for the same purpose.  Vaughan (1999) bridged genre theory and mental representations of structure (including schema theory, mental models and strategic discourse processing), since both deal with regularities in users' conceptions of information spaces.  Mental representation, focusing on individual thought processes, "provides a natural extension to genre studies", which are situated in "a socially grounded context" (Vaughan & Dillon, 2006, p. 504).  "The value of genre springs from very real cognitive processes – the genre can serve as a form of schematic representation or scaffold for long-term memory" closely related to "behavioral practices in a community" (Dillon, 2002, p. 68).

Vaughan (1999) studied the effects of structural genre conventions on the development of mental representations of structure in the case of web-based newspapers.  The study used two designs, which either conformed to or violated the structural interface and interaction design features identified by a group of expert users.  It was found that users of the genre-conforming design viewed more in the reading task and performed more efficiently in the information-seeking tasks than those of the genre-violating design.  It was also found that genre could be formed through repeated exposure and conventionalized by shared understanding.  Over time, users of the genre-violating design increased navigational exploration in the reading task and improved navigational efficiency in the information-seeking tasks.
Symonenko (2007) extended Vaughan's work from individual web pages to the website, or a website section, as the unit of online content for research.  Based on genre theory and mental representation theory, Symonenko found that academic (i.e., university) and corporate (i.e., telecommunications business) sites demonstrated site-type-dependent emerging genres, and that users possessed site-type-dependent expectations of content structures in interaction.  Symonenko also found that the match or mismatch between expectations and actual structures affected the success of interaction.

It has been suggested that web designers should apply accepted genres appropriate for their purpose, and that for new applications they should take already accepted genres as a basis for evolution (Crowston & Williams, 2000).  That advice, which considers the user's knowledge of and experience with the genre and preserves the stability of genre, was echoed by Vaughan (1999).  The existing studies on genre-based navigation were considered insufficient by Andersen (2008), because they focused on the structures per se rather than on the communicative situation and human activity engaged by those structures.  Agre (1998) and Crowston and Williams (2000) advocated developing a sense of community, with the particular audience and the particular activities considered while designing genre.  Though it is not easy to identify a community on the web, the community may be obvious when designing genre for a corporate web site (Gonzalez & Sanchez, 2007; Marza, 2007).  Nevertheless, structuring information environments for browsing and learning remains one of the major research challenges, the other two being automatic genre classification and appropriate genre representation (Kopak et al., 2011).

2.2.3 Swales'
CARS Model and Move Analysis Move-based discourse analysis has been found useful in the teaching of English for Specific Purposes, aiming to help non-native speakers or junior scholars to ?master or control the macro level organizational structures and the micro level of linguistic features conventionally used in texts required in their disciplines and professions? (Kanoksilapatham, 2003).  Swales? approach has been influential in move analysis in the field of English for Specific Purposes. ?A ?move? in genre analysis is a discoursal or rhetorical unit that performs a coherent communicative function in a written or spoken discourse? (Swales, 2004, p. 228).  Swales (1981) claimed four distinct moves as used in research article introductions: Move 1 ? establishing a territory; Move 2 ? Summarizing previous research; Move 3 ? Establishing a niche; and Move 4 ? Occupying the niche.  After further research, Swales (1990) modified his framework by merging the first two moves, resulting in the three move-step patterns known as the CARS (Create a Research Space) model.  The CARS model was for writing academic introductions based on a move analysis of 48 articles in the ?hard? sciences, social sciences, and life & health sciences.  Swales also presented examples to illustrate these moves and steps.  The three move-step patterns and examples (1990, p. 141) are shown in Table 2.1.    	22??Table 2.1: Swales? CARS Model Moves Steps Examples Move 1: Establishing a territory Step 1: Claiming centrality and/or   The study of ? has become an important aspect of ? A central issue in ? is the validity of ? Step 2: Making topic generalization(s) and /or  The aetiology and pathology of ? is well known. A standard procedure for assessing has been ? There are many situations where ? Step 3: Reviewing items of previous research  X was found by Sang et al. (1972) to be impaired. Chomskyan grammarians have recently ? 
Move 2: Establishing a niche
  Step 1A: Counter-claiming, or
    "Emphasis has been on ..., with scant attention given to ..."
  Step 1B: Indicating a gap, or
    "The first group ... cannot treat ... and is limited to ..."
  Step 1C: Question-raising, or
    "Both suffer from the dependency on ..."
  Step 1D: Continuing a tradition
    "A question remains whether ..."

Move 3: Occupying the niche
  Step 1A: Outlining purposes, or
    "The aim of the present paper is to give ..."
  Step 1B: Announcing present research
    "This study was designed to evaluate ..."
  Step 2: Announcing principal findings
    "The paper utilizes the notion of ..."
  Step 3: Indicating RA structure
    "This paper is structured as follows ..."

According to Swales, the overall communicative purpose of the genre is achieved by a series of moves.  A move is a section of text which performs a specific communicative function.  Each move may contain a set of steps with which the move is realized.  In this way, the overall meaning of "introduction" is realized through a sequence of moves, each of which is realized through several steps.  The boundary between moves is indicated by changes in the type of information communicated.  Swales (2004) later revised his model again while keeping three distinct moves: Move 1, establishing a territory; Move 2, establishing a niche; and Move 3, presenting the present work.  In the revised model, he added potential cycling of Move 1 and Move 2 sequences and changed the steps included in each move.  There is only one step in Move 1, "topic generalizations of increasing specificity".  There are two steps in Move 2: step 1, "indicating a gap" or "adding to what is known", and an optional step 2, "presenting positive justification".
In Move 3, step 1 is obligatory, "announcing present research descriptively and/or purposively", while steps 2-4 are optional: "presenting RQs or hypotheses", "definitional clarifications", and "summarizing methods".  The remaining steps are probable only in some fields: "announcing principal outcomes", "stating the value of the present research", and "outlining the structure of the paper".  Swales' revised model overcomes the shortcomings of his previous version.

Swales' model (1981, 1990) has inspired studies of rhetorical organizational patterns in the introductions of other disciplines and genres.  Crookes (1986) coded a corpus of 96 introductions from 12 journals in three areas (hard sciences, biology/medical sciences, and social sciences) according to Swales' 1981 move system.  Crookes criticized Swales' model for unclear descriptions of moves 1 and 2 and for a rigid 1-2-3-4 sequence of moves.  By applying Swales' framework to Master of Science dissertations, Hopkins and Dudley-Evans (1988) found that dissertation introductions were longer and followed a cyclical pattern.  Samraj (2002) applied Swales' CARS model to the introductions of research articles in wildlife behavior and conservation biology.  Samraj found disciplinary variation between the structures of wildlife behavior introductions and conservation biology introductions, and that a step might be embedded within other steps.  Posteguillo (1999) compared 40 introductions in a computer science corpus with Swales' CARS model.  Posteguillo found that Move 1 Step 3 (literature review) was not always used in computer science introductions, owing to the relative newness of the discipline and its commercial orientation, whereas Move 3 Step 3 (describing structure) was prevalent, owing to the lack of systematicity in the overall structure of computer science articles.  The above studies demonstrate the wide applicability of Swales'
model and the variations in genres and domains to which it has been applied.

Originally proposed for teaching academic and research writing, Swales' model has stimulated research on English use across a variety of genres.  There are a number of studies on various components of research articles, including analyses of the individual components Methods (Bruce, 2008), Results (Brett, 1994; Thompson, 1993), and Discussion (Hopkins & Dudley-Evans, 1988), of a combination of components (Lewin et al., 2001), and of all four components (Kanoksilapatham, 2005; Nwogu, 1997).  The academic disciplines involved include biochemistry, medicine, and the social sciences, among others.

Swales' framework established a firm foundation for move analysis of the Introduction component of journal articles and has inspired subsequent work on Introductions.  Other move analysis studies of introductions, such as Kanoksilapatham (2005), Lewin et al. (2001), and Nwogu (1997), share a high similarity with the move patterns proposed by Swales.  As noted by Swales, differences in rhetorical structure across disciplines are not as obvious in the Introduction and Discussion components as in the Methods and Results components.  The Methods component has received scant attention in move analysis, probably because it is highly specialized and heavily content-oriented.  For the Results component, some move analyses simply describe data in an objective manner, while others include interpretation of the data as well.  The Discussion component is the most important and also the most difficult component to write, and thus the outcomes obtained from move analysis are more varied than those for the Introduction.  Even the titles representing the Discussion component vary: Results and Discussions, Discussions, Conclusions, etc.  Lewin et al. (2001) observed some discrepancies when comparing their study with five previous studies on "moves" in the Discussion component.
In all, move analysis studies demonstrate a general pattern of rhetorical organization that predominates in the IMRD components of a scholarly article.  Nevertheless, there is evidence of variation across academic disciplines.  Lewin et al. (2001) perhaps did the most substantial work after Swales.  They conducted thorough analyses of the Introduction and Discussion components of a social science corpus on a macro (moves) level and a micro (acts) level.  For the Introduction component, they specified obligatory and optional acts, while for the Discussion component, they further specified pre- and post-head acts as optional acts.  Some studies indicate steps under each move, equivalent to acts, such as that of Swales (1990), while other studies do not specify hierarchical levels for these units, such as that of Hopkins and Dudley-Evans (1988).  Even for the same article component in similar disciplines, the results obtained from move-based studies vary.  Nwogu (1997) posited two moves for the Results component in medicine, whereas Kanoksilapatham (2005) found four moves for this component in biochemistry.

The move patterns of abstracts look like a synopsis of the structure of the whole article.  A move analysis of abstracts in protozoology showed that an abstract was encapsulated in five moves: relation to other research, purpose, methodology, summarizing the results, and discussing the research (with sub-moves conclusions and recommendations) (Cross & Oppenheim, 2006).  By investigating paper abstracts published in international and Slovenian scientific journals in pharmacology, sociology, and linguistics and literature, Sauperl et al. (2008) observed that abstract structure varied with discipline rather than adhering to a standard pattern (i.e., Introduction-Methods-Results-Discussion), though abstracts shared certain common features; additionally, even within the same discipline, abstract structure might vary with the focus of journals and papers.
The essence of moves was captured by Swales' three-level genre model as follows:
a. Communicative purpose, realized by
b. Move structure, realized by
c. Rhetorical strategies

Extending Swales' three-level genre model, Askehave and Nielsen (2005) developed a two-dimensional model (reading mode and navigating mode) for digital genres in the case of a corporate homepage.  By incorporating media elements into the concept of genre, they indicated that communicative purpose was accomplished by the navigating mode as well as by the reading mode.  The prototypical moves on the homepage in the reading mode included attracting attention, greeting, identifying sender, indicating content structure, detailing (selected) content, establishing credentials, establishing contact, establishing a (discourse) community, and promoting an external organization.

As noted by Swales, a genre comprises "a class of communicative events, the members of which share some set of communicative purposes" (1990, p. 58).  Askehave and Swales (2001) indicated that communicative purpose should not be the sole criterion for genre identification.  Instead, they suggested starting from the linguistic text, or alternatively from the social context, supplemented with communicative purpose.  As a result, Swales (2004) proposed two procedures for genre analysis:

- A text-driven procedure for genre analysis:
  Structure + style + content + "purpose" → "genre" → context → repurposing the genre → realigning the genre network
- A situation-driven procedure for genre analysis:
  Identifying a communicative situation → goals, values, material conditions of groups in the situation → rhythms of work, horizons of expectation → genre repertoires and etiquettes → repurposing the selected genres →
textual and other features of the genres

Bhatia (1993) also proposed steps for analyzing a genre: placing the given genre-text in a situational context; surveying the existing literature; refining the situational/contextual analysis; selecting a corpus; studying the institutional context; conducting linguistic analysis at several levels (analysis of lexico-grammatical features, analysis of text-patterning or textualization, and structural interpretation of the text-genre); and drawing on specialist information in genre analysis.  These seven steps should be considered partially or wholly depending on the purpose of the analysis, the aspect of genre being focused on, and the background knowledge of the genre in question (p. 22).

2.2.4 Journal Article Component Use

Current genre research in information studies has focused on genre at the document level; that is, people pay more attention to the differences between the genres of a book, a journal article, a manual, and so on than to the differences between the genres of the internal components of these documents.  Nevertheless, a few studies have examined article components.  According to Dillon (2004), a component is a part-genre of a journal article.  Vaughan and Dillon (1998) recruited expert users to categorize a set of paragraphs according to where they belong in an academic journal article: Introduction, Methods, Results, or Discussion.  The experts' verbal protocols were subjected to "how, why, what" content analysis.  IMRD components were found to have well-established roles to play: how they are read, why they are read, and what content they should contain.  Five functions each were identified for the Introduction and Discussion, two functions for the Results, and one function for the Methods component.  However, the "how, why, what" of reading were identified from users' conceptions of article components in a general sense rather than from the documents themselves.
IMRD components were also discussed briefly in related work that considered the role domain expertise could play in helping users locate information in articles.  In an experiment reported by Dillon and Schaap (1996), experts and non-experts in scientific journal articles were asked to classify paragraphs according to their location in a typical journal article's structure.  Results showed that experts outperformed non-experts in classifying paragraphs, both with and without structural cues.

Bishop and her colleagues (Bishop, 1998, 1999; Bishop et al., 2000) referred to parts of articles as components: title, section headings and subheadings, tables, figures, captions, reference list, individual references, abstract, author-assigned keywords, author names, author affiliations, author contact information, article sections and subsections, footnotes, endnotes, appendices, paragraphs, sentences, noun phrases, words, external linked information related to the paper, etc.  The functions of journal article components were to support finding relevant documents, assessing document relevance before retrieval, reading articles, creating document surrogates, and reaggregation and integration into new documents.  Five basic functions were also found to be associated with reading articles: orientation, providing an overview, directing attention, aiding comprehension, and triggering further reading (Bishop et al., 2000).  They found that readers tended to extract individual components from journal articles and incorporate them into their own writing.  This idea was applied by Sandusky and Tenopir (2008) to the components of tables and figures.  However, the implementations of this idea, whether through Bishop's DeLIver testbed or Sandusky and Tenopir's ProQuest CSA prototype, focused on extracting logically discrete components from their embedding articles for the sake of searching and viewing.
This raised the question of whether an individual component can stand alone for understanding, and what minimum information is necessary for a stand-alone component to be understood (Sandusky & Tenopir, 2008).  Other studies on structured document retrieval, while addressing the importance of document parts in relation to document structure, do not consider genre conventions; for example, Crestani et al. (2004) divided Shakespeare's plays into acts, scenes, and speeches and displayed them through a new graphical user interface for structured documents.

Table 2.2 below summarizes the major previous studies of journal article component use in terms of descriptions of the studies, the document components studied, the identified functions of document components, and the methods adopted.  These studies show that document components have their own functions in the use of documents, as indicated especially by Vaughan and Dillon's study of IMRD components; thus the identification and utilization of individual functional units within components is a potential area to explore.  Swales' model inspires the development of a functional unit taxonomy manifesting types of information within components.  The previous studies of genre theory in information classification and design provide evidence that employing functional units in a structured document design is meaningful in information studies.

Table 2.2: Summary of studies on journal article component use

Bishop (1998, 1999), Bishop et al. (2000)
  Description: Disaggregate and reaggregate journal article components
  Document components:
    Search: article title; title, heading or caption; article author name; any author name; anywhere in article body; abstract; table text; figure caption; cited references; journal title
    View: article citation; author affiliations; abstract; links to the article's full bibliographic record in INSPEC and Compendex; link to the full text of the article; lists of, and links to, the article's figures and tables; references; links to any articles in DeLIver that cite the article
  Functions: finding relevant documents; assessing document relevance before retrieval; reading articles (getting oriented, i.e., forming an impression of the article's content and relevance; getting an overview of how the article is organized; directing attention, i.e., deciding what to skim, skip, or read carefully; aiding comprehension; triggering further reading by suggesting gaps in, or additional sources of, information); creating document surrogates; reaggregation and integration into new documents
  Methods: DeLIver testbed; focus groups; interviews; observations; usability testing; user registration; transaction logging; user surveys

Dillon (2004, chapter 6)
  Description: Capture process data on reading (why, what, how)
  Document components: table of contents; abstract; introduction; section headings; diagrams; tables; conclusions
  Functions: orientation; overview; comprehension
  Methods: simulated usage; concurrent verbal protocols

Sandusky & Tenopir (2008)
  Description: Find and use tables and figures from journal articles
  Document components: tables; figures
  Functions: finding relevant components and documents; assessing relevance and interpreting components; component reaggregation and use
  Methods: ProQuest CSA prototype; pre-search questionnaire; observation/think-aloud; structured diaries; post-search questionnaire

Vaughan & Dillon (1998)
  Description: Expertise in understanding journal articles
  Methods: verbal protocols
  Introduction functions: set up the author's study; make an argument; set up past studies as a foil for presenting a new study; set up the background; justify a current or future study
  Methods component functions: explain the procedures used in the study
  Results functions: display and report on the results;
report on the types of analyses that were done
  Discussion functions: justify the study's results; admit mistakes; interpret the results; provide alternative explanations of causation; argue for implications of what was observed

2.3 RELEVANCE THEORY

This section reviews the main themes of Relevance Theory, proposed by Sperber and Wilson as a linguistic theory; continues with the rich notion of "relevance" in the area of information studies; and then examines the impact of Sperber and Wilson's Relevance Theory on information studies.  A few, yet significant, studies bridging Relevance Theory and the concept of genre, such as Unger's work, are also addressed.  This review shows that Sperber and Wilson's Relevance Theory provides a cognitive approach to recognizing and assimilating genre information contained in journal articles.

2.3.1 Defining Relevance Theory

The publication of the book Relevance: Communication and Cognition, co-authored by Sperber and Wilson in 1986, marks the establishment of Relevance Theory.  Originally proposed as a theory of everyday speech utterances, Relevance Theory aims to lay the foundation for a unified theory of cognitive science, for it attempts to answer "not only philosophical questions about the nature of communication, but also psychological questions about how the interpretation process unfolds in the hearer's mind" (Wilson, 2001, p. 1).  In their second edition, Sperber and Wilson (1995) differentiate between two principles of relevance (p. 260):

Cognitive Principle of Relevance: Human cognition tends to be geared to the maximization of relevance.
Communicative Principle of Relevance: Every act of ostensive communication communicates a presumption of its own optimal relevance.
The Cognitive Principle of Relevance claims that the human cognitive system has developed such that human beings automatically tend to perceive potentially relevant stimuli, activate potentially relevant assumptions, and process them in the most productive way.  An input, which can be a sight, a sound, or a memory, is relevant when it has some positive cognitive effects in a context.  Cognitive effects are yielded when the input strengthens existing assumptions, contradicts existing assumptions, or combines with existing assumptions to yield contextual implications.  The greater the cognitive effects achieved, the greater the relevance.  However, the processing of input and the derivation of cognitive effects cost some mental effort.  The smaller the processing effort required, the greater the relevance.  Relevance is thus determined by the interplay between cognitive effects and processing effort.  Consider the following examples:

(a) The next bus to Broadway Station is at 8:24 am.
(b) The next bus to Broadway Station is after 8 am.
(c) The next bus to Broadway Station is 36 minutes before 9 am.

Though all three statements may be relevant, statement (a) is more relevant than either (b) or (c).  Statement (a) is more relevant than (b) for reasons of cognitive effects, because it achieves all the consequences derivable from (b) and more besides.  Statement (a) is more relevant than (c) for reasons of processing effort, because it yields the same consequences as (c) with less effort, since (c) requires a calculation.  However, the least effort does not always yield the greatest effects; extra effort can also produce extra effects.  The demand for additional effort encourages the search for additional effects, effects that would not be achieved with less effort.
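Sperber and Wilson define relevance comparatively, not numerically, but the ordering in the bus-time example can be mimicked with a toy ranking in which each statement is assigned invented counts of derivable consequences ("effects") and inference steps ("effort"); the numbers below are illustrative only and are not part of the theory:

```python
def relevance_order(statements):
    """Rank statements: greater cognitive effects first; at equal
    effects, smaller processing effort first."""
    return sorted(statements, key=lambda s: (-s["effects"], s["effort"]))

# Invented scores for the three bus-time statements above.
statements = [
    {"id": "a", "text": "The next bus is at 8:24 am.",             "effects": 3, "effort": 1},
    {"id": "b", "text": "The next bus is after 8 am.",             "effects": 1, "effort": 1},
    {"id": "c", "text": "The next bus is 36 minutes before 9 am.", "effects": 3, "effort": 2},
]

order = [s["id"] for s in relevance_order(statements)]
print(order)  # ['a', 'c', 'b'] — (a) outranks (b) on effects and (c) on effort
```

The ranking reproduces the argument in the text: (a) dominates (b) because it yields strictly more effects at the same effort, and dominates (c) because it yields the same effects at less effort.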
Optimal relevance strikes a balance between these two variables, contextual effects and processing effort: the utterance achieves enough contextual effects to be worth the audience's attention on the one hand, and puts the audience to no gratuitous processing effort in achieving those effects on the other (Wilson, 1994).

The communication model advocated by Sperber and Wilson is an ostensive-inferential one.  From the communicator's perspective, communication is a process of ostension: the communicator tries to make manifest both her intention and the fact that she has this intention.  From the audience's perspective, communication is a process of inference: the audience tries to infer the communicator's intention, first by decoding the literal meaning, then by inferring the intended meaning.  Since the audience only pays attention to an input that seems relevant enough, the communicator can produce an ostensive stimulus and encourage the audience to presume it is relevant enough.  Hence the Communicative Principle of Relevance.  An intentional communication conveys the presumption of optimal relevance: it is at least relevant enough to be worth the audience's processing, and it is the most relevant one the communicator could have made given her abilities and preferences.  If the communicator leaves an empty glass on the desk, you may conclude that she might like a drink.  If the communicator waves the empty glass at you, you will conclude that she would like a drink.  An utterance is only worth processing if it is more relevant than any alternative input available at the time.  If the communicator says she has written a third of the paper, you assume that she has written only a third of the paper, because if she had written more, she would have said so.  The utterance is the most relevant one the communicator is willing and able to produce.
Comprehension starts with the recovery of the linguistically encoded meaning and continues with enrichment of meaning at the explicit level and complementing of meaning at the implicit level.  The audience follows a path of least effort and stops at the first interpretation that satisfies their expectations of relevance.  This is the relevance-theoretic comprehension procedure (Wilson & Sperber, 2004, p. 613):

a. Follow a path of least effort in computing cognitive effects: Test interpretive hypotheses (disambiguations, reference resolutions, implicatures, etc.) in order of accessibility.
b. Stop when your expectations of relevance are satisfied (or abandoned).

If Mary utters "it is cold in here", we first take what is literally said, then arrive at its explicature, "the temperature in the room is low", and move on to its implicated premise, "Mary is freezing", from which we may derive an implicated conclusion, "the window should be shut".  It is reasonable for the audience to follow a path of least effort because relevance varies inversely with effort, and it is reasonable for the audience to stop at the first interpretation that satisfies their expectations of relevance, because the communicator, if she conforms to the presumption of optimal relevance, is expected to make her utterance as easy as possible to understand within the limits of her abilities and preferences.

Van der Henst and Sperber (2004) reviewed experimental tests of the Cognitive Principle of Relevance and the Communicative Principle of Relevance.  The Wason Selection Task, used in their experiments, investigated which of four cards (each with a number on one side and a letter on the other) were selected.  Two cards were presented letter-side up and the other two number-side up, to be turned over to check whether a conditional statement was true, in scenarios combining effect and effort factors: effect-/effort+, effect-/effort-, effect+/effort+, effect+/effort-.
They found that the cards selected differed across conditional statements under the manipulation of the effort and effect factors.  The Speech Production Task investigated whether replies to a stranger asking the time were accurate to the minute or rounded to the nearest multiple of five, in three scenarios: whether the stranger was wearing an analogue or a digital watch; whether participants simply asked for the time or indicated they were setting their watch while asking; and whether participants indicated an upcoming appointment in more or less than 15 minutes while asking.  They found that replies tended to be made so as to minimize the audience's effort.  They concluded that the relevance-guided comprehension procedure could be experimentally tested by manipulating the effort factor, by changing the order of accessibility of various interpretations, or by manipulating the effect factor, by making a specific interpretation more or less likely to satisfy the expectations of relevance.

After two decades of development, Relevance Theory has demonstrated its enormous impact as a major component of pragmatics research, showing wide applicability to a variety of areas such as grammar, humor, media discourse, literature, politeness, and translation.1

2.3.2 Applications of Relevance Theory in Information Studies

2.3.2.1 "Relevance" in Information Studies

Relevance has long been a fundamental and central concept in information science, in particular as the primary criterion for designing and evaluating information retrieval systems.  Much has been written about relevance over the past half century, including diverse and even contradictory approaches.  I review here only a selection of items most closely related to Relevance Theory as proposed by Sperber and Wilson.
Though definitions of relevance in information studies are often fuzzy and vague, relevance may be understood as "a relation between information or information objects (...) on the one hand and contexts, which include cognitive and affective states and situations (information need, intent, topic, problem, task ...) on the other hand, based on some property reflecting a desired manifestation of relevance (topicality, utility, cognitive match ...)" (Saracevic, 2007a, p. 1918).  The manifestations of relevance are generally considered to take the following four forms: topical relevance, cognitive relevance, situational relevance, and affective relevance (Saracevic, 1996, 2007a; Xu, 2007).

In the literature, there is a division between system-oriented relevance and user-oriented relevance, with the former emphasizing the matching between a query and information or information objects by means of an algorithm, and the latter dealing with the relationship between the information and the user's information needs or problem situations.  Relevance studies in information science show a growing understanding of the complexity of relevance by exploring its nuances and associating it with information behavior.  Schamber, Eisenberg and Nilan (1990) characterized relevance as a multidimensional cognitive concept, a dynamic concept, and a complex but systematic and measurable concept.  Saracevic (1996) proposed a stratified model in which the user and the computer interact via different strata or levels: on the user side are cognitive, situational, and affective layers; on the computer side are layers of content, processing, and engineering.

1 The Relevance Theory Online Bibliographic Service offers a glimpse of this extensive research (http://www.ua.es/personal/francisco.yus/rt.html).  This site was created and has been updated by Francisco Yus at Universidad de Alicante.
Saracevic further stated that there exists an interdependent system of relevancies, with interplay within and between them, illustrated by five manifestations: system or algorithmic relevance, topical or subject relevance, cognitive relevance or pertinence, situational relevance or utility, and motivational or affective relevance.  Studying four domain searching tasks using the Google search engine, Toms et al. (2005) identified three underlying constructs of relevance (user, system, and task) and developed eleven measures for Saracevic's five manifestations of relevance, further refining Saracevic's dimensions.

Directly related to the linguistic theory of relevance, cognitive relevance is taken as the "relation between the cognitive state of knowledge and of a user, and information or information objects", with cognitive correspondence, informativeness, novelty, information quality, etc. as criteria (Saracevic, 1996, 2007a).  Cosijn and Ingwersen (2000) suggested a wide range of measures for cognitive relevance based on attributes of relevance: relation (state of knowledge/cognitive information need and information objects); intention (highly personal and subjective, related to information need, intentions, and motivations); context (dependence on the user's/assessor's context); inference (a subjective and individualized process of cognitive/pragmatic interpretation, selection, and filtering); and interaction (relevance judgments being content, feature, form, and presentation dependent).  Toms et al. (2005) used "certainty" and "modified queries" to measure cognitive relevance, treating "certainty" as a signal of a good match from a user perspective and "modified queries" as a signal of a mismatch from a system perspective.
In a non-problem-solving context, such as searching for information for epistemic value or entertainment, Xu (2007) identified correlated informative relevance (cognitive relevance) and affective relevance, with novelty, reliability, and topicality as key aspects of informative relevance, and topicality and understandability as key aspects of affective relevance.  Xu (2007) further differentiated cognitive relevance from situational relevance by cognitive relevance's focus on informativeness rather than problem solving.  However, "informativeness" was proposed by Jean Tague (1992) as a measure of task outcome, the resolution of a problematic situation (as cited in Belkin, 2010).  It seems that measures of cognitive relevance are not consistent across studies.

Another emphasis of relevance studies is relevance criteria.  It has been found that a finite set of criteria is employed by different users, classes of users, tasks, or stages of progress in tasks, though applied with different weights, as summarized by Saracevic (2007b, p. 2130):

- Content: topic, quality, depth, scope, currency, treatment, clarity
- Object: characteristics of information objects, e.g., type, organization, representation, format, availability, accessibility, costs
- Validity: accuracy of information provided, authority, trustworthiness of sources, verifiability
- Use or situational match: appropriateness to situation or tasks, usability, urgency; value in use
- Cognitive match: understanding, novelty, mental effort
- Affective match: emotional responses to information, fun, frustration, uncertainty
- Belief match: personal credence given to information, confidence

Tombros et al. (2005) identified web page content, structure, and quality as the three most important categories for assessing the relevance of web pages in information seeking tasks.  Features belonging to the structure category included layout, links, link quality, and table data/table layout.
Ahn (2003) found that 89% of the criteria employed during web browsing matched 32 of the 40 criteria in the "User Criterion Scale Instrument" developed by Schamber and Bateman for traditional bibliographic information retrieval.  Link title was the web page element most often used for a link-following decision and was also rated the most important web page element, followed by descriptions and others, including graphics (pictures, maps and icons), search-word box (pull-down menu), precedent heading, embedded links, and URL information.

One theory closely related to relevance assessment in the web environment is information scent.  Information scent theory derives from information foraging theory, which claims that, like animals foraging for food, humans forage for information by maximizing the rate of valuable information gained per unit cost (Pirolli & Card, 1999).  This closely resembles the relevance assessment criteria of effects and effort in striking a balance between gain and cost.  Thus information at hand has a scent by which to judge where to go for the desired information: "users' navigation in the Web environment can be seen as involving assessments of proximal information scent cues in order to make action choices that lead to distal information sources" (Pirolli, 2004, p. 3).  These proximal cues could be bibliographic citations, WWW links, or icons representing the sources (Pirolli & Card, 1999).

2.3.2.2 Impact of Relevance Theory on Information Studies

Saracevic (2007a) noted that relevance in communication, that is, the Relevance Theory proposed by Sperber and Wilson, has had more impact on thinking about relevance in information studies than thoughts on relevance from other fields.  Harter (1992) made the first attempt to apply Relevance Theory to information studies by proposing psychological relevance, which was based on the essence of relevance in Relevance Theory: any information that causes cognitive change is relevant.
"'Information need' ... is the current cognitive state of an information seeker and, as such, is fluid, constantly changing" (Harter, 1992, p. 606).  Harter rejected topicality as the sole criterion for evaluating relevance in information retrieval, since whether information was relevant depended on "new cognitive connections, fruitful analogies, insightful metaphors, or an increase or decrease in the strength of a belief" (p. 612).  Harter showed how a set of retrieved bibliographic citations considered irrelevant to the topic of the search turned out to be relevant, because the citations caused the creation of a new context and a set of cognitive changes in that context.  Harter even claimed that psychological relevance could serve as a unified theory underlying a number of existing information-seeking models.  Unlike his earlier review (Saracevic, 1996), which criticized psychological relevance as a limiting framework for relevance in information studies, Saracevic's most recent review (2007a) argued that Harter did not successfully build a firm theoretical footing for relevance in information studies but pointed the direction and elicited discussion.  More recent research in this vein by White (2007) used cognitive effects and processing effort within Relevance Theory to indicate term frequencies (effects) and inverse document frequencies (effort) in a two-dimensional pennant diagram of bibliometric retrieval.  Works cocited with a seed author, authors cocited with a seed author, or books and articles cocited with a seed author were shown to be distributed in three sectors of a pennant diagram to visualize the degree of relevance.  In commenting on these two works, Saracevic (2007a) noted that, unlike Harter, White put his constructs to practical work, yet both shared the weakness of Sperber and Wilson's Relevance Theory itself: they lacked experimental or observational verification.
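White's pairing of cognitive effects with cocitation frequencies and processing effort with idf-style weights can be made concrete with a small sketch.  All names, counts, and weighting choices below are hypothetical; this is a minimal illustration of the idea, not a reconstruction of White's actual procedure:

```python
import math

# Hypothetical citation counts for three works cocited with a seed author.
cocitations_with_seed = {"work_a": 120, "work_b": 15, "work_c": 3}
total_citations = {"work_a": 5000, "work_b": 60, "work_c": 200}
COLLECTION_SIZE = 100_000  # assumed size of the bibliographic collection


def pennant_coordinates(work):
    """Return (effort, effects) coordinates for one cocited work."""
    # Effort axis: an idf-style rarity weight -- works cited rarely overall
    # take more processing effort to place in context.
    effort = math.log(COLLECTION_SIZE / total_citations[work])
    # Effects axis: log cocitation frequency with the seed -- a stronger
    # association with the seed suggests greater cognitive effects.
    effects = math.log(1 + cocitations_with_seed[work])
    return effort, effects
```

Plotting effort against effects for every cocited work would then distribute the works across the sectors of a pennant-shaped diagram, with the degree of relevance visible in a work's position.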
Besides the above two applications in information retrieval, Relevance Theory has also found a place in the study of web hypertext.  Tosca (2000) exploited Relevance Theory fully.  She adapted the Communicative Principle of Relevance as "every link communicates a presumption of its own optimal relevance": a hypertext node and its links were viewed as an utterance from the writer, and the reader's actions (choosing links and moving through the text) were viewed as the response to the writer.  Double inferences were involved in link traversal, a kind of movement of meaning: before traveling the link, an expansive movement to generate a set of implicatures; after traveling the link, a concretive movement to identify the right one from the set of available implicatures.

More significantly, Tosca applied the concepts of cognitive effects and processing effort to produce the following guidelines for writing relevant hypertexts, depending on the kind of interpretive movement to be provoked.

Maximal (Informational) Cognitive Effects at Minimum Processing Effort, for presenting concrete information so that the reader knows where she is and where she can go at all times:
1. Provide descriptive links: efficient anchors avoiding ambiguity;
2. Suggest few strong implicatures;
3. Make clear where you are going (definition, bibliography, a related quote ...); if there are different kinds of links, find a way to visually distinguish them;
4. Provide navigational aids (buttons, maps ...): links with known destinations;
5. Provide indexes and other ways of integrating nodes into wider structures.

Maximal (Lyrical) Cognitive Effects at Increased Processing Effort, for taking advantage of hypertext's power of suggesting implicatures for readers to explore:
1. Provide evocative links: words that are highly meaning-charged in their relationship to the rest of the text;
2. Suggest many weak implicatures;
3. Play with different linking schemes (but ideally, separate the informative links, like bibliography, from the "lyrical links");
4. Let the reader make out the structure of the hypertext, but give her evidence to gather in doing so;
5. Play with the reader's expectations when traversing links, rewarding the exploration of implicatures that enrich the context.

Tosca further suggested that the second structure, the one with maximal cognitive effects at increased processing effort, was applicable to non-fictional as well as fictional hypertext, and that hypertext lent itself especially well to this structure because of its lyrical quality.  Laine (2003) extended Tosca's pragmatics of links to all clickable textual labels on hyperlinks and buttons, namely interactive texts (i-texts).  Laine argued that i-texts could and should encourage users to interact by assigning the user an active role, challenging the user to interact, and providing the user with sufficient information about the consequences of the interactive operation.  Laine discussed the impact of the explicit information of i-texts on concrete user action, virtual interactive operation, and target page content, but mainly used the concept of explicitness within Relevance Theory to show how the explicitness of the interactive text could enhance the strength of interactivity in the setting of online shopping.

Though still in search of a grand theory, "relevance" in information studies has been inspired by aspects of Relevance Theory.  This is manifested particularly in psychological relevance (Harter, 1992) and the assessment criteria of effects and effort (White, 2007).  Sperber and Wilson's Relevance Theory has had a greater effect on the idea of "relevance" in information studies than any other theory (Saracevic, 2007a).
However, examining the work of Tosca and Laine, which starts from Relevance Theory itself, we may find that the notion of cognitive relevance in information studies is an application of Relevance Theory that does not cover all aspects of the theory.  The interpretation of cognitive relevance given by either Toms et al. or Xu is dramatically different from that of Harter.  What the term "relevance" in information studies has in common with Sperber and Wilson's use of the term is that relevance can be assessed at the psychological level, and that relevance can be measured by effects and effort.

The Relevance Theory proposed by Sperber and Wilson, particularly the Relevance-theoretic Comprehension Procedure, provides the theoretical motivation for this research.  The Relevance-theoretic Comprehension Procedure indicates a path along which humans cognitively pursue relevance in comprehending verbal utterances.  The way the functional units are organized and presented simulates people's comprehension path, extending from linguistically encoded meaning to explicit meaning and further to implicit meaning.  Thus the Relevance-theoretic Comprehension Procedure provides a way to organize and present functional units to meet readers' expectations of relevance.

2.3.3 Bridging Genre and Relevance

Genre falls within the scope of social pragmatics, the study of how social information enters into communicative behavior.  This can usefully be combined with another dimension of pragmatics, cognitive pragmatics, the study of how the inferential processes in comprehension work, to which Relevance Theory belongs (Escandell-Vidal, 2004; Unger, 2002).  From the perspective of Relevance Theory, Unger (2002, 2006) provided a cognitive-pragmatic explanation for the social-pragmatic phenomenon of genre.
Unger suggested that genre information enters into the comprehension procedure to provide contextual assumptions for the inferential process, thus fine-tuning expectations of relevance.  Genre information can generate more or less precise expectations of relevance: rather precise ones, such as which utterances to expect in which sequence; or less precise ones, such as the form and content of the text, the kind of cognitive effects, or the level of relevance to be expected.  In discussing the interrelationship between genre, global coherence, and relevance, Unger (2006) argued that coherence did not allow for locally unconnected yet acceptable sequences in texts, which could, however, be fully explained by relevance, and that genre could be incorporated into the relevance-theoretic comprehension procedure to account for the influence of genre on comprehension.  Nevertheless, Unger's account is limited to linguistic discourse.

Yus (2007) extended the idea of interrelated genre and relevance to weblog templates in stabilizing the weblog genre.  Relevance Theory differentiates procedural meaning (words encode the manipulation of conceptual representations) from conceptual meaning (words encode concepts).  Yus (2007) considered that weblog templates possess procedural quality.  In other words, verbal or visual features of weblogs can trigger an instant identification of the weblog genre, and "genre identification is bound to save mental effort and direct the addressee towards particular interpretive paths and lead to specific expectations of weblog information" (p. 124).

The above studies position the concept of genre within a relevance-theoretic framework, in response to the criticism of Relevance Theory as being asocial.  Other remarks (Escandell-Vidal, 2004; Sperber & Wilson, 1997), though not directly addressing genre, also point out that Relevance Theory does not exclude social considerations.
2.4 READING

This section presents a selective review of research on reading.  The topics fall into the following categories: semantic navigation, reading patterns, and information use, which address different stages in a complete reading process.  The review shows that information use remains the least understood of these to date, and that document structural conventions play a cognitive role in people's navigating, close reading, comprehending, and using of information within an article.

2.4.1 Semantic Navigation

It is thought that people face similar problems and undertake similar activities in information spaces as they do in geographical spaces: wayfinding, exploration of the space, and object identification (Benyon, 2007).  Defined as "moving oneself sequentially around an environment, deciding at each step where to go" (Jul & Furnas, 1997), navigation in the electronic world is as concerned with position and course as navigation in the physical world.  Dillon and Vaughan (1997) pointed out that navigation through the information space is not only movement from one place to another, but also interaction with the meaning of information, "to reach an end point of comprehension" (p. 101).  By distinguishing comprehension from navigation, they differentiated two dimensions of navigation: movement through the space as physical navigation, and interaction with the information as semantic navigation.  Simply equating navigation with movement therefore blurs this distinction, since it attends only to the former while largely ignoring the latter, which is critically important in information use and information design.  Furthermore, retrieval and navigation are not the complete task, which also includes close reading, comprehending, and making use of what is retrieved and navigated (Dillon, 2008).
Semantic navigation thus occurs because, "as people move through information, they actively construct meaning, and make use of both explicit and implicit features of the information and its environment as guides in this process" (Kopak et al., 2011).

Navigation issues have been discussed in the context of hypertext and in specific hypertext environments such as the Web.  The nonlinear and nonsequential nature of hypertext leads to a complex information structure, and the Web, as the most widespread application of hypertext systems, retains most of the structural characteristics of hypertext.  Unlike print text, which offers indices and tables of contents as means of locating information directly, hypertext enables the reader to freely organize a personalized path to access the information.  However, its primary advantage is also its disadvantage, resulting in the "lost in space" and "disorientation" problems identified by Conklin (1987) in his seminal paper.  Conklin pointed out disorientation ("the tendency to lose one's sense of location and direction in a nonlinear document") and cognitive overhead ("the additional effort and concentration necessary to maintain several tasks or trails at one time") as two fundamental problems with hypertext (p. 40).  In the hypertext and Web environment, readers often find themselves not knowing where to go next, how to get there, or where they are in the overall structure (Edwards & Hardman, 1999; Fleming, 1998).  Storrer (2002) likewise indicated three crucial issues regarding hyperdocuments: discontinuous text processing, the lack of tangible and physical document boundaries, and the lack of a fixed text sequence.  This was echoed by Dillon (2004) in addressing the electronic document: "there is a striking consensus among many researchers in the field that this process [navigation] is the single greatest difficulty for readers of electronic text.
This is particularly (but not uniquely) the case with hypertext where frequent reference is made to 'getting lost in hyperspace'" (p. 50).

Managing navigational tasks and coordinating them with information-seeking tasks comes as additional work, which takes effort and concentration away from the content (Conklin, 1987; Vaughan & Dillon, 2006).  Navigation can thus be defined as essentially "the creation and interpretation of an internal (mental) model", comprising the activities of browsing, modeling, interpretation, and formulation of browsing strategy (Spence, 1999, p. 920).  The incomprehensibility of the information structure, rather than the complexity of the structure itself, is believed to be the underlying reason for navigation problems (Vora & Helander, 1997).  Creating coherent structures that explicate the relations between nodes is a major strategy for improving structure comprehension (Vora & Helander, 1997).  Thuring et al. (1991) maintained that "it is not enough to provide structure but it becomes necessary to notify explicitly or even explain the structure to the reader.  In order to improve both navigation and comprehension, an author has to construct hyperdocuments which enhance the perception of local and global coherence relations" (p. 165).  van Dijk and Kintsch's discourse comprehension theory, especially the notion of superstructure, "a schematic form that organizes the global meaning of a text" with rules specifying the ordering and grouping of constituent elements (Vora & Helander, 1997, p. 885), provided guidance in designing coherent hypertext structures and thus contributed to the comprehension of hypertext structure.  Thuring et al.
(1995) put forward eight principles for the cognitive design of hypermedia:

1. use typed link labels;
2. indicate equivalencies between information units;
3. preserve the context of information units;
4. use higher-order information units;
5. visualize the structure of the document;
6. include cues in the visualization of structure that show the reader's current position, the way that led to this position, and the navigational options for moving on;
7. provide a set of complementary navigation facilities covering aspects of direction and distance;
8. use a stable screen layout with windows of fixed position and default size.

These principles serve either to increase the local or global coherence of a hyperdocument, or to reduce the reader's cognitive overhead by improving orientation or facilitating navigation.  The SEPIA system's presentation interface, SPI, integrated structure and content in a simultaneous display by presenting both side by side along the horizontal dimension.  Additionally, it considered structure and content at both global and local levels by presenting both side by side along the vertical dimension.

According to Foltz's studies (1996), readers of hypertext preferred transitions within the same textual context and relied on the map and node titles to follow the previously accessed nodes.  Few differences were found between comprehending text in hypertext format and in linear format, since readers approached the different formats with the same reading strategy: maintaining coherence.  Foltz pointed out that the structure of the text and the titles of nodes influenced the reader's success in guessing "whether following a particular link will lead toward the relevant information and also be coherent with the current context" (p. 128).  In addition to an appropriate context, Foltz suggested that a clear text structure and good labels for the nodes should be provided to support readers in identifying coherent paths.
As noted by Storrer (2002), there are three types of cues: structural overviews, helping the user "to identify the main entry points and the structural backbone of the hyperdocument"; global context cues, helping the user "to identify the status of the node currently processed in the overall structure of the hyperdocument"; and local context cues, helping the user "to identify the topical relationship between two succeeding nodes".  To this end, web views (site maps) and topic maps can be provided as structural overviews; topic indicators such as node titles, node headings, and topical sentences can be used as global context cues; and link titles can be adopted as local context cues.

2.4.2 Reading Patterns

What follows is not a comprehensive review of the reading literature; it aims instead to cover studies that describe reading patterns and the associated reading purposes.  "On a psychological level individuals are more likely to make distinctions [between texts] in terms of the type of reading strategy that they employ with a text, its relevance to their work or the amount/type of information that a text contains" (Dillon, 2004, p. 94).  That is, why texts are read, what type of information they contain, and how they are read constitute three levels for distinguishing text typologies.  Reading patterns are therefore closely related to the reading purposes of different document genres.

Reading purposes of scholarly journals were specifically discussed by Tenopir and her colleagues (King & Tenopir, 1999; Rowlands, 2007; Tenopir, 2003; Tenopir et al., 2009a; Tenopir et al., 2009b), who conducted a number of studies on scholarly journal use across subject disciplines in both university and non-university settings over three decades.
Data gathered through questionnaire surveys (Tenopir et al., 2009a) showed that for university science faculty members in the US, the principal purposes for reading were research; teaching; writing articles, proposals, reports, etc.; current awareness / keeping up; continuing education; advising others; and other endeavours.  Science faculty spent more time on reading for conducting research and writing than for other purposes, and spent the least time on current awareness.  Nearly all readers thought the last reading had some effect on the principal purpose, such as inspiring new thinking or ideas, improving the result, narrowing / broadening / changing the focus, saving time or other resources, resolving technical problems, resulting in faster completion, or resulting in collaboration / joint research.  Uses of electronic journals (Tenopir, 2003), as identified by focus groups consisting of faculty and students, included keeping current with articles in the user's area of research, keeping up to date with what was published more broadly in related areas, gathering background information on a new area on which the user might be embarking, preparing for a specific event such as writing an essay or grant proposal, and performing tasks associated with teaching.

In Wilson's report (1994), information use of journal articles was much more specific, but characterized as a mixture of work tasks and information tasks.  Besides providing background information (adding to the person's general knowledge of the field, confirming or clarifying ideas, and allowing comparison with the ideas or practice of others), uses included training or personal development, practical guidance on how to do something, and specific tasks such as writing a report, providing the basis for a project, or being quoted to support a point made in a meeting.
Dillon (2004) attributed journal reading to two kinds of reasons: work reasons, such as keeping up with the literature, as a source of reference, and as a source of learning; and personal reasons, i.e., out of interest with no immediate work requirement.

Adler et al. (1998) identified ten categories of reading activity observable in the reading of any kind of document.  These ranged from lightweight sorts of reading, such as reading in order to identify, skimming, and reading one's own text as a reminder, to more intensive sorts of reading, such as reading to learn and reading to edit or critically review text.  The reading activities included reading in groups, such as reading to support listening and reading to support discussion.  They might address different goals, such as reading for cross-referencing, reading to search / answer questions, and reading to self-inform.  The activity occurring most frequently was found to be reading for cross-referencing, followed by reading to search / answer questions, reading to support discussion, and skimming.

Dillon (2004) distinguished three clusters of texts, with those in the same cluster sharing similar characteristics.  For example, journals and textbooks belonged to the same cluster of work-related material, characterized as being read repeatedly and for long-term information.  Dillon discussed the patterns of reading academic journals in contrast to reading software manuals.  For journal reading, readers normally skim the title and authors, and scan the abstract and main sections.  Then the important sections are read non-linearly to extract relevant information, or the text is read serially from start to finish.  Dillon also noted that readers tended to read more of the Introduction and the Discussion: "most readers reported also browsing the start of the introduction before flicking through the article to get a better impression of the contents …
Browsing the conclusions also seems to be a common method of extracting central ideas from the article and deciding on its relevance" (p. 109).  By contrast, software manual reading involves checking the index or contents sections to find something relevant, then dipping into and scanning sections of text; lengthy serial reading is rare.

Loizides and Buchanan (2009), testing with PDF files, identified four reading patterns representing the user's initial judgment of a document's potential relevance to an information need: the step-up pattern, flatline pattern, mountain pattern, and beginning-and-end pattern.  The dominant behavior is the step-up pattern, where the initial part of the document, including the title, abstract, and a large part of the Introduction, is viewed for a prolonged period, whereas other parts of the document receive periodic spans of attention.  The second most common behavior is the flatline pattern, where readers simply scrutinize the first page or visible part of the document.  In the mountain pattern, readers scrutinize the document from beginning to end, and then return to an already viewed part for a longer timespan.  In the beginning-and-end pattern, only two parts of the document, the beginning and the conclusion, are viewed.  Document content and visual features were reported to have an impact on users' reading behavior; certain document features, such as the initial page, headings, the conclusion section, and some graphical content, gained more attention.

Some other studies discuss reading patterns without distinguishing the materials involved.  Addressing reading of any genre (e.g., book, newspaper, novel) in any medium (e.g., electronic, print), Marshall (2009) specified six reading types: reading, skimming, scanning, glancing, seeking, and rereading.  The first of these, reading, is defined as "canonical careful reading.  The reader traverses the text linearly.
The aim is comprehension" (p. 20).  Marshall described a space of reading with two dimensions: active reading, with a purpose in mind, and immersive reading, with focused attention.  For example, Marshall considered a lawyer reading key cases and viewing a video deposition to be engaged in a high level of both active and immersive reading.  The other five reading types are more like selective reading.  Glancing is an activity performed while turning pages, aiming to detect important page elements.  Seeking is quickly scanning for a particular page element, with the aim of comprehending it fully.  Skimming is linear traversal to get a general impression of the text, while scanning is non-linear traversal to locate specific information.  Rereading deserves mention because some skimming and scanning occur before or after in-depth reading.

The above studies show that document genres inform reading purposes, that reading patterns vary with those purposes, and that people tend to navigate and manipulate document features in order to use their contents.

2.4.3 Information Use

Though information use is one dimension of information behavior, it has been the least studied and understood (Vakkari, 1997).  It is true that "the issues raised by the design of electronic multimedia documents such as hypertexts draw attention to how little we understand the range of cognitive processes recruited when people are working with written materials.  Indeed they highlight how little we seem to know about reading for information-use as distinct from reading in order to learn or reading for pleasure" (Wright, 1993).  The ISIC (Information Seeking in Context) conference papers were classified by Vakkari (1997) into three groups indicating the aspects of information use: information use to solve problems, information use to change individuals' understanding, and information use in groups.
Similarly, in addressing information use in the organizational context, Choo (2002) stated that "information use may result in the making of meaning or the making of decisions.  In either case, the use of information is a social process of inquiry that is fluid, reciprocal, and iterative" (p. 58).

Wilson (2000) defined information use behavior as that which "consists of the physical and mental acts involved in incorporating the information found into the person's existing knowledge base" (p. 50).  In Dervin's sense-making approach (1992), information is used to bridge the gap and achieve the outcome; information use is a process in which the user tries to make sense of a discontinuous reality through a series of internal behaviors (comparings, categorizings, polarizings, stereotypings, etc.) and external behaviors (ignorings, agreeings, disagreeings, attendings, etc.).  As noted by Choo et al. (2000), "information use occurs when the recipient processes information by engaging mental schemas and emotional responses within a larger social and cultural context.  The outcome of information use is a change in the individual's state of knowledge (increase awareness, understand a situation), or capacity to act (solve a problem, make a decision, negotiate a position)" (p. 14).  Choo et al. (2000) conducted a comprehensive review of information use, as one of three elements in information-seeking activity, from three perspectives: cognitive, affective, and situational.  They noted that an individual's cognitive styles and preferences influence the way information is used: for example, field-dependent individuals tend to respond uncritically to environmental cues while field-independent individuals do not, and innovators are more likely than adaptors to challenge existing paradigms.  The affective factors that influence information use include avoiding embarrassment, conflict, and regret; maintaining self-image; and enhancing status and reputation.
At the situational level, information use is influenced by the rules and routines structuring the tasks, and by organizational culture and information politics.  Saracevic and Kantor (1997) presented a three-step model of information use in addressing library and information services: acquisition (getting information, or objects potentially conveying information, as related to some intention); cognition (absorbing, understanding, and integrating the information); and application (use of this newly understood and cognitively processed information).  However, almost all discussions of information use remain at a conceptual level.

Taylor (1991) is one of the few who discussed information use in concrete terms, categorizing it into eight classes: projective, motivational, personal or political, factual, confirmational, enlightenment, problem understanding, and instrumental.  These eight classes were interpreted by Choo et al. (2000) as: project future events; motivate or sustain personal involvement; develop relationships, enhance status, reputation or personal fulfillment; get the facts about something; confirm another item of information; develop a context; understand a particular situation; and know what and how to do something.  Though these classes come from a high-level observation of the ways in which people use information, they do distinguish between the different uses.

2.5 SUMMARY

The preceding review discusses the background and motivation behind this research.  There is a need to help readers attend to the more specific information within an article.  The few studies on document component use indicate that this is one way to tackle the above challenge in reading.  However, literature on internal document components is sparse compared with the volume of work focusing on whole documents.
A theoretical approach is needed in order to better understand the impact of utilizing document components on the reading outcomes of information use, and on the reading process composed of locating and consuming information. Move analysis, though focusing on granular genre characteristics, has been used to analyze the structural conventions for writing. Introducing move analysis into genre research in information studies enables a better understanding and utilization of document components.

Relevance Theory, proposed by Sperber and Wilson, provides a framework to interpret and predict how people cognitively process verbal communication, which offers insight into the organization and presentation of the relevant information contained in document components.

A review of existing research that directly investigated the effect of genre information on information seeking and use provided little clear evidence concerning the utility of document components. Of the few studies on document component use reviewed, some identified the functions of semantic article components based on users' conceptions, while others focused on the individual retrieval of physical components from an article. It is claimed here that the real use of document components depends, perhaps, on identifying the functions of the smallest information units and organizing these units properly for information use. This dissertation analyzed and empirically tested the functional units within article components and their associations with information use tasks. The results, as presented in the following chapters, will help to fill the gaps in this area.

3 DEVELOPING A FUNCTIONAL UNIT TAXONOMY: METHODS

3.1 OVERVIEW

This chapter provides a detailed description of the procedures and methods used in the first phase of the research.
The objective of this phase was to identify and validate the relationships between functional units and information tasks using scholarly journal articles, in order to develop a functional unit taxonomy. The research questions addressed in this phase are:

Research Question 1: What are the most common functional units within psychology journal articles?

Research Question 2: How are functional units related to different tasks requiring use of information in psychology journal articles?
2.1 How are the IMRD components of a journal article related to different information tasks?
2.2 How are the functional units in a component of a journal article related to different information tasks?

Research Question 3: How are functional units related to each other for a particular task requiring use of information in psychology journal articles?
3.1 For a particular information task, which functional unit is first attended to?
3.2 For a particular information task, how is a functional unit related to other functional units of the same component?
3.3 For a particular information task, how is a functional unit related to other functional units of different components?

Table 3.1 shows the data sources and methods of analysis used in Phase I. The approach to studying functional units in the context of information use was, first, to identify common information tasks using scholarly journal articles from the literature and to identify the functional units within psychology journal articles. A survey was then conducted to validate these sets of information tasks and functional units, and another survey to validate the relationships between functional units and information tasks, and furthermore the relationships among a set of functional units for a particular task. Content analysis was used to identify functional units from the literature, and a statistical analysis was carried out on data from the validation surveys.
Table 3.1: Data source and analysis for Research Questions 1, 2 & 3

RQ1. Data source: literature; sample articles. Data collection: move analysis of journal article components. Analysis: content analysis.
RQ2. Data source: online survey I. Data collection: ratings & ranking of information tasks; ratings of functional units. Analysis: statistical analysis.
RQ3. Data source: online survey II. Data collection: ratings & ranking of relationships between functional units and information tasks. Analysis: statistical analysis.

3.2 IDENTIFYING INFORMATION USE TASKS AND FUNCTIONAL UNITS

As stated in Section 1.3, it is through tasks, the activities undertaken to accomplish something, that we may investigate the effect of functional units. There was therefore a need for a set of information tasks representative of using scholarly journal articles; these were identified from Taylor's information use model and other relevant literature. Through a review and analysis of the literature, the functional units existing within journal articles in the field of psychology were identified, in preparation for the subsequent validation surveys.

3.2.1 Identifying Information Use Tasks

Most studies on tasks (Bystrom & Hansen, 2005; Li & Belkin, 2008) discuss work tasks, information seeking tasks and information search tasks, while attending less to information use tasks. No systematic study discussing information tasks associated with journal article use has been found in the literature. Therefore, the information tasks using scholarly journal articles in this study were identified from the relevant literature in two areas: the reading of scholarly journals and Taylor's information use model.
The six information use tasks identified in this study and their descriptions are as follows:

Keeping up: To keep current with articles in the user's area of research
Refer to facts: To consult specific factual information, e.g., data, phenomena
Refer to arguments: To consult arguments, ideas or suggestions supporting a point made by the user
Learn about background: To get to know a new area on which the user is embarking
Learn about particular: To understand a particular problem with its details and associated interpretation, judgment, etc.
Learn how to: To learn how to do something, e.g., an operation, a procedure

Notwithstanding variations in the literature (Dillon, 2004; King & Tenopir, 1999; Tenopir, 2003; Tenopir et al., 2009a, 2009b; Wilson, 1994), "keeping up", "reference", and "learning" were most frequently addressed as purposes for using scholarly journal articles in academic work. However, purposes cannot stand in for tasks, which are the activities people engage in to achieve a goal. Furthermore, the three purposes "keeping up", "reference" and "learning" were broad and needed to be mapped to an information use model.

As shown in Table 3.2, Taylor's eight classes of information use (1991) provide a general framework to characterize the ways in which people use information: projective, motivational, personal or political, factual, confirmational, enlightenment, problem understanding, instrumental. Taylor's information use model addresses information use in the general sense rather than information use with specific reference to journal articles. Thus I further classified each of the eight classes as "keeping up", "reference" or "learning", with those that shared a category providing a variant focus: projective, motivational, and personal or political were categorized under "keeping up"; factual and confirmational under "reference"; enlightenment, problem understanding, and instrumental under "learning".
Thus Taylor's eight classes of information use fit readily into the three general purposes. Taylor's motivational and personal or political classes arise for personal reasons and thus were not considered here. The other six classes, which address work-related information use, were further adapted to suit the specific context of scholarly journal article use.

Table 3.2 presents Taylor's eight classes of information use (1991), their interpretations by Choo et al. (2000), and the six information use tasks of scholarly journal articles adopted in this work. For example, Taylor's information class "projective" has been adapted as the "Keeping up" task in using journal articles. Its original reference, as interpreted by Choo et al., is very general: "information is used to predict what is likely to happen in the future. Projective information use is typically concerned with forecasts, estimates, and probabilities", while "Keeping up" refers to keeping current with articles in the user's area of research, thus maintaining the same meaning as "projective" but more specific to the information use of scholarly journal articles.

Table 3.2: Six information use tasks adapted from Taylor's model

Projective (Taylor, 1991). Choo et al. (2000): Information is used to predict what is likely to happen in the future. Projective information use is typically concerned with forecasts, estimates, and probabilities. Purpose: Keeping up. Task: Keeping up.

Motivational (Taylor, 1991). Choo et al. (2000): Information is used to initiate or sustain personal involvement, in order to keep moving along on a particular course of action. Purpose: -. Task: -.

Personal or political (Taylor, 1991). Choo et al. (2000): Information is used to develop relationships; enhance status, reputation, personal fulfillment. Purpose: -. Task: -.

Factual (Taylor, 1991). Choo et al. (2000): Information is used to determine the facts of a phenomenon or event, to describe reality. Factual information use is likely to depend on the actual and perceived quality of the information that is available. Purpose: Reference. Task: Refer to facts.

Confirmational (Taylor, 1991). Choo et al. (2000): Information is used to verify another piece of information. Confirmational information use often involves the seeking of a second opinion. Purpose: Reference. Task: Refer to arguments.

Enlightenment (Taylor, 1991). Choo et al. (2000): Information is used to develop a context or to make sense of a situation. Purpose: Learning. Task: Learn about background.

Problem understanding (Taylor, 1991). Choo et al. (2000): Information is used in a more specific way than enlightenment; it is used to develop a better comprehension of a particular problem. Purpose: Learning. Task: Learn about particular.

Instrumental (Taylor, 1991). Choo et al. (2000): Information is used so that the individual knows what to do and how to do something. Under some conditions, instrumental information use requires information use in other classes. Purpose: Learning. Task: Learn how to.

3.2.2 Identifying Functional Units

As discussed in Section 1.1, functional units are the smallest information units embedded in the Introduction, Methods, Results, and Discussion components of a journal article, each of which serves a distinct communicative function. Since the genre of scholarly journal articles is more stable than that of web genres, and existing move analyses of IMRD provide sets of "moves" and "steps", I identified the functional units within psychology journal research articles in the following way:
(1) examined existing move structures from well-acknowledged models;
(2) identified a preliminary set of functional units by refining the existing move structures;
(3) tested the framework of functional units developed above through an analysis of twelve psychology journal articles.

Examining existing move structures from well-acknowledged models

The rhetorical structures of the Introduction, Results, and Discussion components were examined across the existing move models in the literature. About four models for each component were examined, including generally acknowledged models upon which subsequent research on other disciplines and corpora has been based. Swales's
CARS model of the Introduction is the origin of other move analysis studies. Brett's analysis of Results covers both the presentation of results and commentary on them. Hopkins and Dudley-Evans's analysis of Discussion contains the major functions of the discussion component. These models were used as prototypes, complemented with move analyses from other works. However, there were few existing move models to refer to for the Methods component, because this component is highly discipline-specific. Therefore, the set of functional units within Methods was developed directly from a corpus of psychology journal articles. The existing move structures used in this study are listed in Appendix 1, presented in separate tables for the Introduction, Results and Discussion components.

Identifying a preliminary set of functional units by refining existing move structures

By putting together and integrating these existing move structures of the IRD components it was possible to obtain an extensive taxonomy. However, to ensure that the categories were mutually exclusive and to have a manageable number of functional units for implementation in an information system, the initial taxonomy needed to be reduced and refined. Some studies indicated steps under each move while other studies did not. Since not all models of functional units distinguished between macro (moves) and micro (steps) functions, for the sake of a parallel comparison I took the smallest units of each move structure as the basis for the identification of functional units. Based on the descriptions and examples in the literature, the functional units were refined as follows:

- Duplicate or similar ones were merged. E.g., in the Introduction component, "summarizing methods" (Swales, 1990, 2004; Lewin et al., 2001), "reference to main research procedure" (Nwogu, 1997) and "describing procedures" (Kanoksilapatham, 2005) were merged into one functional unit entitled "summarize methods"
and defined as "summarizing the methods used"

- Those judged supplementary were integrated with more dominant ones. E.g., "justifying hypothesis" (Lewin et al., 2001) and "positing an ideal way to fill the gap that has just been created" (Lewin et al., 2001) were integrated with the more dominant functional unit "presenting RQs or hypotheses" (Swales, 1990, 2004), retitled "present hypotheses" and defined as "presenting hypotheses or research questions"; "adding to what is known" (Swales, 1990, 2004) was combined with "indicating a gap" (Swales, 1990, 2004) as one functional unit, with the more common title "indicate a gap in previous research" and a definition covering both situations, "pointing out deficiencies in the present state of knowledge"

- Those judged impractical or vague were removed. E.g., "metacomments" (Lewin et al., 2001) was a very micro-level analysis, ambiguous in meaning, and thus was discarded

Thus, beginning with the moves and steps illustrated in Appendix 1, a preliminary set of functional units was developed, with 13 functional units in the Introduction component, 14 in the Results component, and 15 in the Discussion component.

Testing the framework of functional units through an analysis of twelve psychology journal articles

Since the framework of functional units developed in steps 1 and 2 was derived from "moves" and "steps" in articles of various disciplines, I applied it to the Introduction, Results, and Discussion components of twelve sample articles in order to assess its applicability to the psychology domain. From a graduate psychology course reading list (PSYC 583: Special Topics in Cognition, Spring 2006), I selected twelve articles (see Appendix 2) according to the following criteria: original research articles (excluding review articles and theory articles) in scholarly journals (excluding proceedings papers and book chapters), in reverse chronological order of publication year.
None of the articles had identical authors. In the subsequent validation study, the functional units, except those related to the Methods component, were derived from the literature, and participants had an opportunity to suggest additional items. Therefore, twelve articles were deemed sufficient to represent the psychology research article genre here. These twelve articles from a graduate course reading list were published in seven journals, yet they shared genre characteristics despite variations in length and reading level. From an informal interview with two expert users in the Department of Psychology at UBC, and from an examination of psychology journals in Journal Citation Reports: Social Science Edition (2009), no substantial structural differences were observed between research articles in cognitive psychology and those in other areas of psychology. Therefore, the genre characteristics of these sample articles were judged to be generalizable to the broader psychology discipline.

The rhetorical structure of the four IMRD components within these articles was analyzed. Other components, such as the abstract and references, were excluded from the analysis. For the IRD components, the functional units identified above were used as a code book for coding the sample articles. For the Methods component, functional units were identified directly from the corpus. The coding unit adopted by traditional genre analysis for the purpose of academic writing is the sentence or clause. However, one can hardly overlook the surrounding text while viewing a sentence. Therefore, a single paragraph was taken as the coding unit and was assigned at least one and at most three distinct functional unit values, provided that these functions were equally important for the paragraph.
The structure of psychology journal articles shows that they usually start with an Introduction without a subheading, continue with the reporting of several experiments, and end with a General Discussion after the last experiment. For each experiment there is a separate account with a brief Introduction, followed by Methods, Results, and Discussion components. The preliminary set of functional units for the Results component included both factual and commentary statements, which were used respectively in coding the Results and Discussion of each experiment in a sample article. The preliminary set of functional units for the Discussion component was used in coding the contents of the General Discussion after the last experiment in an article.

The functional units were counted based on their frequency in the four internal components of the twelve sample articles. The frequency distribution of the functional units demonstrated how often they occur in these articles, reflecting the genre conventions of psychology journal research articles.

Since the identification of functional units is influenced by an individual's comprehension of the text and understanding of the textual functions, inter-coder reliability procedures were performed to ensure reliability. Six randomly chosen articles (#1, #3, #5, #8, #10, #12) from the corpus were coded by a second coder who had taken psychology classes. The second coder independently coded the articles after a training session on the coding scheme, and discussed her practices with the researcher to resolve uncertainties in applying the various codes. Each coder assigned a primary functional unit value to each paragraph of the six articles. Inter-coder reliability was determined by Cohen's kappa value and percentage agreement.
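For reference, percentage agreement is the share of coding units on which the two coders assign the same code, and Cohen's kappa corrects that observed agreement p_o for the agreement p_e expected by chance given each coder's marginal code frequencies: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch of both statistics follows; the coder labels shown are hypothetical examples, not data from the study corpus.

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of coding units on which the two coders agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each coder's marginal code frequencies
    p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical paragraph-level codes assigned by two coders
coder1 = ["state findings", "metatext", "state findings", "explain findings"]
coder2 = ["state findings", "metatext", "summarize results", "explain findings"]

print(percent_agreement(coder1, coder2))          # 0.75
print(round(cohens_kappa(coder1, coder2), 3))     # 0.667
```

Kappa is lower than raw agreement here because some agreement would be expected by chance alone, which is why both statistics are reported in Table 3.4.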
Because each component varied in length and structural regularity, Cohen's kappa value and percentage agreement were measured for each component as well as for all four components together.

In total, 52 functional units were identified: 13 in the Introduction component, 10 in the Methods component, 14 in the Results component, and 15 in the Discussion component. Table 3.3 presents the preliminary set of functional units within the IMRD components.

Table 3.3: A preliminary set of functional units

Introduction (sources: Swales, 1990, 2004; Lewin, Fine & Young, 2001; Nwogu, 1997; Kanoksilapatham, 2005)
Claim importance of topic: Showing that the research area is important, central, interesting, problematic, or relevant in some way
Narrow down topic: Increasingly narrowing down the topic of the research
Review previous research: Reviewing items of previous research in the area
Indicate a gap in previous research: Pointing out deficiencies in the present state of knowledge
Provide reason to conduct research: Providing positive reasons for conducting the research reported
Point out contribution of previous research: Pointing out the contribution of previous research
Introduce present research: Outlining purposes of the present research
Present hypotheses: Presenting hypotheses or research questions
Clarify definition: Discussing the definitions of key terms
Summarize methods: Summarizing the methods used
Announce principal outcomes: Announcing principal findings
State value of present research: Stating the value of the present research
Outline structure of paper: Indicating the structure of the research paper

Methods (no literature sources; developed directly from the corpus)
Relate to prior/next experiments: Describing own experiments prior/subsequent to the current experiment being discussed
Justify methods: Stating the rationale for the decision to use particular experimental methods, procedures, or techniques
Preview methods: Previewing methods in the following experiment
Predict results: Predicting results in the following experiment
Describe participants: Describing the participants in the study
Describe materials: Describing materials/stimuli in the study
Describe tasks: Describing tasks in the study
Outline experimental procedures: Describing the procedures of an experiment
Present variables: Describing variables in the study
Outline data analysis procedures: Describing the procedures used in data analysis

Results (sources: Brett, 1994; Thompson, 1993; Nwogu, 1997; Kanoksilapatham, 2005)
Metatext: Indicating the order and content of the text which follows
Describe analysis conducted: Explaining how and why data have been produced
Restate hypotheses: Restating the aims of the research, or creating further hypotheses from the findings
State findings: Extracting meaning from the numerical data with a written statement about it
State additional findings: Stating the data that neither support nor conflict with the major finding
State non-validated findings: Accounting for the data that do not support the major finding
Explain findings: Suggesting reasons for the findings
Evaluate findings: Evaluating findings against those from previous research
Evaluate hypotheses: Evaluating findings with regard to the hypotheses
Summarize results: Summarizing a number of results and explanations
Raise further question(s) by finding: Probing a finding or raising questions about shortcomings of a finding
Indicate implications of findings: Providing ideas about the implications and present/future consequences of the finding
Admit interpretative perplexities: Admitting difficulty in explaining results
Call for further research: Emphasizing the need for future research in order to elucidate problems of interpretation

Discussion (sources: Hopkins & Dudley-Evans, 1988; Holmes, 1997; Lewin, Fine & Young, 2001; Dubois, 1997; Nwogu, 1997; Kanoksilapatham, 2005)
Recapitulate present research: Strengthening the discussion by recapitulating main points such as research questions, aims and purposes, theoretical or methodological information
Provide established knowledge of topic: Describing established knowledge on the topic
Metatext: Indicating the order and content of the text which follows
Highlight overall outcome: Highlighting the overall research outcomes
Indicate (un)expected outcome: Commenting on whether the results are expected or not
Compare results with previous research: Referring to previous research for comparison
Interpret outcome: Explaining specific research outcomes
Support explanation of results: Claiming support for explanation by exemplifying or citing
Generalize results: Making a claim about the generalizability of the particular results
Recommend future research: Making suggestions for future research
Outline parallel or subsequent developments: Summarizing data additional to that given in the main body of the article
Indicate significance of outcome: Indicating the significance of the outcome
Ward off counterclaim: Providing arguments in response to potential criticism raised by the reader
Indicate limitations of outcome: Indicating limitations of outcomes
Evaluate methodology: Evaluating the effectiveness of the methodology in hindsight

Following the coding of functional units within the twelve articles, frequencies of occurrence of each functional unit within each component were computed. Results show that 10 of the 13 functional units in the Introduction were in use, 9 of the 14 in Results were in use, and 14 of the 15 in Discussion were in use. In a couple of articles, some functional units were not found in the components where they might have been expected, but did appear in other components. For example, the functional units "preview methods", "justify methods" and "relate to prior/next experiments", which normally occurred in the Methods component, occasionally emerged in the Discussion of an experiment.
In all, the initial functional unit framework developed above was able to cover almost all functional units in psychology journal articles. The frequency distribution of the functional units in the twelve sample articles shows that one functional unit is dominant per component: "review previous research" in Introduction, "outline experimental procedures" in Methods, "state findings" in Results, "support explanation of results" in Discussion, and "compare results with previous research" in General Discussion.

Inter-coder reliability of the functional units identified is shown in Table 3.4 below. A kappa value of .5 represents moderate agreement, above .7 represents good agreement, and above .8 represents very good agreement. Kappa values ranged from .865 to .958, and percentage agreement from 88.31% to 96.64%, showing a high level of agreement in identifying these functional units across the IMRD components.

Table 3.4: Inter-coder reliability of functional units identified

Component      Number of coded units   Agreed coded units   Kappa value   Percentage
Introduction   66                      60                   .877          90.91
Methods        149                     144                  .958          96.64
Results        128                     115                  .867          89.84
Discussion     77                      68                   .865          88.31
Total          420                     387                  .917          92.14

3.3 VALIDATION STUDY

The framework of functional units stated above was identified from the literature, and required validation by users of psychology journal articles. Two surveys were conducted through online questionnaires to validate the findings of the first phase with psychology users and to refine the preliminary taxonomy. Survey I was conducted to validate that the information use tasks and functional units were typical in the case of psychology journal articles. The purpose of Survey II was to validate that the functions of functional units were recognizable and meaningful for the information use tasks.
In advance of participant involvement in Phase I and Phase II, a detailed description of the study purpose and the instruments used in the study was submitted to, and approved by, the Behavioural Research Ethics Board (BREB) at the University of British Columbia.

3.3.1 Validating Functional Units and Information Use Tasks

Participants

From mid-June to mid-July 2009, email advertisements (see Appendix 3) were sent to the graduate student listservs of the Departments of Psychology at both the University of British Columbia and Simon Fraser University. Psychology graduate students were recruited because they were expected to be experienced in using scholarly journals, and more accessible as study subjects than faculty members. Each participant was compensated with $10 for completing two online surveys, each of which took approximately 30 minutes. Thirteen people participated in Survey I.

The thirteen participants, eleven female and two male, included six PhD students, five Masters students, one postdoctoral fellow and one PhD graduate. Most of them specialized in cognitive psychology or clinical psychology. Three were under 26, eight were in the age range 26-30, one was in the range 31-35, and one was in the range 36-40. Three people had used journal research articles (non-specified) for 6 years, three for 7 years, two for 8 years, two for 10 years, and one each for 4, 5 and 18 years. One participant reported using journal research articles once a month, three used them 2-3 times a month, one used them once a week, two used them 2-3 times a week, and six used them daily.

Instruments

In Survey I (see Appendix 4), the participants were first asked to indicate how frequently they used journal articles for the six information use tasks listed, by rating them on a seven-point Likert scale (1 = Never, 7 = Very Frequently) and also by ranking them (1 = Most Frequently). They were also free to suggest tasks other than those provided.
Survey I included all 52 functional units identified: 13 in Introduction, 10 in Methods, 14 in Results, and 15 in Discussion. To minimize misinterpretation of the functional units, a one-sentence definition was provided for each in place of a title. Each definition of a functional unit was listed as a separate item for rating. Participants were asked to indicate how frequently they thought each functional unit typically occurred in the Introduction, Methods, Results, and Discussion components of a psychology journal article. They indicated the level of frequency on a five-point Likert scale (Never, Rarely, Occasionally, Very Frequently, Always). They were also free to suggest other functional units that they thought frequently occurred within a particular component but were not in the list.

3.3.2 Validating Relationships between Functional Units and Information Use Tasks

Participants

Part II of the validation study included nine participants, all of whom had participated in Survey I.

Instruments

The purpose of Survey II was to validate how useful each functional unit was for a task. The questionnaire (Appendix 5) presented participants with six scenarios: Refer to facts, Learn about background, Refer to arguments, Learn about particular, Keeping up, Learn how to. Given a scenario, the participants were asked to rate the usefulness of the functional units within the IMRD components on a five-point Likert scale (1 = Not Useful at All, 5 = Highly Useful). They also ranked the six most useful functional units within a component by putting 1 next to the most important, and so on. Based on the responses to Survey I, Survey II included 41 functional units: 11 in Introduction, 10 in Methods, 7 in Results, and 13 in Discussion.
3.4 SUMMARY

A preliminary set of 52 functional units was identified from the literature, from which a refined set of 41 functional units was determined by journal users. To examine the functions of these smallest information units, a set of representative tasks involved in the use of information in journal articles was identified and validated. The above sets of functional units and tasks enabled validation of the relationships between the functional units in the four components and the information use tasks of psychology journal articles. Two surveys were carried out to validate the prevalence of the identified functional units in psychology journal articles, and the relationships between the functional units and each information use task. In the next chapter, the findings of the validation studies and the resulting taxonomy of functional units are presented.

4 DEVELOPING A FUNCTIONAL UNIT TAXONOMY: RESULTS

4.1 OVERVIEW

This chapter reports the results of the two validation surveys. In the first survey, the information use tasks and functional units identified from the literature were validated by psychology graduate students via an online survey. The second survey validated how a refined list of functional units was related to different information use tasks, and how the functional units were related to each other for a particular task.

4.2 RESULTS OF SURVEY I

4.2.1 Information Use Tasks

The participants rated (1 = Never, 7 = Very Frequently) and ranked (1 = Most Frequently) how frequently they used journal articles for the six information tasks. Table 4.1 shows a high level of consistency between the mean scores of task rating and task ranking: "Learn about background" and "Refer to facts" were most frequent, followed by "Refer to arguments" or "Learn about particular", ending with "Keeping up" and "Learn how to". Although some additional tasks were suggested by participants, most were judged similar to one of the six information tasks provided, e.g.,
one addressed the information search task ("to find material to cite in my papers"), and another ("general literature review") was covered by the "Learn about background" task.  The low scores of the "Keeping up" and "Learn how to" tasks may reflect the fact that students may not monitor the literature in their area or oversee the research process as faculty do.

Table 4.1: Mean frequency scores of 6 information use tasks

     Rating (1 = Never, 7 = Very Frequently)      Ranking (1 = Most Frequently)
1    Learn about background    6.23               Learn about background    2.00
2    Refer to facts            6.00               Refer to facts            2.77
3    Learn about particular    5.62               Refer to arguments        3.77
4    Refer to arguments        5.23               Learn about particular    4.00
5    Keeping up                4.77               Keeping up                4.23
6    Learn how to              4.23               Learn how to              5.08

4.2.2 Functional Units

The participants rated how frequently each listed functional unit occurred in the four components on a five-point scale.  To distinguish the occurrence frequencies of these functional units, the functional units of each of the four internal components were placed in one of three categories according to their mean scores, from high to low: 4.0-5.0 (Very Frequently to Always), 3.0-3.9 (Occasionally to Very Frequently), or 2.0-2.9 (Rarely to Occasionally).  As shown in Table 4.2, the three categories differentiate the frequency with which these functional units occurred within an individual component.  Only two of the functional units had a standard deviation higher than 1.0, suggesting that participants' ratings were quite consistent.

Functional units that received mean scores of 2.0-2.9 (Table 4.2, right column), indicating rare or occasional occurrence, were filtered out from further study.  The Introduction, Methods, and Discussion components each had two functional units with such low scores.  However, nine functional units were found to occur rarely or occasionally in the Results component.
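The binning and filtering rule described above can be sketched in Python.  This is a minimal illustration, not part of the study's procedure: the thresholds follow the text, while the unit names and scores shown are a small subset drawn from Table 4.2.

```python
def frequency_category(mean_score):
    """Map a mean rating on the 5-point scale to a frequency band."""
    if mean_score >= 4.0:
        return "4.0-5.0 (Very Frequently - Always)"
    if mean_score >= 3.0:
        return "3.0-3.9 (Occasionally - Very Frequently)"
    return "2.0-2.9 (Rarely - Occasionally)"

# A small subset of the Introduction units from Table 4.2 (illustrative).
introduction_units = {
    "claim importance of topic": 4.62,
    "state value of present research": 3.92,
    "outline structure of paper": 2.38,
}

# Units in the 2.0-2.9 band are filtered out from further study.
retained = {u: s for u, s in introduction_units.items() if s >= 3.0}

for unit, score in introduction_units.items():
    print(unit, "->", frequency_category(score))
```

Here "outline structure of paper" (2.38) falls in the 2.0-2.9 band and is dropped, matching the filtering described in the text.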
The functional units for the Results component were derived from the literature and thus included commentary statements as well as factual statements.  Almost all functional units serving commentary purposes overlapped with those in the Discussion component, such as "explain findings", "raise further question(s) by finding", "admit interpretative perplexities", "evaluate findings", "call for further research", and "indicate implications of findings".

Table 4.2: Mean scores of 52 functional units

Introduction
  4.0-5.0: claim importance of topic (4.62); review previous research (4.62); present hypotheses (4.54); introduce present research (4.46); narrow down topic (4.15); indicate a gap in previous research (4.08); provide reason to conduct research (4.00)
  3.0-3.9: state value of present research (3.92); point out contribution of previous research (3.69); clarify definition (3.62); summarize methods (3.31)
  2.0-2.9: announce principal outcomes (2.69); outline structure of paper (2.38)

Methods
  4.0-5.0: describe materials (4.92); outline experimental procedures (4.77); describe tasks (4.77); present variables (4.69); describe participants (4.62); outline data analysis procedures (4.08)
  3.0-3.9: preview methods (3.62); justify methods (3.31)
  2.0-2.9: predict results (2.92); relate to prior/next experiments (2.67)

Results
  4.0-5.0: (none)
  3.0-3.9: state findings (3.69); describe analysis conducted (3.46); summarize results (3.46); state additional findings (3.23); evaluate hypotheses (3.15)
  2.0-2.9: restate hypotheses (2.85); state non-validated findings (2.77); metatext (2.77); explain findings (2.69); raise further question(s) by finding (2.54); admit interpretative perplexities (2.38); evaluate findings (2.38); call for further research (2.08); indicate implications of findings (2.00)

Discussion
  4.0-5.0: highlight overall outcome (4.77); interpret outcome (4.54); indicate (un)expected outcome (4.54); recapitulate present research (4.38); indicate significance of outcome (4.38); compare results with previous research (4.38); indicate limitations of outcome (4.23); recommend future research (4.23); support explanation of results (4.15); ward off counterclaim (4.08)
  3.0-3.9: generalize results (3.85); provide established knowledge of topic (3.85); evaluate methodology (3.38)
  2.0-2.9: outline parallel or subsequent developments (2.92); metatext (2.92)

Table 4.3: Validated taxonomy of functional units

Introduction
  On Survey II: claim importance of topic; narrow down topic; review previous research; indicate a gap in previous research; provide reason to conduct research; point out contribution of previous research; introduce present research; present hypotheses; clarify definition; summarize methods; state value of present research
  On Survey I but not on Survey II: announce principal outcomes; outline structure of paper

Methods
  On Survey II: relate to prior/next experiments; justify methods; preview methods; describe participants; describe materials; describe tasks; outline experimental procedures; present variables; outline data analysis procedures; present reliability/validity
  On Survey I but not on Survey II: predict results

Results
  On Survey II: describe analysis conducted; restate hypotheses; state findings; state additional findings; state non-validated findings; evaluate hypotheses; summarize results
  On Survey I but not on Survey II: metatext; explain findings; raise further question(s) by finding; admit interpretative perplexities; evaluate findings; call for further research; indicate implications of findings

Discussion
  On Survey II: recapitulate present research; provide established knowledge of topic; highlight overall outcome; indicate (un)expected outcome; compare results with previous research; interpret outcome; support explanation of results; generalize results; recommend future research; indicate significance of outcome; ward off counterclaim; indicate limitations of outcome; evaluate methodology
  On Survey I but not on Survey II: outline parallel or subsequent developments; metatext

Thus I dropped all functional units that occurred rarely or only occasionally in the Results component except the top two, "restate hypotheses"
and "state non-validated findings", which had no counterparts in Discussion.  I also dropped those that occurred rarely or occasionally in the other three components, with one exception: "relate to prior/next experiments", which was observed with high frequency in the prior identification and was temporarily kept for further examination.  Additionally, an item suggested by a participant, "present reliability/validity", was added to the Methods component in Survey II.  The result is the 41 functional units on Survey II, as shown in the second column of Table 4.3.

4.3 RESULTS OF SURVEY II

Participants rated and ranked how useful a functional unit was for each of the six information use tasks.  To examine the variations in perceived usefulness of functional units within the four components for the six tasks, a multivariate analysis of variance was conducted.  A large inconsistency in the responses for the task "Learn about particular" on Survey II indicated that participants had difficulty understanding that task, so "Learn about particular" was removed from further analysis.  Table 4.4 presents the mean scores and results of significance tests for the functional units for each of the remaining five information use tasks.  A group of functional units within a component showed significant differences for some of the tasks.  Post-hoc tests were then conducted to identify which pairings of functional units differed significantly in means.  Where the assumption of homogeneity of variances within groups was violated, the Games-Howell test was used as the post-hoc test instead of Tukey's HSD test.  The post-hoc test results show that the usefulness of a functional unit or a component varied with the information use task.

For the Introduction component there were significant differences among the means of the functional units for three tasks: "Learn about background", F(10,88)=3.867, p<.001; "Refer to arguments", F(10,88)=4.997, p<.001; and "Keeping up", F(10,88)=2.587, p=.009.
Specifically, for the task "Learn about background", the functional unit "review previous research" was rated significantly higher than "provide reason to conduct research", "summarize methods", and "state value of present research"; and "point out contribution of previous research" was rated significantly higher than "summarize methods" and "state value of present research".  For the task "Refer to arguments", the functional unit "indicate a gap in previous research" was rated significantly higher than "clarify definition", "narrow down topic", and "summarize methods".  Also, "provide reason to conduct research" and "state value of present research" were rated significantly higher than "narrow down topic" and "summarize methods".  No functional unit was significantly different from the others for the task "Keeping up".

For the Methods component there were also significant differences among the means of the functional units for three tasks: "Refer to facts", F(9,80)=3.657, p=.001; "Refer to arguments", F(9,80)=2.794, p=.007; and "Learn how to", F(9,80)=3.004, p=.004.  The functional unit "justify methods" was rated significantly lower than "outline experimental procedures" and "describe tasks" for the task "Refer to facts", whereas it was rated significantly higher than "preview methods" and "describe participants" for the task "Refer to arguments".  For the task "Learn how to", no functional unit was significantly different from the other functional units.

There were significant differences among the means of the functional units within the Results component for two tasks: "Refer to facts", F(6,56)=5.126, p<.001, and "Learn how to", F(6,56)=3.462, p=.006.  For the task "Refer to facts", the functional unit "state findings" was rated significantly higher than "describe analysis conducted", "state non-validated findings", and "restate hypotheses".  For the task "Learn how to", the functional unit "describe analysis conducted"
was rated significantly higher than "state non-validated findings", "evaluate hypotheses", "summarize results", "state additional findings", and "restate hypotheses".

There were two tasks for which the means of the functional units within the Discussion component differed significantly: "Refer to facts", F(12,104)=2.026, p=.029, and "Learn about background", F(12,104)=4.174, p<.001.  For the task "Learn about background", the functional units "provide established knowledge of topic" and "compare results with previous research" were rated significantly higher than "generalize results", "recommend future research", "indicate limitations of outcome", "evaluate methodology", and "ward off counterclaim".  No functional unit was shown to be significantly different from the others for the task "Refer to facts".

For some tasks a component showed statistically significant differences overall, while the post-hoc tests did not find significant differences between the functional units in that component.  In these cases, the functional units in that component may be indiscriminately useful for those tasks.  On the other hand, a single functional unit may be significantly more or less useful than the other functional units for different tasks, suggesting that within the same component one functional unit may be perceived as substantially more useful than the others for certain tasks.
Table 4.4: Mean scores and significance tests for functional units by tasks
(columns: Refer to facts | Learn about background | Refer to arguments | Keeping up | Learn how to)

Introduction
  Claim importance of topic                      2.89  3.89  3.89  4.00  2.67
  Narrow down topic                              2.67  4.11  2.56  3.44  2.22
  Review previous research                       4.44  5.00  3.89  4.22  3.33
  Indicate a gap in previous research            4.11  4.56  4.44  4.56  2.78
  Provide reason to conduct research             3.00  3.78  4.11  4.44  2.89
  Point out contribution of previous research    3.56  4.78  3.89  4.44  2.89
  Introduce present research                     3.00  3.89  3.11  4.00  2.78
  Present hypotheses                             3.56  3.56  3.22  3.78  3.22
  Clarify definition                             3.44  4.11  2.67  3.33  2.89
  Summarize methods                              3.22  3.33  2.44  3.11  4.22
  State value of present research                2.78  3.22  4.11  3.44  3.00
  Significance: F(10,88)=1.829, p=.067 | F(10,88)=3.867, p<.001 | F(10,88)=4.997, p<.001 | F(10,88)=2.587, p<.05 | F(10,88)=1.830, p=.067

Methods
  Relate to prior/next experiments               3.00  4.00  2.67  3.67  3.78
  Justify methods                                3.11  3.89  3.78  3.89  4.89
  Preview methods                                3.11  3.00  2.33  3.11  4.33
  Describe participants                          4.00  2.89  2.33  2.89  4.33
  Describe materials                             4.33  3.22  2.67  3.11  5.00
  Describe tasks                                 4.44  3.22  2.78  3.44  5.00
  Outline experimental procedures                4.56  3.33  2.78  3.44  5.00
  Present variables                              4.11  3.56  2.56  3.22  4.78
  Outline data analysis procedures               3.89  3.22  2.89  3.33  4.67
  Present reliability/validity                   3.67  3.56  3.67  3.11  4.22
  Significance: F(9,80)=3.657, p<.05 | F(9,80)=.827, p=.593 | F(9,80)=2.794, p<.05 | F(9,80)=.591, p=.801 | F(9,80)=3.004, p<.05
Table 4.4 (continued)
(columns: Refer to facts | Learn about background | Refer to arguments | Keeping up | Learn how to)

Results
  Describe analysis conducted                    3.89  3.00  2.89  3.67  4.67
  Restate hypotheses                             2.44  3.22  3.22  3.67  2.78
  State findings                                 5.00  3.33  3.56  4.22  3.67
  State additional findings                      3.78  3.11  3.89  4.00  3.00
  State non-validated findings                   3.33  3.22  3.89  3.78  3.22
  Evaluate hypotheses                            4.00  3.33  3.78  3.67  3.22
  Summarize results                              4.00  3.67  3.44  4.11  3.22
  Significance: F(6,56)=5.126, p<.001 | F(6,56)=.304, p=.932 | F(6,56)=1.056, p=.400 | F(6,56)=.516, p=.794 | F(6,56)=3.462, p<.05

Discussion
  Recapitulate present research                  4.00  4.00  3.67  4.00  3.22
  Provide established knowledge of topic         3.56  4.89  4.33  3.89  3.11
  Highlight overall outcome                      4.78  4.00  4.44  4.22  3.33
  Indicate (un)expected outcome                  3.89  3.56  4.11  3.78  2.89
  Compare results with previous research         4.56  4.89  4.56  4.00  3.44
  Interpret outcome                              4.44  3.78  4.44  4.22  3.33
  Support explanation of results                 3.89  4.33  4.78  3.89  3.00
  Generalize results                             3.67  3.33  4.22  3.67  3.00
  Recommend future research                      3.22  3.33  3.44  4.33  3.44
  Indicate significance of outcome               3.44  3.78  4.11  4.00  2.78
  Ward off counterclaim                          3.56  3.00  4.00  3.56  3.11
  Indicate limitations of outcome                4.33  3.33  3.56  3.78  3.89
  Evaluate methodology                           3.67  3.22  3.67  3.67  4.22
  Significance: F(12,104)=2.026, p<.05 | F(12,104)=4.174, p<.001 | F(12,104)=1.479, p=.144 | F(12,104)=.657, p=.788 | F(12,104)=1.123, p=.350
Table 4.5: Ranking scores of functional units on usefulness
(columns: Refer to facts | Learn about background | Refer to arguments | Keeping up | Learn how to)

Introduction
  Claim importance of topic                      13   19   32*  18   14
  Narrow down topic                               4   16    2    5    7
  Review previous research                       39*  41*  29   35*  26
  Indicate a gap in previous research            24   27   30   32   18
  Provide reason to conduct research              6    7   25   19   12
  Point out contribution of previous research    19   28   22   23   14
  Introduce present research                     10    7    4    8    5
  Present hypotheses                             15    6    6   11   13
  Clarify definition                             25   16    3    7   10
  Summarize methods                               8    1    1    6   40*
  State value of present research                 5    0   14    4    9

Methods
  Relate to prior/next experiments                6   30   28   27*   0
  Justify methods                                 8   36*  37*  22   23
  Preview methods                                 9   17   12   11   12
  Describe participants                          16    8    9    6   13
  Describe materials                             31   14    7   12   32
  Describe tasks                                 41*  17   14   23   26
  Outline experimental procedures                27   16   17   27*  33*
  Present variables                              15   13    6   16   13
  Outline data analysis procedures                9    9   13   17   13
  Present reliability/validity                    6    8   25    7    3

Results
  Describe analysis conducted                    19   15   26   23   43*
  Restate hypotheses                              5   27   19   23   18
  State findings                                 45*  30   32*  36*  41
  State additional findings                      26   20   31   20   12
  State non-validated findings                   20   22   23   22   23
  Evaluate hypotheses                            24   23   22   20   20
  Summarize results                              29   31*  15   24   11

Discussion
  Recapitulate present research                  18   22   16   12   12
  Provide established knowledge of topic         17   41*  21   20    5
  Highlight overall outcome                      31*  21   20   32*  10
  Indicate (un)expected outcome                  13   12   15   11    1
  Compare results with previous research         22   34   20   15   21
  Interpret outcome                              21   11   12   22   17
  Support explanation of results                  5   17   23*  11    5
  Generalize results                              5    4    9    6   12
  Recommend future research                       5    0    2   14   11
  Indicate significance of outcome               10    4    7    8    8
  Ward off counterclaim                           3    1   15    4    5
  Indicate limitations of outcome                12    1    7    8   24
  Evaluate methodology                            6    0    1    5   37*

* Top-ranked functional unit within the component for that task.
Participants also ranked the top six useful functional units for each task.  A score was calculated for each functional unit based on the frequency with which it was assigned each rank, using the formula Σ(7-n)*freq(n), where n is the rank and freq(n) is the number of times the unit was assigned rank n.  Since one participant did not complete the ranking, the ranking scores come from eight participants.  In the comment box, some participants expressed difficulty in ranking the top six functional units even though they completed the ranking, so only the functional units with the highest ranking scores were considered.  These results are presented in Table 4.5, with the highest-ranking functional units for each component and task marked.

As shown in Table 4.6, the functional unit ranked highest in the second "ranking" procedure was not always consistent with the highest-rated functional unit in the initial "rating" of functional units.  For example, for the task "Refer to facts", the functional unit ranked first within Methods is "describe tasks", whereas the functional unit rated highest within Methods is "outline experimental procedures".  Functional units with the top ranking scores were used to complement the rating scores in the subsequent analysis.
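The scoring rule Σ(7-n)*freq(n) can be sketched as a short Python function.  This is a minimal illustration of the formula only; the rank lists below are invented and do not reproduce any participant's actual responses.

```python
from collections import Counter

def ranking_score(ranks):
    """Compute the ranking score for one functional unit.

    ranks: list of rank positions (1-6) the unit received across
    participants.  A rank of 1 contributes 6 points, a rank of 6
    contributes 1 point, per the formula sum over n of (7 - n) * freq(n).
    """
    freq = Counter(ranks)
    return sum((7 - n) * count for n, count in freq.items())

# e.g. a unit ranked 1st by five participants and 2nd by three:
print(ranking_score([1, 1, 1, 1, 1, 2, 2, 2]))  # 5*6 + 3*5 = 45
```

Units that a participant left unranked simply contribute nothing, so a unit never placed in anyone's top six scores 0.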
Table 4.6: A comparison between functional units with highest rating and ranking scores

Refer to facts
  I: rated highest: review previous research; ranked highest: review previous research
  M: rated highest: outline experimental procedures; ranked highest: describe tasks
  R: rated highest: state findings; ranked highest: state findings
  D: rated highest: highlight overall outcome; ranked highest: highlight overall outcome

Learn about background
  I: rated highest: review previous research; ranked highest: review previous research
  M: rated highest: relate to prior/next experiments; ranked highest: justify methods
  R: rated highest: summarize results; ranked highest: summarize results
  D: rated highest: provide established knowledge of topic, compare results with previous research; ranked highest: provide established knowledge of topic

Refer to arguments
  I: rated highest: indicate a gap in previous research; ranked highest: claim importance of topic
  M: rated highest: justify methods; ranked highest: justify methods
  R: rated highest: state additional findings, state non-validated findings; ranked highest: state findings
  D: rated highest: support explanation of results; ranked highest: support explanation of results

Keeping up
  I: rated highest: indicate a gap in previous research; ranked highest: review previous research
  M: rated highest: justify methods; ranked highest: relate to prior/next experiments, outline experimental procedures
  R: rated highest: state findings; ranked highest: state findings
  D: rated highest: recommend future research; ranked highest: highlight overall outcome

Learn how to
  I: rated highest: summarize methods; ranked highest: summarize methods
  M: rated highest: describe materials, describe tasks, outline experimental procedures; ranked highest: outline experimental procedures
  R: rated highest: describe analysis conducted; ranked highest: describe analysis conducted
  D: rated highest: evaluate methodology; ranked highest: evaluate methodology

The tables below present how the functional units in three columns vary from task to task within each component.  Tables 4.7 to 4.11 show the mean rating scores of the functional units in three columns for the five information use tasks.  The three columns, 4.0-5.0, 3.0-3.9, and 2.0-2.9, represent degrees of usefulness of the functional units for each of the five tasks (1 = Not Useful at All, 5 = Highly Useful).  The standard deviations of the functional unit mean scores for the five tasks were not large: 1.11, 1.16, 1.16, 1.02, and 1.23 respectively.
This analysis, together with the above analysis of ranking scores, was conducted to determine the classification of functional units into different categories according to their varying usefulness for each task, in order to answer research questions 2 and 3.

Table 4.7: Mean scores of functional units by task "Learn about background"

Introduction
  4.0-5.0: review previous research (5.00); point out contribution of previous research (4.78); indicate a gap in previous research (4.56); narrow down topic (4.11); clarify definition (4.11)
  3.0-3.9: claim importance of topic (3.89); introduce present research (3.89); provide reason to conduct research (3.78); present hypotheses (3.56); summarize methods (3.33); state value of present research (3.22)
  2.0-2.9: (none)

Methods
  4.0-5.0: relate to prior/next experiments (4.00)
  3.0-3.9: justify methods (3.89); present variables (3.56); present reliability/validity (3.56); outline experimental procedures (3.33); describe materials (3.22); describe tasks (3.22); outline data analysis procedures (3.22); preview methods (3.00)
  2.0-2.9: describe participants (2.89)

Results
  4.0-5.0: (none)
  3.0-3.9: summarize results (3.67); state findings (3.33); evaluate hypotheses (3.33); restate hypotheses (3.22); state non-validated findings (3.22); state additional findings (3.11); describe analysis conducted (3.00)
  2.0-2.9: (none)

Discussion
  4.0-5.0: provide established knowledge of topic (4.89); compare results with previous research (4.89); support explanation of results (4.33); recapitulate present research (4.00); highlight overall outcome (4.00)
  3.0-3.9: interpret outcome (3.78); indicate significance of outcome (3.78); indicate (un)expected outcome (3.56); generalize results (3.33); recommend future research (3.33); indicate limitations of outcome (3.33); evaluate methodology (3.22); ward off counterclaim (3.00)
  2.0-2.9: (none)
Table 4.8: Mean scores of functional units by task "Refer to facts"

Introduction
  4.0-5.0: review previous research (4.44); indicate a gap in previous research (4.11)
  3.0-3.9: point out contribution of previous research (3.56); present hypotheses (3.56); clarify definition (3.44); summarize methods (3.22); provide reason to conduct research (3.00); introduce present research (3.00)
  2.0-2.9: claim importance of topic (2.89); state value of present research (2.78); narrow down topic (2.67)

Methods
  4.0-5.0: outline experimental procedures (4.56); describe tasks (4.44); describe materials (4.33); present variables (4.11); describe participants (4.00)
  3.0-3.9: outline data analysis procedures (3.89); present reliability/validity (3.67); justify methods (3.11); preview methods (3.11); relate to prior/next experiments (3.00)
  2.0-2.9: (none)

Results
  4.0-5.0: state findings (5.00); evaluate hypotheses (4.00); summarize results (4.00)
  3.0-3.9: describe analysis conducted (3.89); state additional findings (3.78); state non-validated findings (3.33)
  2.0-2.9: restate hypotheses (2.44)

Discussion
  4.0-5.0: highlight overall outcome (4.78); compare results with previous research (4.56); interpret outcome (4.44); indicate limitations of outcome (4.33); recapitulate present research (4.00)
  3.0-3.9: indicate (un)expected outcome (3.89); support explanation of results (3.89); generalize results (3.67); evaluate methodology (3.67); provide established knowledge of topic (3.56); ward off counterclaim (3.56); indicate significance of outcome (3.44); recommend future research (3.22)
  2.0-2.9: (none)

Table 4.9: Mean scores of functional units by task "Refer to arguments"

Introduction
  4.0-5.0: indicate a gap in previous research (4.44); provide reason to conduct research (4.11); state value of present research (4.11)
  3.0-3.9: claim importance of topic (3.89); review previous research (3.89); point out contribution of previous research (3.89); present hypotheses (3.22); introduce present research (3.11)
  2.0-2.9: clarify definition (2.67); narrow down topic (2.56); summarize methods (2.44)

Methods
  4.0-5.0: (none)
  3.0-3.9: justify methods (3.78); present reliability/validity (3.67)
  2.0-2.9: outline data analysis procedures (2.89); describe tasks (2.78); outline experimental procedures (2.78); relate to prior/next experiments (2.67); describe materials (2.67); present variables (2.56); preview methods (2.33); describe participants (2.33)

Results
  4.0-5.0: (none)
  3.0-3.9: state additional findings (3.89); state non-validated findings (3.89); evaluate hypotheses (3.78); state findings (3.56); summarize results (3.44); restate hypotheses (3.22)
  2.0-2.9: describe analysis conducted (2.89)

Discussion
  4.0-5.0: support explanation of results (4.78); compare results with previous research (4.56); highlight overall outcome (4.44); interpret outcome (4.44); provide established knowledge of topic (4.33); generalize results (4.22); indicate (un)expected outcome (4.11); indicate significance of outcome (4.11); ward off counterclaim (4.00)
  3.0-3.9: recapitulate present research (3.67); evaluate methodology (3.67); indicate limitations of outcome (3.56); recommend future research (3.44)
  2.0-2.9: (none)

Table 4.10: Mean scores of functional units by task "Keeping up"

Introduction
  4.0-5.0: indicate a gap in previous research (4.56); provide reason to conduct research (4.44); point out contribution of previous research (4.44); review previous research (4.22); claim importance of topic (4.00); introduce present research (4.00)
  3.0-3.9: present hypotheses (3.78); narrow down topic (3.44); state value of present research (3.44); clarify definition (3.33); summarize methods (3.11)
  2.0-2.9: (none)

Methods
  4.0-5.0: (none)
  3.0-3.9: justify methods (3.89); relate to prior/next experiments (3.67); describe tasks (3.44); outline experimental procedures (3.44); outline data analysis procedures (3.33); present variables (3.22); preview methods (3.11); describe materials (3.11); present reliability/validity (3.11)
  2.0-2.9: describe participants (2.89)

Results
  4.0-5.0: state findings (4.22); summarize results (4.11); state additional findings (4.00)
  3.0-3.9: state non-validated findings (3.78); describe analysis conducted (3.67); restate hypotheses (3.67); evaluate hypotheses (3.67)
  2.0-2.9: (none)

Discussion
  4.0-5.0: recommend future research (4.33); highlight overall outcome (4.22); interpret outcome (4.22); recapitulate present research (4.00); compare results with previous research (4.00); indicate significance of outcome (4.00)
  3.0-3.9: provide established knowledge of topic (3.89); support explanation of results (3.89); indicate (un)expected outcome (3.78); indicate limitations of outcome (3.78); generalize results (3.67); evaluate methodology (3.67); ward off counterclaim (3.56)
  2.0-2.9: (none)

Table 4.11: Mean scores of functional units by task "Learn how to"

Introduction
  4.0-5.0: summarize methods (4.22)
  3.0-3.9: review previous research (3.33); present hypotheses (3.22); state value of present research (3.00)
  2.0-2.9: provide reason to conduct research (2.89); point out contribution of previous research (2.89); clarify definition (2.89); indicate a gap in previous research (2.78); introduce present research (2.78); claim importance of topic (2.67); narrow down topic (2.22)

Methods
  4.0-5.0: describe materials (5.00); describe tasks (5.00); outline experimental procedures (5.00); justify methods (4.89); present variables (4.78); outline data analysis procedures (4.67); preview methods (4.33); describe participants (4.33); present reliability/validity (4.22)
  3.0-3.9: relate to prior/next experiments (3.78)
  2.0-2.9: (none)

Results
  4.0-5.0: describe analysis conducted (4.67)
  3.0-3.9: state findings (3.67); state non-validated findings (3.22); evaluate hypotheses (3.22); summarize results (3.22); state additional findings (3.00)
  2.0-2.9: restate hypotheses (2.78)

Discussion
  4.0-5.0: evaluate methodology (4.22)
  3.0-3.9: indicate limitations of outcome (3.89); compare results with previous research (3.44); recommend future research (3.44); highlight overall outcome (3.33); interpret outcome (3.33); recapitulate present research (3.22); provide established knowledge of topic (3.11); ward off counterclaim (3.11); support explanation of results (3.00); generalize results (3.00)
  2.0-2.9: indicate (un)expected outcome (2.89); indicate significance of outcome (2.78)
To differentiate the functions of these information units, the functional units were grouped in terms of how useful they were for a particular task, based on their rating and ranking scores on Survey II.  The categorization of functional units by mean scores was intended to generate a richer set of functional units.  The result is a task-centered functional unit taxonomy, shown in Table 4.12, in which the functional units fall into three categories: primary, related, and additional related.  The three categories represent the degree of usefulness of the functional units within the IMRD components for each of the five tasks, as expressed by the participants.

From all four components, the functional unit with the highest rating score was selected as the "primary functional unit".  Those selected as "related functional units in the primary component" were the other functional units that scored from 4.0 to 5.0 in the same component as the primary functional unit.  The "additional related functional units in other components" were the functional units with the highest rating scores in the other three components; the functional units with top ranking scores were added to this category if they did not duplicate those already in it.  This organizational structure was designed to enable close reading of the most relevant information in a component for each task, while also allowing close reading of relevant information selected from the other components.

For example, for the task "Learn about background", the functional unit "review previous research" (5.00) within the Introduction component received the highest rating score across the four components and thus was selected as the primary functional unit.  Apart from "review previous research", the functional units that scored from 4.0 to 5.0 in the Introduction component were selected as related functional units: "point out contribution of previous research" (4.78), "indicate a gap in previous research"
(4.56), "narrow down topic" (4.11), and "clarify definition" (4.11).  The functional units rated highest in the components other than the Introduction were selected as additional related functional units: "relate to prior/next experiments" (4.00) in Methods, "summarize results" (3.67) in Results, and "provide established knowledge of topic" (4.89) and "compare results with previous research" (4.89) in Discussion.  For this task, the functional unit that ranked first yet differed from the highest-rated units across all four components was "justify methods" within Methods, so "justify methods" was added to the category of additional related functional units.

Table 4.12: Task-centered functional unit taxonomy
(Primary: highest rating score across the four components.  Related: other functional units scoring 4.0-5.0 in the primary component.  Additional related: highest rating score in the other three components, plus first-ranked units if not duplicates, marked "(rank)".)

Learn about background
  Primary: I: review previous research (5.00)
  Related: point out contribution of previous research (4.78); indicate a gap in previous research (4.56); narrow down topic (4.11); clarify definition (4.11)
  Additional related: M: relate to prior/next experiments (4.00), justify methods (rank); R: summarize results (3.67); D: provide established knowledge of topic (4.89), compare results with previous research (4.89)

Refer to facts
  Primary: R: state findings (5.00)
  Related: evaluate hypotheses (4.00); summarize results (4.00)
  Additional related: I: review previous research (4.44); M: outline experimental procedures (4.56), describe tasks (rank); D: highlight overall outcome (4.78)

Refer to arguments
  Primary: D: support explanation of results (4.78)
  Related: compare results with previous research (4.56); highlight overall outcome (4.44); interpret outcome (4.44); provide established knowledge of topic (4.33); generalize results (4.22); indicate (un)expected outcome (4.11); indicate significance of outcome (4.11); ward off counterclaim (4.00)
  Additional related: I: indicate a gap in previous research (4.44), claim importance of topic (rank); M: justify methods (3.78); R: state additional findings (3.89), state non-validated findings (3.89), state findings (rank)

Learn how to
  Primary: M: describe materials (5.00), describe tasks (5.00), outline experimental procedures (5.00)
  Related: justify methods (4.89); present variables (4.78); outline data analysis procedures (4.67); preview methods (4.33); describe participants (4.33); present reliability/validity (4.22)
  Additional related: I: summarize methods (4.22); R: describe analysis conducted (4.67); D: evaluate methodology (4.22)

Keeping up
  Primary: I: indicate a gap in previous research (4.56)
  Related: provide reason to conduct research (4.44); point out contribution of previous research (4.44); review previous research (4.22); claim importance of topic (4.00); introduce present research (4.00)
  Additional related: M: justify methods (3.89), relate to prior/next experiments (rank), outline experimental procedures (rank); R: state findings (4.22); D: recommend future research (4.33), highlight overall outcome (rank)

The results indicate that relationships exist between certain tasks and components.  The Introduction component is more useful for the tasks "Learn about background" and "Keeping up"; the Methods component is more useful for the task "Learn how to"; the Results component for the task "Refer to facts"; and the Discussion component for the task "Refer to arguments".
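The rating-based selection rules described above can be sketched in Python.  This is a sketch under stated assumptions: the ratings are the "Learn about background" means reported earlier, restricted to a few units per component for brevity, and the ranking-based additions (the "(rank)" entries) are omitted.

```python
# "Learn about background" mean ratings, a subset per component (from Table 4.4).
ratings = {
    "Introduction": {"review previous research": 5.00,
                     "point out contribution of previous research": 4.78,
                     "indicate a gap in previous research": 4.56,
                     "narrow down topic": 4.11,
                     "clarify definition": 4.11,
                     "claim importance of topic": 3.89},
    "Methods": {"relate to prior/next experiments": 4.00,
                "justify methods": 3.89},
    "Results": {"summarize results": 3.67,
                "state findings": 3.33},
    "Discussion": {"provide established knowledge of topic": 4.89,
                   "compare results with previous research": 4.89},
}

# Primary: the single highest-rated unit across all four components.
primary_component, primary_unit = max(
    ((comp, unit) for comp, units in ratings.items() for unit in units),
    key=lambda cu: ratings[cu[0]][cu[1]])

# Related: other units scoring 4.0-5.0 in the primary component.
related = [u for u, s in ratings[primary_component].items()
           if s >= 4.0 and u != primary_unit]

# Additional related: the top-rated unit(s) in each of the other components.
additional = {comp: [u for u, s in units.items() if s == max(units.values())]
              for comp, units in ratings.items() if comp != primary_component}

print(primary_component, primary_unit)  # Introduction review previous research
print(related)
print(additional)
```

With these inputs the sketch reproduces the "Learn about background" row of Table 4.12, apart from the rank-based addition of "justify methods".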
Normally, for each component, the number of functional units scoring 4.0-5.0 is no larger than the number scoring 3.0-3.9.  However, for the task "Refer to arguments" a larger number of functional units within the Discussion component fall into the 4.0-5.0 range, as do those in the Methods component for the task "Learn how to".  Therefore, compared with the pairings between other components and tasks, as shown in Table 4.9 and Table 4.11, there is a much closer relationship between the functional units within the Methods component and the "Learn how to" task, and between the functional units within the Discussion component and the "Refer to arguments" task.

The functional units that occur most frequently in a particular component were observed to have the highest perceived usefulness.  From the analysis of twelve sample psychology articles, the functional unit "review previous research" is the most commonly present in the Introduction, and the validation study shows it to be highly useful in the Introduction for most tasks: it is the primary functional unit for "Learn about background", a related functional unit for "Keeping up", and an additional related functional unit for "Refer to facts".  The same holds for the functional unit "state findings" in the Results and the functional unit "outline experimental procedures" in the Methods, each of which appears highly useful for three tasks in its component.

A different primary functional unit may relate to different functional units in the same component.  For the Introduction component, which is relevant to the two tasks "Learn about background" and "Keeping up", the primary functional unit is "review previous research" for the former task but "indicate a gap in previous research" for the latter.  When "review previous research" is the primary functional unit, two related functional units ("point out contribution of previous research", "indicate a gap in previous research") comment on the primary functional unit and the other two related functional units ("narrow down topic", "clarify definition") set the context for it.  When "indicate a gap in previous research" is the primary functional unit, two related functional units ("point out contribution of previous research", "review previous research") address the primary functional unit from other angles, two related functional units ("provide reason to conduct research", "introduce present research") extend its meaning, and one ("claim importance of topic") sets the context for it.  The additional related functional units show a weak relationship with the primary functional units, except those in the Discussion, which seem to provide summarized information.

Another analysis was conducted to examine the usefulness of each of the 41 functional units, as presented in Table 4.13.  Each functional unit is marked according to whether it is a primary, related, or additional related functional unit for each of the five tasks, and whether it scored 4.0-5.0 for each task.
Table 4.13: 41 functional units with varying usefulness
(For each task, a functional unit is marked P: primary, R: related, or A: additional related; * indicates a rating of 4.0-5.0 for that task.)

Introduction
  claim importance of topic: Refer to arguments (A); Keeping up (R*)
  narrow down topic: Learn about background (R*)
  clarify definition: Learn about background (R*)
  review previous research: Learn about background (P*); Refer to facts (A*); Keeping up (R*)
  indicate a gap in previous research: Learn about background (R*); Refer to facts (*); Refer to arguments (A*); Keeping up (P*)
  provide reason to conduct research: Refer to arguments (*); Keeping up (R*)
  point out contribution of previous research: Learn about background (R*); Keeping up (R*)
  introduce present research: Keeping up (R*)
  present hypotheses: (none)
  summarize methods: Learn how to (A*)
  state value of present research: Refer to arguments (*)
Methods
  relate to prior/next experiments: Learn about background (A*); Keeping up (A)
  justify methods: Learn about background (A); Refer to arguments (A); Keeping up (A); Learn how to (R*)
  preview methods: Learn how to (R*)
  describe participants: Refer to facts (*); Learn how to (R*)
  describe materials: Refer to facts (*); Learn how to (P*)
  describe tasks: Refer to facts (A*); Learn how to (P*)
  outline experimental procedures: Refer to facts (A*); Keeping up (A); Learn how to (P*)
  present variables: Refer to facts (*); Learn how to (R*)
  outline data analysis procedures: Learn how to (R*)
  present reliability/validity: Learn how to (R*)
Results
  describe analysis conducted: Learn how to (A*)
  restate hypotheses: (none)
  state findings: Refer to facts (P*); Refer to arguments (A); Keeping up (A*)
  state additional findings: Refer to arguments (A*)
  state non-validated findings: Refer to arguments (A)
  evaluate hypotheses: Refer to facts (R*)
  summarize results: Learn about background (A); Refer to facts (R*); Keeping up (*)
Discussion
  recapitulate present research: Learn about background (*); Refer to facts (*); Keeping up (*)
  provide established knowledge of topic: Learn about background (A*); Refer to arguments (R*)
  highlight overall outcome: Learn about background (*); Refer to facts (A*); Refer to arguments (R*); Keeping up (A*)
  indicate (un)expected outcome: Refer to arguments (R*)
  compare results with previous research: Learn about background (A*); Refer to facts (*); Refer to arguments (R*); Keeping up (*)
  interpret outcome: Refer to facts (*); Refer to arguments (R*); Keeping up (*)
  support explanation of results: Learn about background (*); Refer to arguments (P*)
  generalize results: Refer to arguments (R*)
  recommend future research: Keeping up (A*)
  indicate significance of outcome: Refer to arguments (R*); Keeping up (*)
  ward off counterclaim: Refer to arguments (R*)
  indicate limitations of outcome: Refer to facts (*)
  evaluate methodology: Learn how to (A*)

Most functional units are useful only as a related or an additional related functional unit.  Two functional units, "review previous research" and "indicate a gap in previous research", show usefulness at all three levels across three different tasks, in each case with a rating of 4.0-5.0.  Three functional units, "indicate a gap in previous research", "highlight overall outcome" and "compare results with previous research", were scored 4.0-5.0, an indicator of high usefulness, for all tasks except "Learn how to".  Some functional units were related to a single task only, such as "summarize methods", "describe analysis conducted" and "evaluate methodology" for the task "Learn how to".  In some components, almost all functional units were related only to particular tasks, such as the functional units within Methods for the task "Learn how to".  This reveals that the information contained in the Methods is much more specific than that in the other components.  These results suggest that some functional units in a component convey more general information and thus are widely applicable, while others are more specific and more limited in their application to particular tasks.

4.4 SUMMARY
Results of the two validation surveys were used to identify relationships between the 41 functional units within the four components and the 5 information use tasks, and furthermore the relationships among a set of functional units for a particular task.  The functional units were classified into three categories (primary, related, additional related) according to their perceived usefulness for each task and their location.  Findings with respect to research questions 2 and 3 indicate that (a) the usefulness of a component and of the functional units within a component depends on the information task, and (b) the extent to which a functional unit is related to other functional units depends on the information task.  The individual functional units and their relationships with information use tasks were used to inform the design of a prototype system in the next phase of the study, as described in Chapter 5.
5 EVALUATION OF THE UTILITY OF FUNCTIONAL UNITS IN A PROTOTYPE SYSTEM: METHODS

5.1 OVERVIEW
The objective of the second phase of the research was to implement and evaluate the use of functional units to support particular information tasks when using scholarly journal articles.  Based on the functional unit taxonomy developed in the previous chapters, a prototype journal reading system was designed and implemented, and used as a testing environment.  This phase of the study was motivated by the following research questions:

Research Question 4: Does the signaling of functional units to readers enhance reading effectiveness?
  4.1a Does the signaling of functional units help readers to complete tasks more effectively?
  4.1b If so, how does the signaling of functional units help readers to complete tasks more effectively?
  4.2 Does the impact of functional units on effectiveness vary with reading tasks?

Research Question 5: Does the signaling of functional units to readers enhance reading efficiency?
  5.1a Does the signaling of functional units help readers to complete tasks more efficiently?
  5.1b If so, how does the signaling of functional units help readers to complete tasks more efficiently?
  5.2 Does the impact of functional units on efficiency vary with reading tasks?

The purpose of this portion of the study was to test whether and how the signaling of functional units enhances reading effectiveness and efficiency.  Reading differences between the media of paper and screen can be captured by outcome metrics (e.g., speed, accuracy, comprehension) and process metrics (e.g., manipulation, navigation) (Dillon, 1992).  Therefore, this study investigates the impact of signaling functional units from two perspectives: reading outcomes and reading process.
Based on the research questions addressed in this chapter, the following hypotheses regarding effectiveness and efficiency in reading outcomes and process were generated and tested:

H1 Participants in the experimental condition would feel they had read more relevant text than those in the baseline condition.
H2 Participants in the experimental condition would feel they had comprehended more relevant text than those in the baseline condition.
H3 Participants in the experimental condition would be more satisfied with the information obtained than those in the baseline condition.
H4 Participants in the experimental condition would be more confident in fully answering the question than those in the baseline condition.
H5 Participants in the experimental condition would feel more efficient in obtaining information than those in the baseline condition.
H6 More relevant text would be highlighted in the experimental condition than in the baseline condition.
H7 The comprehension question would be answered more fully in the experimental condition than in the baseline condition.
H8 The mean time to complete an experimental reading task would be lower in the experimental condition than in the baseline condition.
H9 The primary component would be explored more in the experimental condition than in the baseline condition.
H10 The impact of functional units on effectiveness would vary with tasks.
H11 The impact of functional units on efficiency would vary with tasks.

The effectiveness and efficiency of the reading process were investigated primarily through qualitative data.  The assumptions underlying this analysis are based on Relevance Theory: in the experimental condition participants could focus on the highly relevant information within an article; furthermore, participants could use pieces of relevant information across the article, move from the most relevant to the least relevant information, and stop when they judged they had obtained adequate information for a task.
The data sources and analyses for research questions 4 and 5 are outlined in Table 5.1 and Table 5.2.

Table 5.1: Data sources and analyses for Research Question 4

RQ 4.1a (IF effective):
- perceived amount of relevant text read, measured on a 7-point Likert scale (post-task questionnaire Q3; statistical analysis)
- perceived extent of relevant text comprehended, measured on a 7-point Likert scale (post-task questionnaire Q4; statistical analysis)
- satisfaction with the information obtained, measured on a 7-point Likert scale (post-task questionnaire Q5; statistical analysis)
- confidence in fully answering the question, measured on a 7-point Likert scale (post-task questionnaire Q7; statistical analysis)
- relevant information obtained, measured by the amount of relevant paragraphs highlighted (screen recordings; statistical analysis)
- quality of answer, measured by the amount of major concepts covered (screen recordings; content analysis followed by statistical analysis)

RQ 4.1b (HOW effective):
- usefulness of functionalities, measured on a 7-point Likert scale (post-task questionnaire Q8, Q9, Q10; statistical analysis)
- likes and dislikes, comments and suggestions, measured by the features mentioned (post-study questionnaire; content analysis)
- use of functional units, measured by how participants "use" and "move" (screen recordings; content analysis)
- how participants read effectively with the experimental system, measured by the reading styles mentioned (interview transcripts; content analysis)

Table 5.2: Data sources and analyses for Research Question 5

RQ 5.1a (IF efficient):
- perceived efficiency in obtaining information, measured on a 7-point Likert scale (post-task questionnaire Q6; statistical analysis)
- time on each task, measured by the time elapsed from the task page loading to clicking on the "Done" button to submit the answer (screen recordings; statistical analysis)

RQ 5.1b (HOW efficient):
- amount of text explored, measured by which components were highlighted (screen recordings; content analysis)
- how participants read efficiently with the experimental system, measured by the reading styles mentioned (interview transcripts; content analysis)

5.2 SYSTEM DESIGN
Based on the relationships between functional units and information use tasks developed in phase one, a prototype journal system was designed and implemented in order to test the practical value of using the functional unit taxonomy to support journal reading.  This section describes the design rationale, the interface functionalities, and the content presented in the prototype system.

5.2.1 Design Rationale
The aim in applying the Phase I results to the design of a prototype system was to examine whether reading effectiveness and efficiency would be improved by signaling functional units.  As discussed in the Phase I results (Chapter 4), a close relationship exists between functional units within IMRD components and information use tasks, which could be utilized to enhance reading effectiveness and efficiency.  In particular, these results indicate how functional units could support information tasks when using scholarly journal articles in the following ways:

(a) A functional unit is the smallest information unit.  By employing functional units, we can help readers to focus on the highly relevant information within an article.
(b) A functional unit is associated with other functional units in the same and different components for a particular task when they share a high level of perceived task usefulness.  By employing these associations between functional units, we can help readers to use pieces of relevant information across the article.
(c) Functional units are classified into three categories according to how useful they are for a particular task.
By employing functional units of varying usefulness, we can help readers to move from the most relevant to the least relevant information, and stop at the amount of information the reader desires.

Currently, there are a limited number of approaches to using genre in information system design that may inform the design of a system signaling functional units.  Genre information has been incorporated into document representations differently in various studies.  For example, genre has been described in each search result (Rosso, 2005; Glover et al., 2001), it has been used in query formulation where search was limited to specified genres (Roussinov et al., 2001), and it has been used to customize the ranking of search results for different task scenarios (Freund, 2008; Yeung, Freund & Clarke, 2007).  Following these approaches, the signaling of functional units can be realized by labelling each paragraph by function or by highlighting the functional units relevant to a particular task.  The prototype system therefore made use of two mechanisms to signal functional units: a functional unit indicator and a functional unit selector.  Both were incorporated into the system so that the system could suggest information according to its relevance to a particular task and, additionally, could grant users the autonomy to select the information they consider relevant for that task.  The system thus signals a functional unit both by its relevance and by its function.

In the process of developing the functional unit taxonomy, we learned that a set of 41 functional units exists within psychology journal articles, which can be classified into three categories by level of relevance for a particular information use task.  This taxonomy provided guidance in the design of the prototype journal system toward facilitating information use by promoting the functional units strongly related to a particular task.
There were two ways to achieve this purpose: a functional unit selector that suggests relevant information for a particular task based on its relevance to that task, and a functional unit indicator that simply indicates the functions of information units and lets the reader decide on their relevance.  All forty-one functional units presented in Chapter 4 (Table 4.3) were used in the functional unit indicator, while the functional units in the three categories (Table 4.12) were used in the functional unit selector.

Based on the taxonomy created in Phase I of the research, a prototype journal system was created using documents structured with XML (Extensible Markup Language) elements.  For the journal articles used in the systems, a DTD was first created, the XML element set was applied to the articles, and XSLT was used to present the articles.  JavaScript was also employed to produce certain dynamic effects such as the highlight on-off function.  Appendix 6 shows the DTD and a sample XML document.

5.2.2 Interface Design
In order to evaluate the use of functional units by the prototype system, two different presentations were implemented to enable a comparison.  In the baseline system (see Figure 5.1), an article was presented in a typical HTML-style presentation, whereas in the experimental system (see Figure 5.2), the functional unit taxonomy was incorporated.  The interface functionalities consist of functional units categorized in three boxes in the right pane, with a toggle on-off button in each box to highlight relevant paragraphs.  In the left margin are labels beside each paragraph to indicate its function(s).  Both the experimental and the baseline system incorporate a small window that pops up in the upper right corner to provide the task instructions.
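To illustrate the kind of markup involved, the following is a minimal sketch of an XML fragment of the sort such a DTD might define.  The element and attribute names here are hypothetical and are not the actual element set shown in Appendix 6:

```xml
<!-- Hypothetical sketch only: one paragraph tagged with its component
     and functional unit labels (not the actual Appendix 6 element set) -->
<article>
  <component name="Methods">
    <paragraph number="12" total="26">
      <functionalUnit label="describe tasks"/>
      <functionalUnit label="outline experimental procedures"/>
      <text>Participants completed a recognition task in which ...</text>
    </paragraph>
  </component>
</article>
```

An XSLT stylesheet can then render the functional unit labels into the left-margin indicator, while JavaScript toggles the highlighting of paragraphs whose labels appear in a given selector box.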
Figure 5.1: Baseline interface

Figure 5.2: Experimental interface
(1) Relevant functional units categorized into three boxes
(2) Functional unit titles
(3) Toggle on-off button
(4) Paragraph numbers and labels
(5) Highlighted paragraphs

The major design issues were how to present an article and how to make the relevant parts of the article salient.  Design decisions took into consideration the aim of facilitating reading without interfering with natural reading behaviour.  Thus the whole article, rather than just the relevant text snippets, was presented, and the article appeared in its original format with all headings and components, with the added functionality placed in panes adjacent to the article text.  The right pane holds the functional unit selector, presented like the tab dividers of a binder, while the left pane holds the functional unit indicator, presented like the line numbers and subheadings on a paper document.

As shown in Figure 5.2, the functional unit selector in the right pane contains three boxes for the functional units in the three categories: the primary functional units are listed under "Top Hits", related functional units in the same component are presented under "Next Best Hits", and additional related functional units in other components are presented under "More".  The labels "Top Hits", "Next Best Hits" and "More" were intended to indicate the varying usefulness of the functional units in the three groups.  The task-related functional units in the three boxes vary according to the information use task; the relationships are based on the task-centered functional unit taxonomy presented in Table 4.12.  Take the task "Learn how to" as an instance.  In the first box, "Top Hits", are the functional unit labels "tasks" and "experimental procedures" from the Methods component; in the second box, "Next Best Hits", are other related functional units in the Methods component, "justify methods", "preview methods" and "participants"; in the third box, "More",
there are additional related functional units from the other three components: "summarize methods" in the Introduction, "describe analysis conducted" in the Results, and "evaluate methodology" in the Discussion.

For each box there is a toggle on-off button labelled "Turn Highlight ON".  Once it is clicked, the paragraph numbers of those paragraphs whose functions are listed in the box are highlighted in the same colour as the box, and the reader is taken to the first highlighted paragraph.  The button then changes to "Turn Highlight OFF", and the reader can turn off the highlighting with one more click.  Besides the toggle on-off button, each box lists the functional unit titles and the titles of the components in which they occur, so that readers have an idea of what relevant content is available.

As shown in Figure 5.2, the functional unit indicator in the left pane shows paragraph numbers and functional unit label(s) for each paragraph.  Each paragraph is numbered sequentially and is labelled with at least one and at most two functions, the latter when both functions are equally significant.  Paragraph numbers are shown as XX (current paragraph number) of YY (total number of paragraphs); the intent is to give readers a sense of the size of the article and their location within it.  Each functional unit label in brackets indicates a distinct function of that paragraph.  While the interface is designed to signal the relevant content, the whole article is presented to ensure its coherence for comprehension.  The left margin is used to signal the relevant content because reformatting the content to insert other types of cues might cause confusion about whether this was done by the author or by the system.

5.2.3 Content
This study used five psychology journal research articles.  A different journal article was used for each task to avoid priming from prior reading.
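The highlight toggle described in Section 5.2.2 amounts to a small piece of state logic.  A minimal sketch, assuming a simple in-memory model of a selector box (the names here are illustrative, not the prototype's actual JavaScript):

```javascript
// Sketch of the "Turn Highlight ON"/"Turn Highlight OFF" toggle: switching on
// highlights every paragraph whose function is listed in the box and returns
// the first highlighted paragraph (the scroll target); switching off clears it.
function toggleHighlight(box) {
  const turningOn = box.buttonLabel === 'Turn Highlight ON';
  box.buttonLabel = turningOn ? 'Turn Highlight OFF' : 'Turn Highlight ON';
  box.paragraphs.forEach(p => { p.highlighted = turningOn; });
  return turningOn ? box.paragraphs[0] : null;
}

const box = {
  buttonLabel: 'Turn Highlight ON',
  paragraphs: [{ id: 12, highlighted: false }, { id: 14, highlighted: false }],
};
const target = toggleHighlight(box); // scroll to paragraph 12; button now reads OFF
```

In the actual interface the same state change would additionally recolour the paragraph numbers and scroll the article view to the returned paragraph.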
Five journal articles were selected from the Journal of Experimental Psychology: General; either the first or the last research article per issue was randomly selected from the first three issues of 2009.  The Journal of Experimental Psychology: General was recommended by two expert users of psychology journals and was used because it publishes articles dealing with everyday-life phenomena through the lens of psychology.  These articles were deemed suitable for most participants, who were undergraduate psychology students.  They all had Flesch reading ease scores in the range of 33-43 and Flesch-Kincaid grade scores in the range of 13-14 (see Appendix 2), which indicates that they should be understandable to university students.  One additional article was used for a practice task; this was the first of the twelve sample articles used in Phase I of the study.

All constituent parts of the original articles were kept, including title, author name(s), author affiliation(s), abstract, notes, and references.  To ensure the user's focus would be on the texts within the IMRD components, figures and tables were edited so that their captions stayed in place, but the complete figures or tables were linked to rather than embedded in the text.  The format (i.e., font size, font style, font weight) of all headings and subheadings was kept intact.

5.3 PARTICIPANTS AND RECRUITMENT
An email advertisement (see Appendix 7) was sent to the Department of Psychology at the University of British Columbia to recruit participants.  Thirty 3rd- and 4th-year undergraduate students majoring in psychology participated in the study.  Since domain expertise was considered the most important factor in reading scholarly articles for the purposes of this study, the participants were recruited from amongst students with a similar academic background who, as a group, were assumed to have a similar level of domain expertise.
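For reference, the readability scores cited in Section 5.2.3 follow the standard Flesch formulas, which depend only on word, sentence, and syllable counts; a small sketch with illustrative counts:

```javascript
// Standard Flesch reading ease and Flesch-Kincaid grade level formulas.
// The study's articles scored roughly 33-43 (ease) and 13-14 (grade).
function fleschScores(words, sentences, syllables) {
  const wordsPerSentence = words / sentences;
  const syllablesPerWord = syllables / words;
  return {
    readingEase: 206.835 - 1.015 * wordsPerSentence - 84.6 * syllablesPerWord,
    kincaidGrade: 0.39 * wordsPerSentence + 11.8 * syllablesPerWord - 15.59,
  };
}

// Illustrative counts, not taken from the study's articles:
const scores = fleschScores(1000, 40, 1700); // ease ≈ 37.6, grade ≈ 14.2
```

Counts in this range produce scores comparable to those reported for the selected articles, i.e., dense but readable academic prose.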
Each participant was given an honorarium of $20 upon completion of the session.

Of the study participants, twenty-eight were in the age range 18 to 25, one was in the range 26 to 35, and one was in the range 36 to 45.  Twenty-three were female and seven were male.  To learn about participants' journal use experience, they were asked three questions: what purposes they had for using journal articles, how many years they had been using journal articles, and how often they used journal articles.  As shown in Table 5.3, the most common purpose was to "refer to arguments supporting your point", followed by "refer to specific factual information" and "learn background information in a new area".  The other two purposes, "keep current in your area of research" and "learn how to do something", received very low counts.  Only five participants had used journal articles for as long as five years, and half of the thirty participants used journal articles no more than once a month.  These data indicate that the participants, as undergraduate students, did not have extensive experience using journal articles.

Two questions were asked that aimed at learning participants' expertise in using electronic journals.  In the first question participants were asked to indicate their level of expertise in the psychology domain, with respect to the psychology journal article genre, and in searching electronic journal articles, on a 7-point scale (1 = Novice and 7 = Expert).  In the second question they were asked to indicate how frequently they applied the listed article cues, article structure and interface functionality to identify useful information within an electronic journal article, on a 7-point scale (1 = Never and 7 = Always).
Table 5.3: Journal use experience and expertise

Journal use experience
  Purposes of using journal articles (more than one response was possible):
  - refer to arguments supporting your point: 26
  - refer to specific factual information: 19
  - learn background information in a new area: 15
  - keep current in your area of research: 4
  - learn how to do something: 1
  Years of using journal articles: 5 years: 5; 4 years: 12; 3 years: 7; 2 years: 3; 1 year: 3
  Frequency of using journal articles: 4-5 times a week: 1; 2-3 times a week: 4; once a week: 4; 2-3 times a month: 6; once a month: 5; less than once a month: 10

Journal use expertise, Mean (SD)
  Level of expertise (1 = Novice, 7 = Expert):
  - finding information in electronic journal articles: 4.10 (1.32)
  - the psychology discipline: 4.07 (1.23)
  - the structure of research articles in psychology journals: 3.87 (1.43)
  Items used to identify useful information in an article (1 = Never, 7 = Always):
  - article cues: abstract 6.34 (.77); section headings and subheadings 5.83 (1.37); first paragraph of a section 5.50 (1.20); keywords 5.43 (1.52); diagrams/tables 4.10 (1.47)
  - article structure: awareness of type of content in a section before actually reading 5.73 (1.28)
  - interface functionality: browser "find" function 4.63 (1.88); table of contents link 4.27 (1.86); links embedded in article 3.43 (1.77)

As shown in Table 5.3, the means were all above the midpoint for participants' genre expertise, domain expertise, and search expertise.  They were low compared to the means for use of article structure, although the scales measure different things.  Participants were allowed to note cues they frequently used that were not on the list.  Five participants reported such cues, and four of them described a combined use, e.g., "I will often read the abstract, the hypothesis, the methods and actually skip the results and go straight to the discussion portion of the article", "read only the 'abstract', 'intro' + 'conclusion'", "discussion of findings and introduction", and "number of keywords that repeats in an article; author names; word definitions".  Overall, user feedback on journal article use suggests that participants had a very general sense of article structure.

5.4 EXPERIMENTAL TASKS
A within-subjects design was used in which each participant used both systems and completed five experimental tasks, one corresponding to each information use task type.  Each experimental task involved reading and interacting with one journal article and answering a question by highlighting relevant text and writing a brief response.  The order in which systems were used and tasks were performed was randomized according to a Latin square design, as shown in Table 5.4.  Here, "E" stands for the experimental system and "B" for the baseline system; "T" stands for task and "A" for article.  Each experimental reading task was performed by 30 participants, 15 times with each system.  For each round of ten participants, each task in either system occurred once in the 1st, 2nd, 3rd, 4th, and 5th place in the sequence.
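The cyclic pattern in Table 5.4 can be generated mechanically.  A sketch of how rows 1-5 could be produced from the first row's task order; rows 6-10 follow the same scheme with the base order [T3, T4, T2, T5, T1] and the E/B pattern reversed (these regularities are read off the table, not the researcher's documented procedure):

```javascript
// Each successive row shifts every task number of the base order by one
// (mod 5); the system assignment (E or B) is fixed per position in the row.
function latinRows(base, systems) {
  const n = base.length;
  return Array.from({ length: n }, (_, r) =>
    base.map((t, i) => `${systems[i]}(T${((t - 1 + r) % n) + 1})`)
  );
}

const rows = latinRows([1, 5, 2, 4, 3], ['E', 'E', 'B', 'B', 'B']);
// rows[0] -> ['E(T1)', 'E(T5)', 'B(T2)', 'B(T4)', 'B(T3)']  (participant #1)
```

Since task Ti is always paired with article Ai, the article assignment follows directly from the task number.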
Table 5.4: Participant task assignment

#1   E(T1/A1) E(T5/A5) B(T2/A2) B(T4/A4) B(T3/A3)
#2   E(T2/A2) E(T1/A1) B(T3/A3) B(T5/A5) B(T4/A4)
#3   E(T3/A3) E(T2/A2) B(T4/A4) B(T1/A1) B(T5/A5)
#4   E(T4/A4) E(T3/A3) B(T5/A5) B(T2/A2) B(T1/A1)
#5   E(T5/A5) E(T4/A4) B(T1/A1) B(T3/A3) B(T2/A2)
#6   B(T3/A3) B(T4/A4) E(T2/A2) E(T5/A5) E(T1/A1)
#7   B(T4/A4) B(T5/A5) E(T3/A3) E(T1/A1) E(T2/A2)
#8   B(T5/A5) B(T1/A1) E(T4/A4) E(T2/A2) E(T3/A3)
#9   B(T1/A1) B(T2/A2) E(T5/A5) E(T3/A3) E(T4/A4)
#10  B(T2/A2) B(T3/A3) E(T1/A1) E(T4/A4) E(T5/A5)
(Participants #11-#20 and #21-#30 repeat the assignments of #1-#10.)

In this study, the five experimental tasks correspond to the five information use task types, each of which was represented by a particular question posed to participants.  The questions were worded to be easy to understand and to avoid providing any clues about the answer.  Table 5.5 illustrates the relationship between the question asked about each article and the respective information use task type.
In the following chapters, the five information use task types, "Learn about background", "Refer to facts", "Refer to arguments", "Learn how to", and "Keeping up", refer to the experimental tasks in the experiments.

Table 5.5: Relations between task types and questions

#1 Learn about background (to get to know a new area on which the user is embarking; Introduction)
   Question: What is known in the area about the role of comparison in human information processing?
#2 Refer to facts (to consult specific factual information, e.g., data, phenomena; Results)
   Question: What data further supports the claim that recognition of song fragments is influenced by song familiarity?
#3 Refer to arguments (to consult arguments, ideas or suggestions supporting a point made by the user; Discussion)
   Question: Why is agenda-based regulation considered to have greater explanatory power as compared to current theories on allocating study time?
#4 Learn how to (to learn how to do something, e.g., an operation or procedure; Methods)
   Question: How are the contextual factors controlled in examining the preference for gains/losses immediately or later in different domains?
#5 Keeping up (to keep current with articles in the user's area of research; Introduction, Discussion)
   Question: What is reported new regarding the different changes in flashbulb memories and event memories over time?

5.5 MEASURES
According to Sperber and Wilson (1995), optimal relevance is determined by the interplay between two variables: effects and effort.  Effects and effort correspond to effectiveness and efficiency, two common measures of relevance in evaluating information systems.  Therefore, this study has a range of dependent variables in two categories: effectiveness and efficiency.  Here, the effectiveness category includes measures related to the output, while the efficiency category includes measures related to the input.  Effectiveness and efficiency are measured through the reading process as well as the reading outcomes.
The outcomes were further examined from both subjective perceptions and objective performance with respect to task completion.

The aim of the experimental study was to test the utilization of functional units for the purpose of information use of scholarly journal articles.  Thus the measures cover different aspects of information use: navigating, close reading, comprehending, and using information.  Instead of employing a general task scenario, in this study the participants were required to answer a specific comprehension question for each article.  To answer the comprehension question, participants needed to locate information, to read and comprehend the relevant content, to integrate bits and pieces of information across the article, and finally to form their own thoughts in composing an answer.

There is a view that usefulness is a better criterion than relevance, the traditional criterion, for evaluating an interactive information retrieval system, since the system can be evaluated by how well the task has been accomplished after interacting with the system (Belkin, 2010).  Since this research employs Sperber and Wilson's understanding of relevance from the cognitive perspective, the extent of changes in a reader's cognitive state of knowledge can be approximated by the answers to the comprehension questions.  This study focuses on the cognitive effects of signaling relevant passages rather than the pragmatic effects of system use.  However, the cognitive effects of signaling relevant passages can be evaluated through the observable outcome in the usefulness of functional units, since answering different questions requires different types of information.  Here, the task outcome lies in the quality of the answer to the question, and it is through this question answering that changes in the knowledge base are examined.
5.6 INSTRUMENTS

Cognitive relevance is highly subjective and personal (Cosijn & Ingwersen, 2000), and cognitive changes can hardly be understood from quantitative data alone.  A mixed methods approach was used for data collection since mixed methods can provide more convincing results (Creswell, 2009).  The combination of techniques affords a rigorous approach.  The utilization of functional units was examined for both the reading outcomes and the reading process, and the results reflect both subjective perceptions and objective performance, hence offering complementary data for greater validity in the findings.  Both quantitative and qualitative data were collected from post-task and post-study questionnaires, system logs and interview transcripts.

Post-task questionnaire (Appendix 9): All questions employed a 7-point Likert scale to record responses.  The first two questions examined participants' perceptions of the difficulty of the comprehension question just completed and their familiarity with the article topic.  Questions 3-7 collected participants' perceptions of task completion: questions 3 and 4 asked for perceptions of the amount of relevant text read and comprehended, and questions 5, 6 and 7 asked participants to rate their levels of satisfaction, efficiency and confidence in completing the task.  Under the experimental condition, there were three more questions (questions 8-10) which addressed specific features of the prototype interface.

Post-study questionnaire (Appendix 10): Questions 1-8 collected background information from participants.  The questionnaire consisted of three parts: demographics, journal use experience, and journal use expertise.  Demographics questions were about academic status, age, and gender (questions 1-3).  Journal use experience questions included major purposes, years and frequency of using journal articles (questions 4-6).
Journal use expertise questions included self-assessment of expertise with domain, genre, and search on a 7-point Likert scale (question 7), and self-reported items that were frequently applied in identifying useful information on a 7-point Likert scale (question 8).  The last two questions (questions 9-10) were open-ended questions to obtain feedback on the prototype system.

Answer box: To answer the comprehension question for each article, the participants were asked to type their answers into a text box.  They were further instructed to limit their answers to two to three sentences.  They were provided with the following instructions: "note that you should highlight as much texts as you can, and each highlighting should be as specific as possible.  Finally Click Here and summarize your answer with 2-3 sentences".

Screen recordings: Participants' interactions with articles in the two conditions were also recorded by Morae, a usability testing software package.  Besides logging time, and mouse and key events, screen capture videos were recorded and replayed for the purposes of retrospective interviews.  The participants' verbal protocols were also audio-recorded by Morae.

Verbal protocols: The concurrent think-aloud method has drawbacks, the major one being its obtrusiveness.  The retrospective interview has been shown to be an appropriate method for providing rich qualitative data about participants' reflections on task completion without interfering with task performance (Guan et al., 2006).  Although retrospective think-aloud is limited in indicating what is attended to contemporaneously with specific behaviour in task performance, this technique was employed since speed and accuracy are important measures in this study, and therefore completing tasks without distraction is the first priority.
When participants completed all five tasks, the recordings of the fourth task and the second task (one in the experimental condition, and one in the baseline condition) were replayed.  While viewing the replays, participants were asked to narrate what they were thinking.  If participants were quiet for a while, they were prompted to say what they were thinking at that point.

5.7 EXPERIMENTAL PROCEDURES

From late January to mid-March in 2010, the prototype system evaluation was conducted in an office at the School of Library, Archival and Information Studies at agreed upon times.  The participants came for the experiment one by one and the researcher was present during the entire process.  First, the participant signed the consent form (see Appendix 8), which outlined the objectives and procedures of the experiment and other information required for informed consent, as specified by the University's Behavioural Research Ethics Board.  Concerning procedures, participants were told not to use the "Find" function while reading to ensure that they used only the functionality provided by the reading system.  Though the "Find" function can help locate a keyword instantly, its use works against the idea of the creation of a meaningful network-like structure as occurs with the use of functional units.  After looking at the instructions on the welcome screen (Figure 5.3), the participant viewed the tutorial explaining the interface features on both panes, walked through a task, and then practiced using the functionality of the system with the practice article in the experimental condition.

Figure 5.3: Welcome screen

Each participant completed five experimental tasks in sequence within 90 minutes, and was allowed to allocate an appropriate time for each task.  None of the experimental tasks required a complete reading of the entire article.
For each task, the participant read a psychology journal research article, which could be in one of two presentations, experimental or baseline, and answered a comprehension question.  The participant read the article with that question in mind.  Then the participant answered the question by highlighting the relevant pieces of text from the article and by summarizing the answer in two or three sentences.  They were able to view the article while answering the question.  A pop-up window (Figure 5.4) provided the instructions for the completion of each task.

Figure 5.4: Instruction window

After answering the question, the participant completed a post-task questionnaire.  After completing the post-task questionnaire for the last article, the participant filled out a post-study questionnaire where they provided background information and general comments.  In the last 30 minutes, while reviewing a replay of two task sessions, the participant was asked to narrate his/her thoughts in interacting with the system and to answer some follow-up questions.  The data was logged by Morae software for later analysis.

5.8 PILOT STUDY

Prior to the test, a pilot study was conducted to examine the experimental procedures.  Five students (4 Masters students, 1 PhD student) from the School of Library, Archival and Information Studies (SLAIS), and two undergraduate students from the Department of Psychology participated in the pilot study.  Each SLAIS student did two of the five tasks, while each psychology student did all five tasks.  From the results of the pilot study, adjustments were made in the selection of articles for ease of reading, and in the wording of the comprehension questions.

5.9 DATA ANALYSIS

Quantitative data was input into SPSS 17.0 for statistical analysis.  Box-and-whisker plots were used to examine the degree of variability within the dataset and to identify the outliers (Huck, 2007).
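Box-and-whisker plots flag as outliers any values lying more than 1.5 interquartile ranges beyond the quartiles.  A small illustrative sketch of that rule follows (this is not the SPSS implementation; quartiles here are computed as Tukey hinges, and other packages interpolate differently):

```python
def box_plot_outliers(data):
    """Flag outliers with the 1.5 * IQR whisker rule used by
    box-and-whisker plots.  Quartiles are Tukey hinges: medians of the
    lower and upper halves of the sorted data."""
    xs = sorted(data)
    n = len(xs)
    half = n // 2
    lower, upper = xs[:half], xs[half + (n % 2):]  # exclude the middle value if n is odd

    def median(v):
        m = len(v) // 2
        return v[m] if len(v) % 2 else (v[m - 1] + v[m]) / 2

    q1, q3 = median(lower), median(upper)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr      # whisker fences
    return [x for x in xs if x < lo or x > hi]
```

Values falling outside the whisker fences are the candidates for removal before inferential analysis.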
The outliers were removed from the dataset before conducting the inferential statistical analysis.  Due to the ordinal nature of much of the data (except completion time), non-parametric techniques, the Wilcoxon signed rank test and the Friedman test, were used in the analysis (Hinton et al., 2004).  The Wilcoxon signed rank test and the Friedman test are the non-parametric equivalents of the paired-samples t test and the one-way repeated-measures ANOVA.  The level of significance was set to p<.05.

Qualitative data (open-ended questions on the post-study questionnaire and retrospective interview transcripts) was analyzed using simple content analysis (Krippendorff, 2004).  A verbatim transcript was made from the Morae recordings.  Answers to open-ended questions and interview transcripts were analyzed in the same way.  I read through the printed answers and the interview transcripts several times to identify a set of initial categories based on the manifest content of the information.  I categorized the responses as they were encountered.  I repeated this process until I was satisfied that all pertinent responses were under an appropriate category.

5.10 SUMMARY

First, an experimental system was built to represent the relationships between a set of functional units and five information use tasks.  Next, to evaluate the usefulness of signaling functional units as implemented in the journal system, an experimental study was conducted to compare participant perceptions and task performance across two systems, an experimental system and a baseline system.  Each of thirty participants, who were psychology undergraduate students, did five tasks within 90 minutes.  For each task, each participant read a psychology journal research article, which was in one of two interfaces, and answered a comprehension question by highlighting relevant pieces of text and then writing a short summary answer.
The collected data comprised the questionnaires completed by participants after each task and at the end of the experiment, the logs of participant interactions with the system, and the verbal protocols provided by participants while reviewing a replay of the session.  Content analysis was used to analyze the qualitative data collected from the retrospective interviews, from the open-ended questionnaire questions, and from the screen captures.  Statistical analysis was carried out for the quantitative data collected from responses to the 7-point scale questions, the logs containing the time-based data, and the amount of highlighting done by the participants.  The quality of the answers to comprehension questions was assessed first by content analysis and then by statistical analysis.  Reading outcomes and process metrics associated with effectiveness and efficiency were examined.  The experimental results will be reported in the next chapter.

6 EVALUATION OF THE UTILITY OF FUNCTIONAL UNITS IN A PROTOTYPE SYSTEM: RESULTS

6.1 OVERVIEW

This chapter presents the results of several analyses performed on data collected through the application of the methods and procedures outlined in Chapter 5.  The purpose of these analyses is two-fold.  First, they attempt to examine the reading outcomes: whether the signaling of functional units enhances reading effectiveness or reading efficiency, in order to compare and contrast them with the results in the baseline condition.  Second, they attempt to interpret the reading process: how signaling functional units helps readers to complete tasks more effectively or more efficiently.  This chapter begins with the findings on task ease and topic familiarity in regard to the five experimental tasks carried out in each of the baseline and prototype systems.  The main findings are then presented in two major sections to examine the impact of signaling functional units on reading effectiveness and efficiency.
Results are reported in terms of the reading outcomes and reading process.  The results of reading outcomes are presented first by system, and then by task.

6.2 TASK EASE AND TOPIC FAMILIARITY

The results on task ease and topic familiarity are presented first since it is important to establish whether or not there was significant variation among the five experimental tasks.  To examine whether the experimental system was perceived as able to reduce task difficulty and to increase knowledge, task ease and topic familiarity were examined through the first two questions on the post-task questionnaires (Q1-Q2):

Question 1: The comprehension question you have just answered is: (1=Not At All Easy, 7=Very Easy)
Question 2: The topic of the article you have just read is: (1=Not At All Familiar, 7=Very Familiar)

It is of interest to see whether the participants' perceptions of task ease and topic familiarity differed between the five experimental tasks in each system.  Since the participants responded after completing a task in different systems, it was necessary to see whether differences existed between each experimental task for the two systems.  Additionally, one's subject knowledge might affect how easy he/she perceived the task to be, thus it was also necessary to see whether task ease and topic familiarity were correlated.  Figure 6.1 presents the mean scores in perceived task ease between the five experimental tasks and between the two systems.  Results of a Friedman test revealed a significant effect across systems, χ²=18.354, p=.001, and in the baseline condition, χ²=17.351, p=.002.  In order to test whether differences in means within groups were significant, a post-hoc test with Wilcoxon signed ranks tests was used.  There were similar results across systems and in the baseline system: the "Learn how to" question was significantly more difficult than the questions for the other four experimental tasks.
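For illustration only (the analyses in this chapter were run in SPSS 17.0), the Friedman statistic reported above can be computed from within-participant rank sums; a minimal sketch, with tied values given the average of their ranks:

```python
def friedman_chi2(rows):
    """Friedman chi-square for k related samples, given one row of k
    condition scores per participant.  Ties within a row receive the
    average of the tied ranks."""
    n, k = len(rows), len(rows[0])
    rank_sums = [0.0] * k
    for row in rows:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            # extend j over a run of tied values
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1            # average rank (ranks are 1-based)
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # chi-square = 12 / (n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    return 12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1)) - 3.0 * n * (k + 1)
```

Applied to the study's data (k=5 tasks, n=30 participants per system), the same formula would yield the chi-square values reported in this section.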
In the experimental system, the baseline system, and across both systems, the mean task ease for "Learn how to" was the lowest, "Refer to facts" was the second lowest and "Refer to arguments" was the third lowest.  "Learn how to" was significantly more difficult than the other four experimental tasks in the baseline system, and more difficult when data was analysed together across both systems, but not so when using the experimental system alone.

Figure 6.1: Means of task ease (higher = easier)

Figure 6.2 presents the mean scores in topic familiarity between the five experimental tasks.  A Friedman test was also conducted to examine the differences in topic familiarity across the five experimental tasks and by system.  Results revealed significant differences in topic familiarity across systems, χ²=27.008, p<.001, and in the experimental condition, χ²=20.709, p<.001.  Wilcoxon signed ranks tests were conducted as post-hoc tests to examine differences in means within groups.  Overall they show that the topics of the "Keeping up" and "Learn about background" articles were significantly more familiar than the "Refer to facts", "Refer to arguments", or "Learn how to" articles.  In the experimental system, the "Keeping up" topic was also significantly more familiar than the topics for these three experimental tasks, while "Learn about background" was only significantly more familiar than "Refer to facts".  In either of the experimental and baseline systems, or across systems, the mean topic familiarity for "Refer to facts" was the lowest, and "Refer to arguments" and "Learn how to" were either the second or the third lowest.  The topic for the task "Refer to facts" was significantly less familiar than "Keeping up" and "Learn about background" across systems and in the experimental system, but not significantly so in the baseline system.
Figure 6.2: Means of topic familiarity (higher = more familiar)

Wilcoxon signed ranks tests were used to examine the differences between the two systems for task ease and topic familiarity.  No significant differences were found using this test, indicating that across all tasks, the perceptions of task ease and topic familiarity did not differ between the experimental system and the baseline system.  To see whether the perceived task ease was affected by subject knowledge, Spearman's rank order correlation coefficient was used to examine whether a relationship existed between task ease and topic familiarity.  A correlation was only found for two tasks performed while using the experimental system, "Learn about background", rs=.802, p<.001, and "Keeping up", rs=.524, p=.045.

6.3 RESULTS OF READING EFFECTIVENESS

The testing of the utilization of functional units on reading effectiveness was in response to Research Question 4:

RQ 4: Does the signaling of functional units to readers enhance reading effectiveness?
4.1a Does the signaling of functional units help readers to complete tasks more effectively?
4.1b If so, how does the signaling of functional units help readers to complete tasks more effectively?
4.2 Does the impact of functional units on effectiveness vary with reading tasks?

The results below are reported in terms of sub-question 4.1a regarding the reading outcome and sub-question 4.1b regarding the reading process.  Results of sub-question 4.2 are incorporated into the reporting of results for research questions 4.1a and 4.1b.

6.3.1 Results for RQ 4.1a

Two kinds of data were collected to address research question 4.1a: subjective perceptions collected from post-task questionnaires and objective assessment of performance from system logs.  System differences were examined across tasks to obtain an overall understanding of the participants' perceptions and performance of task completion.
System differences were also examined task by task to see the task effect on participant perceptions and performance.  Statistical significance was tested using the non-parametric Wilcoxon signed ranks test.  Results support the following hypotheses regarding effectiveness stated in Section 5.1.  The overall significant differences indicate people were more satisfied with the information obtained, highlighted more relevant text and answered more fully using the experimental system.  The significant differences for particular tasks indicate that people using the experimental system felt they comprehended more relevant text and were more satisfied with the information obtained for "Learn how to", and highlighted more relevant text for "Learn how to" and "Keeping up".  A more detailed analysis of the measures is reported below.

6.3.1.1 Perceptions of Task Completion

Four questions on the post-task questionnaire were used to elicit participant perceptions regarding effectiveness:

Question 3: Of all the text in the article likely to be relevant to the question, how much did you read? (1=Almost None, 7=Almost All)
Question 4: Of all the relevant text that you read, how much did you comprehend? (1=Almost None, 7=Almost All)
Question 5: How satisfied were you with the information you obtained from the relevant text? (1=Not At All Satisfied, 7=Very Satisfied)
Question 7: How confident are you that you fully answered the question with the information you obtained? (1=Not At All Confident, 7=Very Confident)

Table 6.1 shows the results for participant perceptions of effectiveness across five tasks and for each task.  In the following tables, the higher means are shown in bold, and the significant results are shown with an asterisk.  Overall, participants were significantly more satisfied with the information obtained when using the experimental system than when using the baseline system.
The other measures, although not significant, showed a higher mean when using the experimental system, except for the perceived amount of relevant text read.  In particular, for the experimental task "Learn how to", participants felt that they comprehended significantly more of the information and were significantly more satisfied with the information when using the experimental system.  From Table 6.1 we can see that the means were higher when using the experimental system for most experimental tasks, although not significantly so.  However, when using the experimental system the means were lower in perceived amount of relevant text read for "Learn how to", perceived extent of relevant text comprehended for "Refer to facts", satisfaction with information obtained for "Learn about background", and confidence in fully answering the question for "Learn about background" and "Keeping up".

Table 6.1: User perceptions regarding effectiveness
(B = baseline system, E = experimental system; Mean (SD); * = significant at p<.05)
Perceived amount of relevant text read
  All                     B 74 3.53 (1.47)   Z=-.343   p=.732
                          E 74 3.49 (1.40)
  Learn about background  B 14 3.14 (1.23)   Z=-.962   p=.336
                          E 15 3.40 (1.72)
  Refer to facts          B 15 3.47 (1.77)   Z=-.603   p=.547
                          E 15 3.73 (1.33)
  Refer to arguments      B 15 3.67 (1.23)   Z=-.052   p=.959
                          E 15 3.67 (1.54)
  Learn how to            B 15 3.73 (1.53)   Z=-1.439  p=.150
                          E 15 2.93 (1.28)
  Keeping up              B 15 3.60 (1.64)   Z=-.884   p=.377
                          E 14 3.71 (.99)

Perceived extent of relevant text comprehended
  All                     B 72 3.94 (1.34)   Z=-1.815  p=.070
                          E 72 4.36 (1.12)
  Learn about background  B 12 4.25 (.75)    Z=-.499   p=.618
                          E 15 4.47 (1.51)
  Refer to facts          B 15 4.13 (1.46)   Z=-.450   p=.653
                          E 15 3.93 (1.16)
  Refer to arguments      B 15 4.00 (1.31)   Z=-.184   p=.854
                          E 13 4.46 (.88)
  Learn how to            B 15 3.13 (1.36)   Z=-2.555  p=.011*
                          E 14 4.21 (.97)
  Keeping up              B 15 4.27 (1.44)   Z=-1.106  p=.269
                          E 15 4.73 (.88)

Satisfaction with information obtained
  All                     B 73 3.68 (1.48)   Z=-2.263  p=.024*
                          E 71 4.21 (1.21)
  Learn about background  B 13 4.46 (.78)    Z=-.108   p=.914
                          E 14 4.43 (1.34)
  Refer to facts          B 15 3.40 (1.40)   Z=-1.590  p=.112
                          E 13 4.31 (1.03)
  Refer to arguments      B 15 3.93 (1.39)   Z=-.726   p=.468
                          E 14 4.29 (.83)
  Learn how to            B 15 2.47 (1.25)   Z=-2.155  p=.031*
                          E 15 3.73 (1.53)
  Keeping up              B 15 4.27 (1.62)   Z=-.223   p=.824
                          E 15 4.33 (1.18)

Confidence in fully answering question
  All                     B 75 3.32 (1.62)   Z=-.768   p=.443
                          E 75 3.56 (1.54)
  Learn about background  B 15 3.93 (1.58)   Z=-.250   p=.802
                          E 15 3.73 (1.39)
  Refer to facts          B 15 2.73 (1.53)   Z=-1.224  p=.221
                          E 15 3.53 (1.64)
  Refer to arguments      B 15 3.40 (1.40)   Z=-.349   p=.727
                          E 15 3.67 (1.72)
  Learn how to            B 15 2.40 (1.45)   Z=-.977   p=.329
                          E 15 3.07 (1.79)
  Keeping up              B 15 4.13 (1.60)   Z=-.669   p=.503
                          E 15 3.80 (1.21)

6.3.1.2 Task Performance

6.3.1.2.1 Relevant Text Highlighted

Participants were asked to highlight text they considered relevant to answering the assigned question.  Each task is associated with three categories of functional units, as shown in the three boxes on the right pane of the interface.
If the participant highlighted paragraphs with the functions listed in the three boxes, then these were considered to be relevant text.  Since it is hard to determine which part of the text on a screen participants read intensively, highlighting was used to indicate and measure the text that gained focused attention.  Table 6.2 shows the results for the amount of relevant text that was highlighted by participants.  A paragraph was taken as the unit of measurement.  Overall, and particularly for the experimental tasks "Learn how to" and "Keeping up", participants highlighted significantly more relevant text when using the experimental system than the baseline system.  The means were higher when using the experimental system for "Learn about background" and "Refer to arguments", but not for "Refer to facts".

Table 6.2: Relevant text highlighted
(B = baseline system, E = experimental system; * = significant at p<.05)
  All                     B 73 M=3.84 SD=3.48   Z=-2.032  p=.042*
                          E 71 M=4.65 SD=3.01
  Learn about background  B 15 M=3.40 SD=2.35   Z=-1.265  p=.206
                          E 14 M=4.57 SD=2.82
  Refer to facts          B 15 M=7.33 SD=5.04   Z=-1.635  p=.102
                          E 14 M=4.21 SD=3.17
  Refer to arguments      B 14 M=3.57 SD=2.71   Z=-.916   p=.360
                          E 15 M=4.60 SD=2.85
  Learn how to            B 14 M=1.36 SD=1.45   Z=-2.375  p=.018*
                          E 14 M=3.50 SD=2.62
  Keeping up              B 15 M=3.33 SD=1.72   Z=-2.457  p=.014*
                          E 14 M=6.36 SD=3.20

6.3.1.2.2 Quality of Answers

The participants were asked to summarize their answers in two or three sentences (see Figure 6.3).  The quality of answers was assessed in the following way: first, two coders (a psychology student completing her Masters degree at UBC, and the researcher) independently wrote down a list of the major points needed to answer each comprehension question based on information available in the corresponding article.  Then the two coders met to compare the results.  Differences in criteria for a correct answer were examined and reconciled after further discussion.  Based on the agreed answer key (see Appendix 11), the two coders independently assessed all participants'
answers to see what major points were included in each answer.  For all 150 answers, the overall percentage agreement between the coders was 86.67%.  The 13.33% of the answers on which the two coders did not agree were given to a third coder for further independent assessment.  If aspects of a participant's answer were noted as being part of a correct answer by two of the three assessors, these were counted as correct.  For each question, the highest number of major points (as determined by the coders as described above) a participant was able to include was set as the full score, from which the value of each point was derived.  The participant's answer was scored by multiplying the number of included points by the value of each point.  For example, though a perfect answer for the task "Learn about background" should include six major points, the best answer provided by participants included three points.  Thus, to normalize the responses, three points was set as the full score, and the value of each point was 3.33.  If a participant only addressed two points in her answer, her score was 6.66, and so on.

Table 6.3 shows the mean points and mean scores for all and each of the experimental tasks presented to the participants.  Overall the participants included significantly more major points and scored significantly higher for their answers when using the experimental system than when using the baseline system.  For each experimental task, the means for both points and scores were higher for participants using the experimental system, although the differences were not statistically significant.

Table 6.3: Quality of answer
(mean and standard deviation reported for points and scores)
  All (points)            B 68 M=.53 SD=.78     Z=-2.481  p=.013*
                          E 71 M=.77 SD=.90
  All (scores)            B 68 M=1.98 SD=2.80   Z=-3.056  p=.002*
                          E 71 M=3.47 SD=3.92
  Learn about background  B 14 points M=.71 SD=.73, scores M=2.38 SD=2.42    Z=-1.508  p=.132
                          E 14 points M=1.00 SD=1.04, scores M=3.33 SD=3.46
  Refer to facts          B 15 points M=1.13 SD=1.13, scores M=3.77 SD=3.75  Z=-.212   p=.832
                          E 15 points M=1.20 SD=1.08, scores M=4.00 SD=3.60
  Refer to arguments      B 13 points M=.69 SD=.48, scores M=3.46 SD=2.40    Z=-1.897  p=.058
                          E 15 points M=1.20 SD=.68, scores M=6.00 SD=3.38
  Learn how to            B 14 points M=.00 SD=.00, scores M=.00 SD=.00      Z=-1.732  p=.083
                          E 14 points M=.21 SD=.43, scores M=2.14 SD=4.26
  Keeping up              B 12 points M=.00 SD=.00, scores M=.00 SD=.00      Z=-1.414  p=.157
                          E 13 points M=.15 SD=.38, scores M=1.54 SD=3.76

Figure 6.3: A screenshot of highlights and answer in the experimental system

6.3.2 Results for RQ 4.1b

How the signaling of functional units influences reading effectiveness was investigated through an analysis of participants' interactions with the functional units and through participant evaluations of the interface functionalities.  The functionalities, including the functional unit selector and the functional unit indicator, were examined via three questions on the post-task questionnaires and the open-ended questions on the post-study questionnaire.  The use of functional units was examined using data collected from the system logs and the interview transcripts.

6.3.2.1 User Evaluations of Interface Functionalities

After completing a task using the experimental system, participants were presented with three questions on the post-task questionnaires (Q8-Q10) that asked them to evaluate the usefulness of the interface functionalities:

Question 8: How useful were the paragraph labels on the left margin in helping you answer the question? (1=Not At All Useful, 7=Highly Useful)
Question 9: How useful was the "Turn Highlight ON" function on the right pane in helping you answer the question? (1=Not At All Useful, 7=Highly Useful)
Question 10: How useful were the labels listed under Top Hits/Next Best Hits/More on the right pane in helping you answer the question?
(1=Not At All Useful, 7=Highly Useful)

A Friedman test revealed significant differences between the three major functionalities, χ²=12.827, p=.002.  The Wilcoxon signed ranks tests show the labels listed under Top Hits/Next Best Hits/More (M=4.23, SD=1.67) were significantly less useful than the "Turn Highlight ON" function on the right pane (M=4.82, SD=1.46), Z=-3.465, p=.001, or the paragraph labels on the left margin (M=4.58, SD=1.63), Z=-2.609, p=.009.  No significant differences were found for the usefulness of these functionalities across the five tasks using the Friedman test.

Open-ended questions on the post-study questionnaire asked participants what they liked and did not like about the prototype system.  These were intended to complement the responses on the post-task questionnaires by providing reasons for why they did or did not like a particular feature.

The participants generally agreed that the experimental system enabled them to recognize information more easily and to locate information faster.  Some representative comments point to these benefits:

- "I liked that it allowed me to jump from paragraph to paragraph and read information rather than skimming an entire lengthy article looking for information" (P20)
- "it was able to identify relevant information, as well as target my attention, so that I know what information was relevant for the question posed.  It also allowed me to skim through the article faster" (P21)
- "it can be especially difficult to read a long article so this system helps the reader focus on what they wish to find out about the research" (P30)

Participants seemed to like having the information divided up into the three categories (Top Hits/Next Best Hits/More), and organized by relevance and function.  One participant noted, "I liked the highlighting functions that organized the information by relevance (most to least)"
(P23), and another noted that the "prototype system helped in dividing the article up in sections so it was easy to find summary vs. methods" (P14).  For the highlight function, aspects participants liked were that it highlighted the key points and relevant information and that a click took them to a particular section directly.  Two participants mentioned they liked the highlight function because of the way it highlighted: one participant said the mere act of highlighting strengthened their memory of the information, and another participant appreciated that the toggle on-off button enabled the highlighting for functional units in different categories to stay on simultaneously.  In the retrospective interview one participant said that she liked the functionality because it suited her reading behavior: "I found the highlighting function to be very, very useful because when I read textbooks I like, if I just read by myself without any manipulations of the text, I find I have a hard time remember things.  So all my texts they have like numerous colors of highlighting, with each color representing one thing and another color representing another thing, and dark color usually represent more important things, yellow is more of just follow-up still important points" (P19).

For the paragraph labels, participants liked the two or three word summary giving an idea of what the paragraph was about.  Participants also liked the combination of paragraph labels and highlight buttons, identified by colored labels "for easy paragraph search, and easy to see because it is color coded" (P10), or by words in labels "that was helpful to navigate (as a second step after clicking on the link on the right)" (P9).  As to the functional unit titles in the three boxes, they considered them helpful in deciding where one needed to look to find relevant information.
Several participants, despite their expressed appreciation of the experimental system, noted that the extracted information was out of context:
- "because the program selects relevant statements, it sometimes felt that they were out of context a bit, i.e., that the background paragraphs important for the selected statement were not highlighted" (P21)
- "skip the top information and go to the most relevant information caused me to read less background information on the study thus hindering my comprehension" (P29)

Another thing that participants disliked about the functionalities was that they did not provide specific enough information.  They wished the three categories (Top Hits/Next Best Hits/More) could be further refined:
- "it was helpful as a category system but was not narrow enough.  It would have been helpful to have more than 3 categories or sub categories" (P24)

They thought the highlighted area was a little too broad, and instead they would like to see more specific information highlighted in the text.  Several participants addressed this issue:
- "if BEST HITS has both previous research and gaps in previous research, I would like to be able to select just one of them, and not all of them" (P9)
- "focus on specific research findings rather than highlight the ENTIRE findings paragraph" (P11)
- "include comparisons of different studies (ie in a different color)" (P14)

Additional criticisms were that too much text in the three boxes made them nebulous and confusing, and that the headings of each box (e.g., Best Hits, etc.) were not meaningful in the context of the article.  The participants were also asked to make general comments or suggestions about the prototype system so as to see what should be improved in the design and implementation.
Consistent with what they expressed in likes and dislikes, they considered the prototype system "a wonderful tool to keep your reading organized, especially when there is a lot of information presented in the article" (P15), and "an effective method to narrow down on desired items and understand the information at hand" (P14).  A few system enhancements were suggested as follows:
- signal specific information within a paragraph;
- signal the paragraphs corresponding to each functional unit label under Best Hits/Next Best Hits/More;
- a new tab/category that allows the reader to obtain information on keywords/background on the subject being talked about in the paper;
- another window or a side bar that saves what is highlighted, or helps summarize what is highlighted in one paragraph.

6.3.2.2 Use of Functional Units

6.3.2.2.1 Use of Functional Units in Three Categories

One aspect of how participants made use of functional units was examined by considering how they used the functional units in the three categories (primary, related, additional) and how they moved through these functional units.  To do this, it was first determined which paragraphs the participant had read.  This analysis was done using the screen capture data according to the following heuristic criteria: a paragraph was considered as read if the cursor was moving across the lines of that paragraph, whether participants highlighted it or not; if the cursor stayed motionless for more than 5 seconds, then all paragraphs fully visible on the screen were considered as read.  Foltz (1993) took 5 seconds as the criterion to decide whether a page or a node was read.  The data used in this analysis was collected from use of the experimental system only.

The data analysis in the section above shows that the experimental system appears to be more effective for more difficult tasks.  Therefore, data from the three experimental tasks: "Refer to facts", "Learn how to" and "Refer to arguments",
which were considered to be relatively difficult, was analyzed to understand how the signaling of functional units affected the reading process.  These three experimental tasks required the participants to integrate information from different parts of an article.  Furthermore, the article topic for the "Refer to facts" task was the least familiar to the participants, while the comprehension question for the "Learn how to" task was the most difficult.

It is of interest to see how the signaling of functional units enables participants to focus on particular information and use bits and pieces of relevant information across the article.  Such uses can be examined by the manner in which functional units in the same category were used from different places in an article, and how functional units from one category were used with those in other categories.

There are four studies reported in the article for the "Refer to facts" task (Table 6.4).  The functional units in the "related" category were the functional units most often used from several studies in the primary component of the article, followed by the primary functional units.  The primary functional units were usually used with related functional units in the first study rather than in the subsequent studies.  The additional functional units were usually used from the Introduction or General Discussion.
Table 6.4: Functional units used - "Refer to facts"
[Shading matrix not reproducible in plain text.  Rows: participants P17; P6; P7, P12, P23; P13; P26; P18; P22; P3; P8; P27; P16; P28; P2.  Columns: primary functional units (state findings; summarize results) in S1-S4; related functional units (review previous research; experimental procedures) in S1-S4; additional functional units (highlight overall outcome) in the Introduction, S1-S4, and the General Discussion.]
*The shading represents the functional units that were read in each component.  "S1" stands for Study 1 when several studies were reported in the article, and so on.

For the experimental task "Learn how to" (Table 6.5), participant 4 read through almost the whole article, reading paragraph by paragraph, and thus her data was not analyzed here.  Of the remaining fourteen participants, only one did not use primary functional units.  It seems the functional units in the "related" category were the functional units usually used from several studies, followed by the primary functional units.  Similarly to the "Refer to facts" task, the primary functional units were most often used with related functional units in Study 1, whereas the additional functional units were most often used from the Introduction, the General Discussion, or the first study.
Table 6.5: Functional units used - "Learn how to"
[Shading matrix not reproducible in plain text.  Rows: participants P14; P25; P5; P19; P24; P9; P29; P10; P8, P28; P18; P15; P30; P20.  Columns: primary functional units (tasks; experimental procedures) in S1-S3; related functional units (justify methods; preview methods; participants; summarize methods; describe analysis conducted) in S1-S3; additional functional units (evaluate methodology) in the Introduction, S1-S3, and the General Discussion.]
*The shading represents the functional units that were read in each component.  "S1" stands for Study 1 when several studies were reported in the article, and so on.

For the experimental task "Refer to arguments" (Table 6.6), participant 3 read through almost the whole article and his data was not included here.  Because the primary functional units were located in the General Discussion component and intertwined with some related functional units, only one participant did not use the primary and related functional units in the General Discussion component.  Half of the participants used related functional units from all four studies.  Additional functional units were usually used from the Introduction component.
Table 6.6: Functional units used - "Refer to arguments"
[Shading matrix not reproducible in plain text.  Rows: participants P13; P7; P14; P23; P29; P17; P19; P4; P18; P8; P9; P27; P28; P24.  Columns: primary functional units (support explanation of results) in the General Discussion; related functional units (compare results with previous research; interpret outcome) in S1-S4 and ((un)expected outcome; highlight overall outcome; indicate significance of present research; established knowledge of the topic; compare results with previous research; interpret outcome) in the General Discussion; additional functional units (indicate a gap in previous research; claim importance of topic) in the Introduction and (justify methods; state findings) in S1-S4.]
*The shading represents the functional units that were read in each component.  "S1" stands for Study 1 when several studies were reported in the article, and so on.

Also of interest is how the signaling of functional units enables participants to move from more relevant to less relevant information.  Table 6.7 presents the descriptive statistics of move patterns.  First, observations were made on whether participants turned on the three buttons one by one, turned them all on at once, or turned on just one.  From this we may examine whether participants attended to the suggested functional units in one category before moving to another category, or paid more attention to the suggested functional units while reading linearly.  This is reported in the "Read by category" column of Table 6.7.  Meanwhile, the data was examined to see whether only the suggested functional units were read or non-suggested ones were read as well ("Read suggested only"), and furthermore, whether participants reread some contents ("Read only once").
Table 6.7: Descriptive statistics of move patterns

                     Read by category        Read suggested only     Read only once
Tasks                Y            N           Y            N           Y            N
Refer to facts       10 (66.67%)  5 (33.33%)  7 (46.67%)   8 (53.33%)  10 (66.67%)  5 (33.33%)
Learn how to         9 (64.29%)   5 (35.71%)  6 (42.86%)   8 (57.14%)  6 (42.86%)   8 (57.14%)
Refer to arguments   9 (64.29%)   5 (35.71%)  12 (85.71%)  2 (14.29%)  7 (50%)      7 (50%)
Total                28 (65.12%)  15 (34.88%) 25 (58.14%)  18 (41.86%) 23 (53.49%)  20 (46.51%)

For all three tasks, about two-thirds of the participants read category by category.  For "Refer to arguments" only two participants read non-suggested functional units, probably because the primary and related functional units are located in the General Discussion.  For the other two tasks almost half the participants read non-suggested functional units as well.  Almost all of these non-suggested units were in the first few paragraphs of the Introduction or somewhere in the General Discussion.

There were two situations in which participants reread some parts of the article: they were brought back by a need to review content, or brought back by a click on a highlight button.  For example, five participants reread some parts for "Refer to facts".  Participant 22 reread the first few paragraphs before turning on "Top Hits" and jumping to particular paragraphs.  Some found the primary functional units important when they went down to primary functional units in the second study (P6) or to the end of the article (P12), and then revisited the first study.  Or a click on the "Next Best Hits" button took them to related functional units which were next to the primary functional units they had viewed (P12, P17, P23).  For the other two tasks, about half of the participants reread the contents, but most participants were brought back by a click.

Most participants followed the path from more relevant to less relevant functional units.  However, for each task there was one participant who went to functional units in the "More"
category first.  And three participants went to the "Next Best Hits" category first for "Refer to arguments", and one participant went to the "Next Best Hits" category first for the other two tasks.  Only eight participants clicked and opened all three buttons for the "Refer to facts" task, six for "Refer to arguments", and five for "Learn how to".  This indicates that these participants did not feel that they needed information from all three categories to complete a task.

At the end of the experiment, the participants were asked to narrate what they were doing while provided with a replay of two task sessions.  The verbal protocols of the three tasks analyzed above in the experimental condition were used to understand the participants' reasoning behind all of their actions, and to corroborate the data analyzed above.  One participant describes the typical use of the functional unit selector and functional unit indicator, which was to move from information in a more relevant category to a less relevant category, and meanwhile to judge the paragraph labels, first by color then by titles:

I read the question and then hit the first button and went through down the highlighted areas and read through a lot of paragraphs, and if it was relevant I would highlight them, if I didn't find enough information, I would go to the second one, the relevant one, this one, and then I think that was the results that was the yellow one the one that I used most, and then highlighted the one that I found useful.  If I need more information I hit the purple button, but I didn't find the purple button that useful.  So I mainly generally used the yellow, the next best hits results sections for the question asked ... When I pushed the buttons, for example, the top hit ones, I read through the title of the paragraphs to see if those are what I want or not.  So I judged the paragraphs based on the title of the paragraphs. (P18, "Refer to facts"
task)

The analysis of screen captures indicates that most participants attended to the functional units in the "Best hits" and "Next best hits" only.  For example, participant 8 said, "I just focused on the first two [boxes].  I didn't find the third one too useful, and too big.  I am only interested in results.  I only care about these things here" ("Refer to facts" task).  However, some participants didn't go in the order from a more relevant category to a less relevant category.  Participants' choices of functional units in the three categories depended on their understanding of the comprehension question, and their judgment of the functional unit labels in the three boxes.  As noted by participant 20, "I looked at the question, and I compared the question to the subheadings under the boxes ... as one that I may be able to use to answer the question" ("Learn how to" task).  Some participants used the functional units in the third category more than those in the first or second category.

I mostly used this purple section here, because the first one was just the results of their study but I think the question was referring to what other additional evidence in addition to their study that would support this RWI thing being related to familiarity.  So I wouldn't find it their findings or summaries of their findings but instead of in viewing other research that is related to what they found.  (P28, "Refer to facts" task)

Participants tended to read the abstract, the first few paragraphs, or the definitions to orient themselves with the article before turning to highlighted paragraphs.  Some went back to the very beginning after being brought to specific paragraphs by the highlight function.

So I went back to beginning look through abstract again, again just sort of that layout in my mind of the study.  And there highlighting what I jumped out to be means relevant information.  So at this point I think I am gone back and started reading more.
I feel like I was missing some information, maybe some background information.  (P4, "Refer to arguments" task)

Some participants also visited the text preceding the highlighted paragraphs.

I read the parts, the paragraphs, that were highlighted in the sections I wanted.  But often I would read the paragraphs just before even if they weren't highlighted just in case that I missed something or it gave some more background.  (P10, "Learn how to" task)

Participants were likely to attend to the ending as well as the beginning of an article even in the experimental system, though their usual reading behavior was more or less affected by the signaling of functional units.

I went down to the bottom to see if I could find an overall conclusion, but that was one that was highlighted by the box, so I scrolled back up to other information.  I think here I was looking mainly for an overall conclusion that would give me a better idea of what the study was about ... And I highlighted these because these seem pretty good overall conclusive results along with what I highlighted above ... I think I ended up writing from the first paragraph of general discussion, because it seems to summarize pretty well the conclusions I highlighted from the previous experiments.  (P29, "Refer to arguments" task)

The above verbal protocols help to explain why some participants started reading from a less relevant category, read non-suggested contents as well, and reread some contents.  The explanations are that in some cases participants needed more background or overview information, both prior to and during reading of relevant text, while in other cases they assessed the functional units suggested and did not agree with the system recommendations.

6.3.2.2.2 Effective Reading

Participants were asked "how did you read differently in these two different presentations?" at the end of the retrospective interview.  Their remarks were categorized as positive, negative, and neutral.
Snippets from all participants, organized in these categories, are presented in Appendix 12.  There are five negative remarks and three neutral remarks.  Two negative remarks are in regard to effectiveness.  Most of the positive remarks related to three themes: focused reading, selective reading and less reading.  Among the positive remarks, those on focused reading relate to effectiveness.  The results regarding effectiveness are discussed here, while those regarding efficiency (selective reading and less reading) are discussed in Section 6.4.2.2.

Comments that relate to "focused reading" indicate that the experimental system was more effective since it deals with actual reading.  In focused reading, participants went into the information suggested by the system in depth.  The time they spent was on comprehending the relevant information rather than on locating information, as noted by participants:
- "I can have more time to spend on reading in detail" (P1)
- "this one as I said I read more because they are divided in sections right ... read important parts that I need" (P14)
- "... allowed me to read more in-depth paragraphs, not just the first paragraphs ..." (P29)

Thus they were able to attend to particulars within the article:
- "bring me to ... specific information may be within different parts of the text that I wouldn't usually jump to" (P5)
- "read those more specifically" (P6)
- "looking specifically at certain aspects" (P21)

Criticisms of the experimental system were generally of two kinds: the participants needed the larger context of the article under review while reading a particular text segment in order to understand the information suggested by the system, and they needed more time to get used to the system.  The comments on the lack of context indicate a shortcoming of the system, at least concerning its effectiveness.
One participant said, "I found that even though I found information quicker, I understood information better in the first one [baseline system] which I found on my own" (P9).  Another said, "whereas the new way it's helpful in finding very specific information, but after finding that because I don't have the background knowledge, it doesn't really help me understanding what I am trying to answer" (P17).  As noted by participant 9, the relevant information can hardly be isolated from the context of the article, e.g. "I first read the introduction, the abstract I guess and then the first paragraph of introduction to get an idea and then to use hints on the right side".

6.4 RESULTS OF READING EFFICIENCY

The testing of the utilization of functional units on reading efficiency was in response to Research Question 5 as follows:

RQ 5: Does the signaling of functional units to readers enhance reading efficiency?
5.1a Does the signaling of functional units help readers to complete tasks more efficiently?
5.1b If so, how does the signaling of functional units help readers to complete tasks more efficiently?
5.2 Does the impact of functional units on efficiency vary with reading tasks?

The results below are reported in terms of sub-question 5.1a regarding reading outcome and sub-question 5.1b regarding reading process.  Results for sub-question 5.2 are incorporated into the reporting of results for research questions 5.1a and 5.1b.

6.4.1 Results for RQ 5.1a

The data analysis for this research question was conducted in the same way as for research question 4.1a on the effectiveness of functional units.  The data consists of subjective perceptions from post-task questionnaires and objective performance from system logs.  Differences between groups within each task as well as group differences across tasks were examined.  The results support the hypotheses regarding efficiency stated in Section 5.1.
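The paired system comparisons throughout this chapter rely on the Wilcoxon signed ranks test on within-subjects ratings.  As a sketch of how such a comparison could be computed, the following example applies scipy's implementation to hypothetical 7-point ease ratings; the data are illustrative only, not the study's actual ratings or analysis code.

```python
# Sketch of a Wilcoxon signed ranks test comparing paired ratings
# of the baseline and experimental systems (hypothetical data).
from scipy.stats import wilcoxon

baseline =     [3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3]
experimental = [4, 4, 5, 3, 4, 5, 4, 3, 5, 4, 3, 5, 4, 4, 5]

# Paired test on the per-participant differences; zero differences
# are dropped by scipy's default zero_method.
stat, p = wilcoxon(baseline, experimental)
print(f"W={stat}, p={p:.4f}")
```

A small p-value here would correspond to the starred (significant) comparisons reported in the tables below.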
Overall, results indicate that participants thought it more efficient to obtain information using the experimental system.  Specifically, significant differences indicate that participants using the experimental system thought it more efficient to obtain information for "Refer to facts" and "Learn how to".  Participants also took less time to complete the "Refer to facts" and "Refer to arguments" tasks.  A more detailed analysis of measures is reported below.

6.4.1.1 Perceptions of Task Completion

One question on the post-task questionnaire was used to elicit user perceptions regarding ease, which is interpreted here as an indicator of perceived efficiency:

Question 6: How easy was it to obtain relevant information in answering the question? (1=Not At All Easy, 7=Very Easy)

Statistical significance was determined using the non-parametric Wilcoxon signed ranks test.  Table 6.8 presents the means of perceived efficiency in obtaining information.  Overall, participants felt it was significantly easier to use the experimental system to obtain relevant information than the baseline system, particularly for the experimental tasks "Refer to facts" and "Learn how to".  The means of perceived efficiency were higher when using the experimental system for the two other tasks as well, but not significantly so.

Table 6.8: User perceptions regarding efficiency
Measure: Perceived efficiency (ease) to obtain information

Experimental tasks      Systems  N   Mean (Std. Deviation)  Z statistic  Sig.
All                     B        73  3.34 (1.45)            -2.898       .004*
                        E        75  4.01 (1.47)
Learn about background  B        15  4.27 (1.44)            -.450        .653
                        E        15  4.00 (1.73)
Refer to facts          B        13  2.69 (.63)             -2.694       .007*
                        E        15  3.87 (1.46)
Refer to arguments      B        15  3.60 (1.12)            -1.037       .300
                        E        15  4.07 (1.16)
Learn how to            B        15  2.33 (1.29)            -2.157       .031*
                        E        15  3.80 (1.74)
Keeping up              B        15  3.73 (1.67)            -1.116       .265
                        E        15  4.33 (1.29)

6.4.1.2 Task Performance

The time spent on each task was measured as the time elapsed from the instant the task page loaded until the participant clicked on the "Done" button when submitting their answer.  This span of time includes the time taken in understanding the question, reading the article, highlighting the text, and composing the answer.

Overall, participants spent less time completing a task when using the experimental system.  The average task time in the experimental condition was 12 minutes, and in the baseline condition, 14 minutes.  As shown in Figure 6.4, participants using the experimental system spent less time on the experimental tasks "Refer to facts", "Refer to arguments" and "Learn how to" than they did when using the baseline system.  However, they spent more time on the tasks "Learn about background" and "Keeping up" than those using the baseline system.

A 2×5 ANOVA was conducted with system (experimental/baseline) and experimental task (Learn about background, Refer to facts, Refer to arguments, Learn how to, Keeping up) as the factors.  The interaction effect is statistically significant, F(4,132)=3.869, p=.005, indicating that task has a differential effect depending on the system used.  To examine the interaction further, system differences at each task level were compared by simple effects analysis.  The results are shown in Table 6.9.  The critical t(132)=1.978 at alpha .05 (two-tailed).
The contrast for the task "Refer to facts", t(132)=-2.58, and the contrast for the task "Refer to arguments", t(132)=-3.37, p<.05, are statistically significant.  The data indicate that using the experimental system resulted in significantly shorter completion times for the tasks "Refer to facts" and "Refer to arguments".

Table 6.9: Completion time in minutes

Experimental tasks      Systems  N   Mean       Std. Deviation  t
All                     B        67  14.396273  5.6487116       -1.984
                        E        67  12.307219  4.8340890
Learn about background  B        15  11.902113  5.1506861       .097
                        E        15  13.696547  5.0927169
Refer to facts          B        14  16.577343  6.1805255       -2.58*
                        E        13  11.550946  4.1280088
Refer to arguments      B        15  16.565307  5.6621576       -3.37*
                        E        12  9.959417   3.7229621
Learn how to            B        15  14.986460  5.1294467       -1.71
                        E        14  11.770157  6.4855789
Keeping up              B        15  12.298773  4.2275850       0.68
                        E        14  13.579443  3.8576044

Figure 6.4: Task completion time by task

Figure 6.5: Completion time over time

Figure 6.5 illustrates the means of task completion time over the five tasks in sequence.  In terms of order, "experimental first" represents sessions in which the first two tasks were done using the experimental system and the last three tasks using the baseline system; "baseline first" represents sessions in which the first two tasks were done using the baseline system and the last three tasks using the experimental system.  Half of the participants were in each order.  Regardless of whether participants started with the experimental system or the baseline system, the mean time for the first task was the longest, whereas the last task took the least time.  For the first task, the mean was higher when using the experimental system than when using the baseline system.  For the last three tasks the means were lower when using the experimental system than when using the baseline system.
The overall time difference between the two orders was statistically significant, t(69)=2.917, p<0.01, though for each task none of the differences was statistically significant.  Since this is a Latin Square design, the same experimental task (i.e., Learn about background) only occurs three times in the same position in the same system, so task effects are not considered here.  The results indicate that the order had some effect on the time taken to adapt to the system used.

6.4.2 Results for RQ 5.1b

6.4.2.1 Amount of Text Explored

Across all five experimental tasks, the overall mean of the amount of text highlighted was lower when using the experimental system than when using the baseline system.  The means of highlighting were lower when using the experimental system for "Learn about background", "Refer to facts" and "Refer to arguments", whereas they were higher for "Learn how to" and "Keeping up".  The differences in highlighting between the experimental and baseline systems were significant for "Refer to facts" and "Keeping up".

One possible explanation for these differences is that the greater the amount read, the greater the amount of highlighting, although participants were asked to highlight only relevant text.  To examine whether a relationship exists between the amount of text read and the amount of text highlighted, I used Spearman's correlation coefficient, as it is suited to non-parametric data.  Positive correlations were found between text read and text highlighted for the experimental system, rs=.557, p<.001, and for the baseline system, rs=.648, p<.001.  Therefore, through analysis of how much text was highlighted in each article component, we may learn how participants explored the articles.

A further analysis of the data was carried out comparing the overall text highlighted between systems.  Statistical significance was determined using the non-parametric Wilcoxon signed ranks test.
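The rank correlation between text read and text highlighted reported above can be sketched as follows.  The per-participant counts are hypothetical and scipy's `spearmanr` is assumed; this illustrates the technique only, not the study's data or analysis code.

```python
# Sketch of Spearman's rank correlation between the amount of text
# read and the amount of text highlighted (hypothetical paragraph
# counts for twelve participants).
from scipy.stats import spearmanr

paras_read        = [12, 8, 15, 6, 10, 20, 9, 14, 7, 11, 18, 5]
paras_highlighted = [ 4, 2,  6, 1,  3,  8, 2,  5, 1,  4,  7, 1]

# A rho near +1 indicates that participants who read more text
# also tended to highlight more text.
rho, p = spearmanr(paras_read, paras_highlighted)
print(f"rho={rho:.3f}, p={p:.4f}")
```

Spearman's coefficient is appropriate here because it operates on ranks, so it does not assume normally distributed highlighting counts.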
The results for the amount of text highlighted are presented in Table 6.10.  Compared with those using the baseline system, participants using the experimental system highlighted significantly less in components that were not relevant to the task: the Discussion component for the task "Learn about background", the Introduction and Discussion components for "Refer to facts", and the Introduction component for "Refer to arguments".  On the other hand, participants highlighted significantly more in the relevant components, such as the Methods component for the task "Learn how to", and the Introduction and Results components for "Keeping up".  In other words, when using the baseline system participants highlighted a greater amount of less-relevant text.  The means of the amount of text highlighted when using the experimental system were no higher than when using the baseline system, with a few exceptions: the Introduction component for "Learn about background", and the Discussion component for "Refer to arguments" and "Keeping up".

Table 6.10: Amount of text explored
(Components: I=Introduction, M=Methods, R=Results, D=Discussion)

Tasks                   Components  Systems  N   Mean   Std. Deviation  Z statistic  Sig.
All                     All         B        73  7.05   5.40            -.622        .534
                                    E        70  5.74   3.51
Learn about background  All         B        14  6.29   3.65            -.624        .532
                                    E        14  6.14   3.84
                        I           B        15  4.87   3.11            -.420        .674
                                    E        15  5.07   2.99
                        M           B        14  .00    .000            .000         1.000
                                    E        13  .00    .000
                        R           B        13  .00    .000            .000         1.000
                                    E        14  .00    .000
                        D           B        15  1.80   2.01            -2.388       .017*
                                    E        13  .31    .63
Refer to facts          All         B        15  11.80  6.68            -2.543       .011*
                                    E        14  4.79   3.02
                        I           B        14  3.36   2.65            -2.684       .007*
                                    E        12  .00    .000
                        M           B        12  .00    .000            .000         1.000
                                    E        14  .00    .000
                        R           B        15  3.87   3.98            -.801        .423
                                    E        13  2.92   2.14
                        D           B        15  3.47   2.70            -2.809       .005*
                                    E        14  .50    .76
Refer to arguments      All         B        15  8.73   5.86            -1.633       .102
                                    E        14  4.71   2.64
                        I           B        15  3.80   3.34            -2.556       .011*
                                    E        12  .17    .58
                        M           B        12  .00    .00             .000         1.000
                                    E        15  .00    .00
                        R           B        13  .00    .00             .000         1.000
                                    E        14  .00    .00
                        D           B        15  3.93   3.39            -.070        .944
                                    E        15  4.13   2.64
Learn how to            All         B        14  4.00   3.06            -1.355       .176
                                    E        15  5.73   4.54
                        I           B        14  1.64   1.78            -.310        .757
                                    E        15  1.33   1.54
                        M           B        12  .50    .67             -2.494       .013*
                                    E        14  2.79   2.36
                        R           B        12  .00    .00             .000         1.000
                                    E        12  .00    .00
                        D           B        14  .64    1.15            -.604        .546
                                    E        14  .43    .76
Keeping up              All         B        15  4.20   1.97            -2.714       .007*
                                    E        13  7.46   2.76
                        I           B        15  1.33   1.76            -2.149       .032*
                                    E        15  3.40   2.95
                        M           B        15  .00    .000            .000         1.000
                                    E        13  .00    .000
                        R           B        13  .23    .44             -2.023       .043*
                                    E        14  1.43   1.91
                        D           B        15  2.27   1.94            -.307        .759
                                    E        15  2.67   2.32

6.4.2.2 Efficient Reading

The transcript data was used to obtain an in-depth understanding of what participants attended to in their usual reading styles and how the signaling of functional units altered those styles.  Participants narrated what they were thinking while viewing a replay of the task session.  At the end of the task session in the baseline condition, participants were asked two questions to elicit further thoughts about their interactions with the baseline system: "how did you look for information in this presentation?" and "how did you identify this paragraph was more useful for answering the question?".
In the baseline condition, a few participants read through almost the whole article and attended to every detail.  Some participants attended more to the first paragraph and the first couple of sentences of each paragraph.  For example, as participant 3 stated, "I went to the intro paragraph and from there I sort of read like the first sentence of two paragraph for relevant key information.  So from there when I see something that seems relevant then that's what I would read a bit more thoroughly in that paragraph ...".  Besides key sentences, some participants also attended to major headings and subheadings, "and then scan look for the titles or so on, and then start looking for information within those" (P4).  Several participants attended to the last paragraph and last sentence as well, for these were perceived as a kind of summary.  Additionally, some participants used keywords stating an idea or clue words expressing a relationship to determine the surrounding text to read, "looking for certain keywords that I thought would be relevant, assuming that those keywords have explanation around them so I can just look for those reading through whole paragraphs and sentences and terms" (P21).

Most participants applied their knowledge of the article structure, i.e., that the Introduction and Discussion were important, and read these two components more thoroughly while skipping the middle.  The Methods component was most often neglected.  They started from the Introduction because it "would give me an idea of what the paper is like" (P9), and it "is always at the beginning so that was easy enough to find" (P12), and they went to the Discussion to find useful information because "that's a general summary of everything" (P13).  The Introduction and Discussion were mentioned most frequently by participants for locating relevant information in the baseline condition.
At the end of the interview, the participant was asked "how did you read differently in these two different presentations?".  As stated in Section 6.3.2.2.2 above, their responses were coded into categories of positive remarks, negative remarks, and neutral remarks, which were further categorized to see in which respects participants read differently in the experimental system.  Words like "easier", "faster", "quicker", "simpler", "more efficient", "save time", etc. frequently appeared in the participants' remarks when they were asked how they read differently when using the experimental system.  Even in the negative remarks, participants admitted it was quicker when using the experimental system.  In addition to positive remarks on focused reading, positive remarks in two categories, less reading and selective reading, address the efficiency of reading.  "Less reading" describes those cases in which participants simply followed the system suggestions and skipped non-suggested information, while "selective reading" describes cases in which participants actively selected what they needed from the information suggested by the system.

Concerning "less reading", participants relied on the system to tell them which parts they should read and ignored other paragraphs or sections:

- "it cut the bulk of the article down a little bit" (P29)
- "you click on it and you are referred to where you should be looking" (P26)
- "able to go from highlighted paragraph to highlighted paragraph rather than kind of hoping to find summary information underneath the key headings" (P20)

In selective reading, participants selected what to read in a narrowed scope following the system suggestions.  The selection could be from the highlighted sections:

- "they had the tabs and then they helped me highlight which part and then with those highlights I can extract which was relevant" (P1)
- "I filtered out what I thought was important within each highlighted section" (P13)

or from the highlighted paragraphs:

- "I pressed on these ones [highlight buttons] just to see what options I had I guess to read.  And then I guess I decide from myself whether to read it or not, maybe like read the first sentence of each and see if it helps me in answering the question" (P2)

or from the functions of paragraphs:

- "you can look at results and how it is broken down and you can pick and choose parts of that section" (P27)

As stated above regarding effectiveness, the negative remarks indicate that participants were not accustomed to the experimental system and felt a lack of context.  Three participants were more inclined to follow their usual reading style, despite the added functionalities of the experimental system.  One of them said it was easier to rely on her own stream of thought, and another said it was faster if she read in the manner she was used to: using the headings and scanning.  The third considered the system's assistance annoying.  This is also reflected in the neutral remarks, in which two participants said they read the same way as they used to for the first task in the experimental system, but gradually adapted to it when they did the second task.

6.5 SUMMARY

In regard to research questions 4 and 5, overall, results show that participants using the experimental system were significantly more satisfied with the information obtained, highlighted more relevant text, and more fully answered the comprehension questions.  The use of functional units was effective in enabling people to focus on specific information and to use pieces of relevant information across the article, but not necessarily to move from the more relevant to less relevant information.  Participants using the experimental system also felt significantly more efficient in obtaining the information.
The use of functional units was efficient in enabling people to narrow down their reading, either by simply following the system suggestions or by selecting from the information suggested by the system.

The signaling of functional units showed varying usefulness across the assigned tasks: more effectiveness and efficiency were observed for the tasks requiring use of information scattered across the article.  In all, many of the findings were as expected, and the hypotheses regarding satisfaction, relevant text highlighted, quality of answers, perceived efficiency, and task variations were supported.  A more in-depth discussion of these findings follows in the next chapter.

7 DISCUSSION

7.1 OVERVIEW

The aim of this research is to explore how functional units can be employed for locating and consuming relevant information within scholarly journal articles.  This chapter summarizes and discusses the findings of this research in relation to the research questions and hypotheses.  Discussed first is the functional unit taxonomy developed in the first phase of the study, which was presented in Chapter 4.  In particular, this section discusses how the individual functional units are related to certain information use tasks, and how a functional unit is related to other functional units in the four components.  Then the experimental results presented in Chapter 6 are considered.  The utilization of functional units in the reading outcomes and reading process is discussed in terms of effectiveness and efficiency.  In particular, the discussion focuses on why the experimental system enhanced reading effectiveness and efficiency for some tasks rather than others, and in which aspects of effectiveness and efficiency the experimental system enhanced journal reading.
7.2 THE FUNCTIONAL UNIT TAXONOMY

7.2.1 Summary of Findings

The first phase of the study developed the functional unit taxonomy to map the functions of information units onto different task types, in response to three research questions.  Results for each of the questions are summarized in this section.

Research Question 1: What are the most common functional units within psychology journal articles?

A set of 41 functional units occurring in psychology journal articles was identified.  These are distributed as follows: 11 functional units in Introduction, 10 in Methods, 7 in Results, and 13 in Discussion.  These common functional units were presented in Table 4.3 (p. 78) of Chapter 4.

Research Question 2: How are functional units related to different tasks requiring use of information in psychology journal articles?

To examine the functions of these units, six common information use tasks related to journal articles were identified: Learn about background, Refer to facts, Refer to arguments, Learn how to, Learn about particular, and Keeping up.

RQ 2.1 How are the IMRD components of a journal article related to different information tasks?

A task-related functional unit taxonomy was developed to model the relationship between the 41 functional units in the four core components (Introduction, Methods, Results, Discussion) and five of the information use tasks (Learn about background, Refer to facts, Refer to arguments, Learn how to, Keeping up).  Findings indicate that the Introduction component is primarily useful for two tasks: "Learn about background" and "Keeping up".  The Methods component is the primary component for the task "Learn how to", the Results component is primary for "Refer to facts", and the Discussion component is primary for "Refer to arguments".

RQ 2.2 How are the functional units in a component of a journal article related to different information tasks?
Based on user ratings and rankings, functional units were placed in three categories: primary, related, or additional related.  In some cases, a functional unit was the primary functional unit for one task, a related functional unit for another task, and an additional related functional unit for a third task.  Some functional units were found to be more or less useful for several tasks, while other functional units were useful only for one task.

Research Question 3: How are functional units related to each other for a particular task requiring use of information in psychology journal articles?

RQ 3.1 For a particular information task, which functional unit is first attended to?

The following functional units emerged as the primary or most useful for each information use task:

- "Learn about background" task: review previous research
- "Refer to facts" task: state findings
- "Refer to arguments" task: support explanation of results
- "Learn how to" task: describe materials, describe tasks, outline experimental procedures
- "Keeping up" task: indicate a gap in previous research

RQ 3.2 For a particular information task, how is a functional unit related to other functional units of the same component?

A functional unit is more closely related to other functional units in the same component than to those in a different component, and more closely related to some functional units than to others within the same component.  The primary functional unit is the most important move in a sequence of moves for a task, with the related functional units serving as preceding or subsequent moves in the sequence that further enrich the meaning.

RQ 3.3 For a particular information task, how is a functional unit related to other functional units of different components?
The additional related functional units in the other three components serve to complement the meaning of the primary functional unit from different aspects, such as providing a preview, a review, or a view from a different angle.

7.2.2 Discussion of Findings

The full taxonomy is presented in Table 4.12 of Chapter 4.  The term taxonomy, which normally implies hierarchical classification, is used here since functional units are grouped at the component level and are further classified into three sub-categories: primary, related, and additional related.  In this way, the taxonomy attempts to map out the whole system and to show the underlying structure of functional units and their relationships.  This taxonomy of functional units was developed by identifying and validating the functional units in the IMRD components and their relationships with information use tasks in the case of psychology journal research articles.  Identified from well-acknowledged move models, tested with sample articles, and further validated by psychology journal users, the 41 functional units represent those which occur commonly in psychology journal articles.

Research discussing information seeking tasks or information search tasks is profuse, while the sparse research about information use addresses it as behavior (Choo et al., 2000; Wilson, 2000) or as activities tackling general information problems (Taylor, 1991).  Information use emerges as important particularly when interacting with the contents of a document, since this is the last stage in the reading process, preceded by the stages of navigating, close reading, and comprehending.  This study takes a cognitive approach to information use.  A set of six information use tasks associated with journal articles was identified.  This set of tasks is not intended to be exhaustive, but rather representative of task types using the four components within journal articles.
These task types cover the main characteristics of tasks involved in reading scholarly work.  Related to functional units, these information use tasks act as a lens through which to view how the functions of information units take effect.

Results show that the usefulness of a component, and of the functional units in a component, varied substantially depending on the information use task.  For a given component, the extent of usefulness is not the same across the five information tasks, and even when a component proves useful for a task, not all functional units within the component are equally useful for that task.  For example, the Introduction component is more useful for the tasks "Learn about background" and "Keeping up" than it is for the other three tasks.  In the Introduction component, eleven functional units were validated from the preliminary thirteen.  However, only half of the eleven functional units were rated 4.0-5.0 for each related task.  Only three functional units were found useful for both of these tasks, yet two of them were not rated equally for both.  For example, the functional unit "review previous research" is the primary functional unit for the task "Learn about background", but is a related functional unit for "Keeping up".  And the functional unit "indicate a gap in previous research" is the primary functional unit for the task "Keeping up", but is a related functional unit for "Learn about background".

It is noted that ratings of the functional units in the Introduction component are not significantly different for the "Keeping up" task, which implies most units in the Introduction are useful for that task.  The same holds for the functional units in the Methods component and the "Learn how to" task.  Analysis also shows that some of the 41 functional units are highly useful for most tasks while others are useful only for particular tasks, and that some functional units have varying usefulness for different tasks while others do not.
These results indicate that the functional units may contain more general or more specific information and thus have different applicability in information use.  Furthermore, for a particular task, a functional unit is more closely associated with certain functional units than with others, both in the same component and in other components.  For example, consider the functional units clustered around the task "Learn how to" as illustrated in Figure 7.1: the central category includes the primary functional units "describe materials", "describe tasks", and "outline experimental procedures" within the Methods component.  The related functional units in the same component extend the meaning from the central category, including "justify methods", "present variables", "outline data analysis procedures", etc.  The additional related functional units in the other three components further extend the meaning from the central category, including "summarize methods" in Introduction, "describe analysis conducted" in Results, and "evaluate methodology" in Discussion.  Building on this example, it seems that if we focus on the three primary functional units in one component, and selectively use the other nine functional units from the four components, then the reading experience would be expected to improve.

Figure 7.1: An illustration of functional units in three categories for "Learn how to" task

Move analysis led by Swales (1990, 2004) focuses on the role of rhetorical structure at a granular level.  Specifically, it attends to the forms and contents of "moves" and "steps" in the four article components.  The purposes (functions) of these "moves" and "steps" have not been fully exploited with respect to reading in the digital environment, which allows the manipulation of the presentation of document contents to facilitate information use.
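The primary functional units identified for each task, together with the three-category clustering for "Learn how to" illustrated in Figure 7.1, can be expressed as a simple lookup structure.  The sketch below is fully populated only for "Learn how to"; the field names and data layout are illustrative, not the prototype system's internal design.

```python
# Sketch of the task-related functional unit taxonomy.
# Components: I = Introduction, M = Methods, R = Results, D = Discussion.
# Only "Learn how to" is fully populated (cf. Figure 7.1); the other tasks
# carry just their primary units, with the remaining categories left empty
# for brevity.
TAXONOMY = {
    "Learn about background": {"primary": [("I", "review previous research")],
                               "related": [], "additional_related": []},
    "Refer to facts": {"primary": [("R", "state findings")],
                       "related": [], "additional_related": []},
    "Refer to arguments": {"primary": [("D", "support explanation of results")],
                           "related": [], "additional_related": []},
    "Keeping up": {"primary": [("I", "indicate a gap in previous research")],
                   "related": [], "additional_related": []},
    "Learn how to": {
        "primary": [("M", "describe materials"),
                    ("M", "describe tasks"),
                    ("M", "outline experimental procedures")],
        "related": [("M", "justify methods"),
                    ("M", "present variables"),
                    ("M", "outline data analysis procedures"),
                    ("M", "preview methods"),
                    ("M", "describe participants"),
                    ("M", "present reliability/validity")],
        "additional_related": [("I", "summarize methods"),
                               ("R", "describe analysis conducted"),
                               ("D", "evaluate methodology")],
    },
}

def units_for(task, category):
    """Return the (component, unit) pairs of a given category for a task."""
    return TAXONOMY[task][category]

print(units_for("Learn how to", "additional_related"))
```

A reading system could consult such a table to decide which slices of text to signal first for the reader's active task.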
In the validation study, the functional units that were rated as least useful and dropped from the preliminary set of 52 were those that overlapped with functional units in other components.  The functional units that participants judged as most prevalent in a component are the ones that they rated as most useful.  This illustrates the interrelationship between content, form, and purpose in texts, as expressed in mature genres, and points to the potential use of this interrelationship to support close reading and information use.  It demonstrates the essence of Relevance Theory, in that the manner in which information is communicated carries presumptions of relevance, since it claims attention.

[Figure 7.1 legend: Primary: M: describe materials; describe tasks; outline experimental procedures.  Related: M: justify methods; present variables; outline data analysis procedures; preview methods; describe participants; present reliability/validity.  Additional related: I: summarize methods; R: describe analysis conducted; D: evaluate methodology.]

This study extends Vaughan and Dillon's notion (1998) that IMRD components have their own functions, which were identified based on users' conceptions.  This study specifies the functions of types of information within IMRD components from document analysis, and provides experimental support for the importance of the generic conventions of the four components.  For example, according to Vaughan and Dillon (1998), the functions of Introduction include "set up author's study", "make an argument", "set up past studies as a foil for presenting a new study", "set up the background", and "justify a current or future study".  These functions indicated by the Introduction component were corroborated by this study in a refined set of five functional units for the "Learn about background"
task (e.g., review previous research, point out contribution of previous research, indicate a gap in previous research, narrow down topic, clarify definition), and six functional units for the "Keeping up" task (e.g., indicate a gap in previous research, provide reason to conduct research, point out contribution of previous research, review previous research, claim importance of topic, introduce present research).  In Vaughan and Dillon's study (1998), only one function was identified by users for the Methods and two for the Results.  In this study, the sets of functional units in these two components show that they contain specific and detailed information; these two components are thus probably less familiar to users.

The taxonomy of task-related functional units outlines how functional units are related to each task type, with functional units categorized into primary functional units, related functional units in the same component as the primary functional units, and additional related functional units in the other three components.  There is clearly a relationship between functional units and information use tasks, and among the set of functional units for a particular task.  The results imply that, depending on the task, the scope of our attention in close reading may be as small as a slice of text within an article component, or as large as a combined set of functional units within or beyond a single component.  Furthermore, the relevant information for a task can be organized from more central to more peripheral information in the same component, and from more specific information in the primary component to more general information in other components.  This suggests the value of a flexible reading environment for supporting multiple uses of journal articles.  This may be realized by defining the semantic relationships between functional units and prioritizing these functional units in presentation.
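The prioritization just suggested, ordering text from more central to more peripheral for the active task, could be realized as a stable sort of an article's text units by category rank.  A hypothetical sketch: the paragraph ids, labels, and numeric weights below are illustrative and not the prototype system's actual design.

```python
# Category ranks: primary text first, then related, then additional
# related; units not labeled for the active task sink to the bottom.
CATEGORY_RANK = {"primary": 0, "related": 1, "additional_related": 2}

def prioritize(units, labels):
    """Order text units from most to least relevant for the active task.

    `units` lists unit ids in document order; `labels` maps a unit id to
    its functional unit category for the task (missing = not relevant).
    Python's sort is stable, so document order is preserved within each
    category.
    """
    return sorted(units, key=lambda u: CATEGORY_RANK.get(labels.get(u), 3))

# Hypothetical paragraph ids in document order, labeled for one task.
paragraphs = ["p1", "p2", "p3", "p4", "p5"]
labels = {"p3": "primary", "p2": "related", "p5": "additional_related"}
print(prioritize(paragraphs, labels))
```

The same labels could equally drive in-place signaling (e.g., highlighting) rather than reordering, which is closer to what the prototype described in this study did.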
The findings on functional units may also be applied at the other end of the scholarly communication process, the authoring stage.  The functional unit taxonomy developed in this study could provide authors with a template for what types of information should appear in good writing, and in what sequence.

Unlike some previous work (Bishop, 1999; Sandusky & Tenopir, 2008), which retrieved texts individually and used them in an isolated way, in this study the bits and pieces of text relevant to a task were treated as an interconnected network.  In addition to organizing different types of information within a document, this network may be extended to organize the same type of information across documents.  The organizational framework for functional units presented here represents the path of searching for relevance described by Sperber and Wilson (1995).  However, this path of pursuing information from more to less relevant and from specific to general for a particular task is quite different from the common navigational patterns observed in studies of reading (Loizides & Buchanan, 2009), which go in the opposite direction.

7.3 THE UTILIZATION OF FUNCTIONAL UNITS

7.3.1 Summary of Findings

Building on the results of phase one, the second phase of the study comprised the design and implementation of a prototype system modeling the functional units and their associations with information tasks.  The practical use of functional units in terms of effectiveness and efficiency was tested, in response to two research questions.  Results for each of the questions are summarized in this section.

Research Question 4: Does the signaling of functional units to readers enhance reading effectiveness?

RQ 4.1a Does the signaling of functional units help readers to complete tasks more effectively?

The experimental system was found to be effective, both in terms of participant perceptions and performance of task completion.
Overall, participants using the experimental system were more satisfied with the information obtained, highlighted more relevant text, and included more concepts in their answers to the comprehension questions.

RQ 4.1b If so, how does the signaling of functional units help readers to complete tasks more effectively?

Results indicate that the experimental system was effective in enabling people to focus on the specific information needed and to use pieces of relevant information across the article.  Participants did not necessarily move from more relevant to less relevant information, as had been predicted.  A disadvantage of the experimental system was a loss of context, since in some cases participants needed background information before going into detail.

RQ 4.2 Does the impact of functional units on effectiveness vary with reading tasks?

The impact of signaling functional units on effectiveness did vary by task.  The experimental system was most effective for the task "Learn how to", which was the task with which the participants had the least experience and which required information scattered in the middle part of an article.

Research Question 5: Does the signaling of functional units to readers enhance reading efficiency?

RQ 5.1a Does the signaling of functional units help readers to complete tasks more efficiently?

Overall, participants were more efficient using the experimental system in terms of perceptions of their own efficiency.

RQ 5.1b If so, how does the signaling of functional units help readers to complete tasks more efficiently?

Comments from participants indicate that the experimental system required less mental effort, so participants were able to narrow down their reading by simply following the system suggestions, or by selecting from the information suggested by the system.  This allowed them to complete tasks more quickly while expending less effort.
However, participants wished that more specific information could be signaled by the system.

RQ 5.2 Does the impact of functional units on efficiency vary with reading tasks?

As with effectiveness, the signaling of functional units was observed to be more useful for the tasks requiring use of information scattered across the article than for the tasks requiring information concentrated in a single section.  While the experimental system was more effective only for the "Learn how to" task, it was more efficient for three tasks: "Refer to facts", "Refer to arguments", and "Learn how to".

7.3.2 Discussion of Findings

7.3.2.1 Task Ease and Topic Familiarity

Each of the thirty participants read five articles, with a different experimental task for each.  After completing a task, the participants were asked to assess the ease of that task and their familiarity with that article's topic.  Given that results indicate variation by task, it is of interest to consider the relative task ease and topic familiarity for the five experimental tasks.  All participants had a similar academic profile (3rd or 4th year psychology undergraduate students), and the demographic data show that their average expertise was low with respect to search skills, domain knowledge, and genre knowledge.  Furthermore, the five articles selected for testing were from a popular psychology journal and did not require high domain knowledge.  Therefore individual differences in domain expertise were not assumed to be a factor here.  Overall, the comprehension question for the experimental task "Learn how to" was perceived as the most difficult to answer, and the article topic for the "Refer to facts" task was the least familiar.

Participants' perceptions of task ease may have been influenced by what type of information was required by the task, and by where the information was situated within the article.  This relates to the journal article structure in the psychology discipline.
The structure of a typical psychology journal article comprises an untitled Introduction, descriptions of several experiments in the middle, with separate Methods, Results, and Discussion components for each experiment, and finally a General Discussion after the last experiment.  The experimental tasks "Learn how to", "Refer to facts", and "Refer to arguments" required the participants to integrate information from the Methods, Results, and Discussion components, which occurred in several experiments, whereas the information for completing the tasks "Learn about background" and "Keeping up" was found at the two ends of an article.  Given that the task ease and topic familiarity scores were lower for the experimental tasks associated with a higher level of information scatter, there appears to be a relationship between these variables.

The level of task ease may also be related to natural reading behavior.  On the demographic questionnaire, participants' responses about identifying useful information show that they relied more on the beginning and ending of articles.  Additionally, the level of task ease seems to be related to how often a task is performed in their daily reading.  From participants' responses concerning their purposes for using journal articles, we observe that the task type "Refer to arguments" was the most commonly stated purpose, followed by "Refer to facts" and "Learn about background".  "Keeping up" and "Learn how to" came last as purposes for using journal articles, since these two were more important to researchers who needed to learn the latest developments or to oversee a research project.  Since the information for the experimental task "Learn how to" was scattered across several experiments reported in an article, and this task was the least commonly conducted, it is not surprising that "Learn how to" was reported to be significantly more difficult than the other four tasks across the systems and in the baseline condition.  However, "Learn how to"
was not significantly more difficult than the others in the experimental condition.  This suggests a trend: the experimental system might have had an impact on participants' perceptions of task ease for "Learn how to" in the form of a compensation effect, though no significant differences were found between systems.

After completing a task, each participant rated how familiar he or she was with the topic of the article just read.  The topic for the experimental task "Refer to facts" was significantly less familiar than the topics for "Keeping up" and "Learn about background" across systems and in the experimental condition, but not significantly so in the baseline condition.  This suggests that the topic for "Refer to facts" seemed more familiar when using the baseline system.  If this is examined together with the highlight data, we may better understand how this happened for the "Refer to facts" task.  The highlight data show that overall, participants highlighted significantly more paragraphs, approximately twice as many, when using the baseline system than when using the experimental system, and they highlighted more in the irrelevant components of Introduction and Discussion.  The topic for "Keeping up" was perceived as significantly more familiar than the topics of the other three tasks in the experimental condition.  Beyond the possible explanation that topic familiarity was correlated with task ease, this is probably because people read more closely text that was not directly related, as shown by the highlight data.  So, the feeling of greater familiarity may have been the result of reading more of the article, including less relevant contents, regardless of system.

7.3.2.2 Effectiveness and Efficiency in Reading Outcomes

I discuss the results of reading effectiveness and efficiency side by side for a comprehensive view.  Only results with statistically significant differences are discussed.  To answer research questions 4.1a "IF Effective"
and 5.1a "IF Efficient", a statistical analysis was conducted on subjective measures (i.e., questions 3-7 on the post-task questionnaires) and objective measures (i.e., relevant text highlighted, quality of answers, and task completion time).  The measures on which the experimental system showed significant differences from the baseline system are summarized in Table 7.1.  Two hypotheses regarding participant perceptions, Hypothesis 3 about satisfaction and Hypothesis 5 about perceived efficiency, were significantly supported by the results.  Another two hypotheses that were significantly supported, Hypothesis 6 about relevant text highlighted and Hypothesis 7 about quality of answers, were related to task performance.  Hypothesis 3 and Hypothesis 6, which were both confirmed by the results, examined related measures from the two perspectives of perceptions and performance.  Overall, the results indicate that the experimental system provided better support for obtaining relevant information.

Table 7.1: Reading outcome measures showing significant differences

Categories                  Measures                                        Tasks
Effectiveness: Perceptions  perceived amount of relevant text read          -
                            perceived extent of relevant text comprehended  Learn how to
                            satisfaction with information obtained          All; Learn how to
                            confidence in fully answering question          -
Effectiveness: Performance  amount of relevant text highlighted             All; Learn how to; Keeping up
                            quality of answers                              All
Efficiency: Perceptions     perceived efficiency in obtaining information   All; Refer to facts; Learn how to
Efficiency: Performance     completion time by task                         Refer to facts; Refer to arguments

From among the range of measures of perceptions, two measures were significantly different between the two systems: "satisfaction with information obtained" and "perceived efficiency in obtaining information".
This is probably because these two measures were easier for participants to assess compared with other measures, such as how much relevant text was read and comprehended, or the confidence with which participants felt they fully answered the question.  The means of all measures were higher when using the experimental system, except for the mean of participants' perception of the amount of relevant text read, which was lower, although not significantly so.  The highlight data, an indicator of intensive reading here, show that participants did highlight less text, but highlighted more relevant text, when using the experimental system than when using the baseline system.  Since people read less in the experimental condition, they may have felt that they read less relevant text.

Regarding the range of measures of performance, the differences were significant in highlighting relevant text and in fully answering questions when using the experimental system compared to the baseline system.  People completed all five tasks faster when using the baseline system for the first two tasks and the experimental system for the last three than when using the systems in the reverse order.  Overall completion times by task were not significantly different, which can be explained by the significant interaction effects between the experimental tasks and the systems.  The relevant text for the two tasks "Learn about background" and "Keeping up" fell into a single section.  In this case, the experimental system, with the relevant information divided into three categories, might direct participants to other parts of the article and thus cost more time.  For the other three tasks, the relevant information was scattered within the article, and the participants did spend less time when using the experimental system than the baseline system; in particular, they spent significantly less time for the tasks "Refer to facts" and "Refer to arguments".  The comprehension question for the task "Learn how to"
was the most difficult one, so, with the present sample, the experimental system showed the predicted direction in timing, although not significantly so.

Hypothesis 10 and Hypothesis 11, which propose that reading effectiveness and efficiency vary with tasks, were also supported.  As shown in Table 7.1, the experimental system significantly outperformed the baseline system for the tasks ("Learn how to", "Refer to facts", "Refer to arguments") with more difficult comprehension questions and less familiar article topics.  The experimental system showed significant advantages for "Learn how to" in effectiveness, including perceived extent of relevant text comprehended, satisfaction with information obtained, and amount of relevant text highlighted.  The significant advantage in efficiency for "Learn how to" when using the experimental system was the perceived efficiency in obtaining information.  The experimental system showed significant advantages for "Refer to facts" in both measures of efficiency, perceived efficiency in obtaining information and task completion time, and for "Refer to arguments" in one measure of efficiency, task completion time.  The results show the experimental system supported reading for the task "Learn how to" in both efficiency and effectiveness, whereas it supported reading for the tasks "Refer to facts" and "Refer to arguments" in efficiency only.

Table 7.2 presents the results for amount of text highlighted and amount of relevant text highlighted side by side, to deepen our understanding of the task effect on system differences in reading effectiveness and efficiency, and of why people highlighted more relevant text for the experimental task "Keeping up".  It seems the experimental system guided participants to relevant components, unlike the baseline system, where participants headed for the Introduction and Discussion components for every task.
For the experimental task with the most difficult comprehension question, "Learn how to", participants using the experimental system highlighted significantly more in the relevant component, Methods.  For the task "Keeping up", participants using the experimental system highlighted significantly more in the Introduction component and in the Results component.  Though the relevant component was Introduction, they went to the Results component to look for more information, probably because the information in the Introduction was too general.  In fact, participants highlighted significantly more relevant text for the tasks "Learn how to" and "Keeping up".  Thus the greater highlighting in the relevant text for "Keeping up" was probably caused by greater highlighting in the overall text for that task.

Table 7.2: Significant results of text highlighted in the experimental system

Tasks | Relevant component | Differences in relevant component highlighted | Differences in overall text highlighted
All | | More highlighting |
Learn about background | Introduction | | Less highlighting in Discussion
Refer to facts | Results | | Less highlighting in all components; less highlighting in Introduction; less highlighting in Discussion
Refer to arguments | Discussion | | Less highlighting in Introduction
Learn how to | Methods | More highlighting | More highlighting in Methods
Keeping up | Introduction | More highlighting | More highlighting in all components; more highlighting in Introduction; more highlighting in Results

This study provides evidence that utilizing functional units can help readers obtain and use relevant information in less time.
The results imply that for complex information tasks requiring information from different places in the article, the experimental system guided readers to attend more to the information scattered across the article, whereas for the easier comprehension questions, where the information was concentrated in one place, the experimental system might distract readers by unnecessarily drawing their attention to other parts of the article.

7.3.2.3 Effectiveness in Reading Process

Effectiveness in the reading process was first examined through perceptions of the interface functionalities (including the functional unit indicator and the functional unit selector), as indicated by participants' ratings on post-task questionnaires and responses on the post-study questionnaire.  The responses to open-ended questions show that people liked the functionality primarily because it enabled reading efficiency.  Most participants liked a combined use of the interface functionalities, since this helped to narrow down reading step by step: the functional unit titles in three boxes helped the reader decide where to go first, clicking on the toggle button directed the reader to highlighted paragraphs, and the paragraph labels in the left margin enabled the reader to narrow down further reading.  Participants liked the highlight function most because it could be used to select information by highlighting paragraphs of varying relevance, or by locating a particular section with a click on the button.  Participants liked the paragraph labels next best, because they helped them recognize information by indicating the paragraph functions, either as surrogates or as a second step in navigation after clicking on the highlight buttons.  The functional unit titles in three boxes only played a role in the first step of selection and were not considered as useful as the other two.

A common criticism of these functionalities was that they did not signal information at a specific enough level.
There were two reasons for this, based on design considerations.  To keep the interface succinct, the highlight button was made to highlight all functional units in each category, rather than particular functional units in that category.  Additionally, the paragraph, rather than sentences within a paragraph, was adopted as the functional unit, since it is hard to ignore the surrounding text while reading a few sentences in a paragraph.  The downside of these decisions was that the highlighted area was broad.

The participants liked these features because they were effective in helping organize, select, locate, and recognize information.  Participants suggested that the system should refine information further, link specific information with its context, and manage the texts they highlighted.  This indicates that, on the one hand, most participants wanted to focus on details when engaging in these information tasks, and on the other hand, they did not want to ignore the background information that would help them better understand the details.  Meanwhile, they wished for a more intelligent system that would automatically manage the highlighted information for future use.  These findings indicate that a desirable reading environment would support the reading process from initial orientation within the article through to summarization of the extracted information.

Effectiveness in the reading process was also examined through participants' use of the functional units within the experimental system.  It was assumed that functional units could enable people to focus on the highly relevant information within an article, make use of pieces of relevant information across the article, and move from the most to the least relevant information until the desired amount of information was obtained.
The use of functional units was investigated by analyzing how people used functional units in the three categories across the article, used functional units in one category from different places in an article, and moved through these functional units to accomplish a task.  For all three of the experimental tasks discussed above, "Refer to facts", "Refer to arguments" and "Learn how to", the primary functional units were usually used together with related functional units from several studies, whereas they were used less with additional related functional units.  The results show the experimental system enabled people to focus on the highly relevant information in the single component truly relevant to that task, since functional units in the primary and related categories were in the same primary component for that task.

The functional units belonging to a category were used from the several studies reported in an article.  For the experimental task "Refer to facts", the related functional units were the most used across the several studies.  The reason may be that the related functional units were "summarize results", which were more concise, with one or two paragraphs for each study, while the primary functional units were "state findings", which were more detailed and lengthy.  Two participants exhausted the related functional units in four studies after reading the primary functional units in the first study.  One possible reason is that they favored quick summary information from the related functional units.  For the task "Learn how to", again the related functional units were the most used across the several studies reported in the article.  The primary functional units "tasks" and "experimental procedures" addressed more specific things than the related functional units "justify methods", "preview methods", and "participants".  The experimental task "Refer to arguments"
was special, since its primary functional units were in the General Discussion only, while related functional units were in the General Discussion as well as in the Discussion components of the four studies.  Therefore both primary and related functional units were mostly used in the General Discussion component.  The relevant functional units in a category were used from several studies scattered through the article, showing that the experimental system enabled people to use pieces of relevant information across the article.

Overall, the signaling of functional units directed the participants' attention in reading to some extent.  In the experimental condition, two-thirds of participants read category by category for the three experimental tasks "Refer to facts", "Refer to arguments" and "Learn how to".  Most followed the order from the primary to the related and additional related categories.  However, a few participants started from related or additional related functional units; their decisions were most probably affected by their understanding of the comprehension questions and the functional unit titles in three boxes.  Those who consciously reread usually went back to the primary functional units in the first study.  This indicates that people often need to review important information later.  The reading order of functional units shows that participants did not necessarily start from the most relevant information and move from more relevant to less relevant information, but they often did.

It was not the case that participants read only the suggested information.  Most people made use of the functional units in the primary and related categories, but not all of them exhausted the functional units in those categories.  If the functional units were used from only two studies reported in the article, those in the first two studies were most used.
Though some participants also read information not suggested by the system, what they read was usually the first few paragraphs of the article and some paragraphs in the General Discussion component.  This suggests that participants' reading habits still played a role, in that the two ends of an article, Introduction and Discussion, attracted more attention.  This also explains why some people did not follow the reading order from most to least relevant as signaled by the system.

The two assumptions that the functional units enabled people to focus on the specifically relevant information and to use pieces of relevant information across the article were met.  Rather than targeting information extraction, as in previous studies (Bishop, 1999; Sandusky & Tenopir, 2008), this study exploits document components for the use of the extracted information.  Since cross-referencing of multiple documents is the predominant activity in reading (Adler et al., 1998), support for connecting and integrating relevant information across the article is important as well.  The assumption that people would move from the most relevant to the least relevant information was not supported.  One reason is that people needed contextual information before going into detail, since most read the first few paragraphs of an article.  Another reason is that people needed more summarized information, since some turned to the General Discussion.  The primary functional units, though highly relevant, required context for better understanding and were sometimes too long and detailed for easy use.  Additionally, some participants were not accustomed to jumping from paragraph to paragraph, or from functional units in one category to those in another.

The participants' remarks were consistent with the results of "use" and "move".
The analysis of interview transcripts shows that, regarding effectiveness, the positive effect was focused reading while the negative effect was a lack of context.  With the prototype system, participants were able to read the most relevant component more fully and dig deep into the relevant pieces of information.  The current design of the experimental system was less effective in providing context information.  That is because, unlike a situation where they start reading from the very beginning, people were likely to be brought to a particular paragraph by clicking highlight buttons, and might move from paragraph to paragraph directed by highlights.

7.3.2.4 Efficiency in Reading Process

The analysis of interview transcripts shows most people belong to the "go to two ends" category, in which participants read the beginning and the ending of an article more thoroughly.  Analyses of transcripts corroborated the highlight data: in the baseline condition, Introduction and Discussion were the two components participants most attended to across all tasks, and the Methods component was the least attended to.  That may be one reason why the Methods task, "Learn how to", was the most difficult to do.  In accordance with Dillon (2004), the findings in the baseline condition indicate that the purpose of focusing on the Introduction and Discussion was to judge the relevance of information in the article, rather than to assimilate information in these two components.  Beyond the Introduction and Discussion, participants in the baseline condition relied more on document cues such as headings, first paragraphs, first sentences, and keywords, with the purpose of narrowing down reading as a first step.

Efficiency is a major issue for faculty and researchers, who are heavy readers.  The demand of completing five reading tasks within 90 minutes in the experimental study simulated this typical situation.
Compared with the baseline system, the experimental system affected participants' reading behavior by directing them to the components that really mattered for a particular task, rather than to the Introduction, the first paragraph of a section or the first sentence of a paragraph, which are the key places in natural reading as indicated by Loizides and Buchanan (2009).  One outstanding example is that the experimental system did help participants get to the most relevant component, Methods, for the "Learn how to" task.  From the interview transcripts, the ways people read differently with the experimental system could be categorized as less reading, selective reading and focused reading.  Less reading and selective reading are both related to efficiency.  Less reading was simply reading less by following the system's suggestions, while selective reading was actively reading less by selecting information from the sections or paragraphs the system highlighted.  In either case participants were directed to a pool of functional units closely related to a particular task.  With the experimental system, people attended to paragraphs, the smallest independent meaningful unit in the current design.

From the analysis of the interview transcripts, the remarks were largely positive.  Generally, people liked the efficiency of the experimental system.  They used a number of words equivalent to efficiency in describing the differences: "easier", "faster", "quicker", "simpler", "more efficient".  This indicates that efficiency was the major advantage of the experimental system over the baseline system.  Even if it had drawbacks in other respects, the perceived efficiency of the experimental system was acknowledged by almost all.

7.4 SUMMARY

This chapter has summarized and discussed the findings of this research in two sections: the development of the functional unit taxonomy, and the utilization of functional units.
With respect to the functional unit taxonomy, results show that a strong relationship exists between functional units within the four components and information use task types.  Results further show that the implementation of the functional unit taxonomy did improve reading outcomes and the reading process, and that the benefits varied with the experimental tasks.  The experimental system was significantly more useful for completing tasks requiring information from several places in the middle of an article.  Furthermore, for particular tasks the system was more useful for the efficiency of reading than for its effectiveness.  In contrast, for tasks requiring information from a single section at either end of an article, the experimental system might have negative effects.  Findings in the baseline condition reinforce what is already known about reading behavior in natural reading.  Findings in the experimental condition provide evidence that functional units can be utilized to affect reading behavior in a positive way.

Analysis of the reading process reveals that the experimental system was effective in that participants focused on the relevant functional units and used relevant functional units scattered across the article.  The system also provided a flexible way to move through these functional units.  However, the "move" did not always start from the primary functional units and progress to those in other categories, since people needed context information to make sense of the specific information.  The experimental system was efficient in that it saved the effort of locating information, so that readers were able to spend more time on focused reading.

8 CONCLUSION

8.1 OVERVIEW

The goal of this study was to explore how functions of the smallest information units can support the different uses of information from scholarly journal articles.
A functional unit taxonomy was developed that identifies the functions of units within psychology journal articles, and their relevance with respect to information tasks associated with journal articles.  This taxonomy was implemented in a prototype system, which showed effects in supporting information use in an experimental study.  In this concluding chapter, I reflect on the practical and theoretical significance of the study, its methodological contributions, and the implications for information design.  I also discuss some limitations and propose recommendations for future research.

8.2 CONTRIBUTIONS OF THE STUDY

8.2.1 Contributions to Document Component Use

The few empirical studies on the use of document components (Bishop, 1998, 1999; Bishop et al., 2000; Sandusky & Tenopir, 2008) take every logically distinct subdivision of an article as a component, including headings, author names, and external links, as well as article sections and subsections.  Furthermore, the focus of these studies is on the retrieval of these components individually.  In the functional unit taxonomy developed in this study, first, the functional units are organized in a hierarchical structure: each functional unit is a portion of one of the four IMRD components embedded in an article, and a functional unit is distinct because of its communicative functions in that component.  Second, the functional units are organized according to their relationship with information use tasks, and therefore the individual functional units are prioritized and associated with other functional units in the same and different components.  The outcome is a taxonomy in which the functional units form a network-like relationship for a particular task.  The application of this taxonomy directs reading in order from central to peripheral information within the same component, and from specific information in the primary component to general information in other components.
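The taxonomy just described can be thought of as a mapping from each task to prioritized groups of functional units. The following is a minimal sketch of one way such a structure might be represented in software; the task and unit names are drawn from the study, but the exact groupings shown here are illustrative, not the dissertation's full 41-unit taxonomy.

```python
# Illustrative fragment of a task -> categorized functional units mapping.
# Unit groupings are an assumption for demonstration, not the full taxonomy.
TAXONOMY = {
    "Refer to facts": {
        "primary": ["state findings"],
        "related": ["summarize results"],
        "additional_related": ["preview methods"],
    },
    "Learn how to": {
        "primary": ["tasks", "experimental procedures"],
        "related": ["justify methods", "preview methods", "participants"],
        "additional_related": ["state findings"],
    },
}

def units_for(task, max_category="related"):
    """Return functional units for a task, most relevant first,
    stopping after the requested category (central-to-peripheral order)."""
    order = ["primary", "related", "additional_related"]
    categories = TAXONOMY.get(task, {})
    selected = []
    for cat in order:
        selected.extend(categories.get(cat, []))
        if cat == max_category:
            break
    return selected

print(units_for("Refer to facts"))  # ['state findings', 'summarize results']
```

A reading interface could use such a lookup to decide which paragraphs to highlight first for a given task, expanding to the additional related category only when the reader asks for more.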
The results of testing show that signaling functional units supports each step in the reading process to some extent.  Starting from the core information supports reading efficiency by letting people spend more time on close reading rather than on navigating through articles for relevant information.  Moving through highly useful information supports reading effectiveness by enabling people to focus on specific information and to integrate pieces of relevant information across the article.  Though functional units were not as effective in aiding comprehension as they were in aiding navigation, close reading and information use, this might be remedied by an improved design connecting highly relevant information with its context in the article.  Overall, the findings show that functional units can be organized and presented purposefully for particular information uses.

In this study, a component is divided into a set of units with distinct functions, and the functional units are organized meaningfully into three categories.  This approach attempts to address a problem observed in previous studies using document components (Sandusky & Tenopir, 2008): identifying the minimum information necessary and determining whether an individual component can stand alone.  This research utilizes a novel system to enable a flexible reading environment in which readers may adjust their reading scope and follow implicit associations in the text, in a manner similar to hypertext.  This is especially important for faculty and researchers who need to read a great deal quickly and must fulfill tasks on a tight schedule.  It is also useful for novice scholars, or for those engaged in multidisciplinary research who may not be familiar with the scholarly genres of a particular discipline.  Overall, the user evaluation demonstrated the value of functional units in enhancing effective and efficient information use.
This study also leads to an understanding of the situations in which functional units provide greater benefits, and points to system design issues that can be remedied.  This research carries forward the important work of other scholars on document component use.  Table 8.1 outlines the contributions made by these scholars and the contribution made in this research.

Table 8.1: Contributions made in various studies on document component use

Study | Functional units embedded in components | Task-functional unit mapping | Identification of set of components | Development of a functional unit taxonomy | Implementation | Testing
Bishop (1998, 1999), Bishop et al. (2000) | - | - | * | - | * | *
Sandusky & Tenopir (2008) | - | - | * | - | * | *
Dillon (2004) | - | - | * | - | - | -
Vaughan & Dillon (1998) | - | - | * | - | - | -
Zhang (2011) | * | * | | * | * | *

Journal usage is discipline-dependent.  Even within journal publications, article genre may vary, e.g. theory pieces, review articles, data-based research articles, and shorter communications (Swales, 2004).  Though the taxonomy is developed from one genre in a specific domain, the psychology journal research article, the research methods and results of this research can be generalized to other genres and different disciplines.  In another genre of a different discipline, the set of functional units might vary, yet it is likely that a relationship still exists between functional units and information use tasks, and that this relationship can be utilized to benefit readers in the use of journal articles in that domain.

Though genre manifests itself within documents, people tend to share an understanding of genre as a categorization of documents.  The functional unit taxonomy may help people develop a mental model of the subcategories of document components, a model normally possessed only by those with expertise in a domain.  These effects of functional units might be applied to discourse types other than scholarly journals.
8.2.2 Theoretical Contribution

Through the lens of Swales' Genre Model and Sperber and Wilson's Relevance Theory, this research is expected to open up a way to facilitate the information use of journal research articles by focusing on the functions of the smallest information units, and to enrich the existing LIS literature on information use approached from Genre Theory and Relevance Theory.

The main contribution of this study to Genre Theory in LIS is that it turns attention to genre at the granular, within-document level.  The current focus on genre in information studies is at the level of the whole document, primarily on the identification of relevant documents for retrieval or navigation.  This research moves a step further to look at document components and to explore how to employ functional units within IMRD components for the information use of scholarly journal articles.  On the other hand, the notion of move analysis has long been limited to the pedagogy of academic writing, and has not been a focus of attention in information studies to date.  This study extends the idea of functional units, originating from the "moves" and "steps" used in analyzing academic papers, to the reading of digital documents.  Unlike move analysis, where a unit refers to a clause or sentence, this study adopted the paragraph as the functional unit because of the differences between reading and writing.  This design decision meant that the information provided was not specific enough for some participants, though it did help to locate specific information within an article.  It also implies that the granularity of article annotation can support different uses of the article.

The main contribution to Relevance Theory in LIS is that this study incorporates the concept of genre into the cognitive processing of relevant information by following a relevance-theoretic comprehension procedure.
Participants did attend more to the highly useful functional units from the several studies reported in an article.  What was unexpected was that participants did not follow a consistent pattern from the most relevant to less relevant information.  However, this observed behavior still conforms to Sperber and Wilson's Relevance-theoretic Comprehension Procedure, in that the extent of relevance to a reader varies at different stages of reading.  For example, the background of a concept or the general idea of an article is the most relevant at the beginning of reading, and becomes less and less relevant as more of the article is read.  The primary functional unit as signaled in the current design is the most relevant across the whole reading process, but not at the very beginning.  This suggests that the cognitive processing of relevant information in an article is fluid rather than static, and that participants' expectations of what would be relevant vary from stage to stage in the reading process, conforming to the dynamic nature of relevance.

8.2.3 Methodological Contribution

The methodological approach adopted in this research makes several contributions to future studies of this kind.  Since it is hard to identify which part of an article is read when conducting reading experiments, a highlighting method was chosen to quantify the amount of relevant text read versus the amount of overall text read.  Online highlighting was effective because it simulates a natural way of reading and annotating, and it provided useful points of reference for participants when they summarized their answers.

Instead of task scenarios, this study employed comprehension questions as a way to measure cognitive changes.  Participants were asked to answer a comprehension question after reading each of the five articles presented to them, and the questions represented the target content for the five information use task types.
With that question in mind, the participant was prompted to perform a particular task.  As one participant noted in the retrospective interview, "so depending on the question ... if they say to find for the data ... I thought it would be in the study results section.  And if they ask for like implications of the study, I would go for in the general discussion of the article" (P7).

The retrospective interview provided qualitative data for the experimental study, for which user evaluations are normally captured by quantitative methods.  In particular, narrating while viewing a replay can elicit participants' comments on their own behaviors without interfering with their performance, as may be the case with concurrent verbal protocols.

8.3 DESIGN IMPLICATIONS

For effective and efficient reading, we wish to read the least amount possible while reading the most relevant information possible.  The idea of functional units provides the inspiration for extracting the smallest information units and organizing the associated functional units in a meaningful way.  Such relationships can be modeled and implemented in a prototype journal system.  The current prototype journal system represents one possible implementation of a functional unit taxonomy, and the results of the study suggest possibilities for improving the design.  In a new design, the relevant information could be connected with its semantic or physical context in the article, and furthermore the relevant information could be presented at an even finer granularity.  One implementation issue is how the identification of functional units can be automated, since a large number of articles would need to be processed for the system to be of real use.  This depends on the development of an ontology, the annotators' markup, and software to support the annotation, which are a current focus in the areas of text mining and bioinformatics.
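To make the automation problem concrete, one very simple route is rule-based matching of cue phrases in paragraphs. The sketch below is hypothetical: the cue phrases and unit labels are illustrative assumptions, and a production system would instead rely on an ontology and trained discourse classifiers, as noted above.

```python
# Hypothetical sketch: labeling paragraphs with functional units by
# cue-phrase matching. Cue phrases here are illustrative assumptions.
CUE_PHRASES = {
    "justify methods": ["we chose", "was selected because"],
    "state findings": ["the results show", "we found", "significant difference"],
    "preview methods": ["the procedure was as follows", "participants were asked"],
}

def label_paragraph(paragraph):
    """Return the functional unit labels whose cue phrases occur in the paragraph."""
    text = paragraph.lower()
    return [unit for unit, cues in CUE_PHRASES.items()
            if any(cue in text for cue in cues)]

para = "The results show a significant difference between the two groups."
print(label_paragraph(para))  # ['state findings']
```

Even this naive approach illustrates why an agreed ontology matters: the unit labels must be shared between the annotation software and the reading interface that signals them.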
8.4 LIMITATIONS

One limitation is that the prototype system tested in this study was manually built by the researcher.  This study constitutes a proof of concept, but much further research and development work needs to be done in order to build a system capable of automatically identifying functional units and determining their associations with a range of tasks.  This is what is needed in order to test the generalizability of the effects of signaling functional units.

Another limitation of this study resulted from the logistics of conducting experimental user studies.  Only one experimental task was used to represent each task type, and one article and one comprehension question were used for each experimental task.  Thus some of the observed differences may have been the result of differences in the articles rather than the task types.  The system and task differences found need to be tested further with an expanded set of experimental tasks on a large document collection.

It is also true that functional units are more applicable to digital genres reproduced or adapted from the paper world, such as books, journals, newspapers, and blogs, than to newly emerged genres, since functional units are identifiable in a relatively stable rhetorical structure.  So the results of the study will be more applicable to certain types of reading environments than to others.

8.5 FUTURE RESEARCH

Based on the current dissertation research, future research is needed in the following areas: to improve the prototype system based on the participants' feedback, to build a real system from the prototype, to further analyze the relationships between functional units and information tasks, and to study how functional units can enhance journal reading for academic scholars in a naturalistic setting.

One obvious course for future research is system enhancement.
One major improvement for a new version of the system is to signal relevant information according to its function, rather than according to the category the functional units belong to, as in the current version.  Another major improvement is the signaling of background information on key concepts in the article.  The preliminary results of the experimental user study are promising and suggest that users may derive real benefit from such systems.  An important future goal is to build a viable system that can automatically identify relevant functional units and match them with particular information tasks.  Future research is needed to develop methods for automatic classification and discourse annotation of texts.

Specifying the relationships between functional units and information tasks is also an important issue.  The current set of tasks was used for testing the use of functional units in the study.  An expanded set of tasks could be identified and mapped to functional units.  The relationships among a set of functional units for a particular task also require further study.

Exciting possibilities for future research lie in examining how faculty members and researchers, who are serious and heavy readers of scholarly journals, interact with journal articles in multiple domains in an updated system in natural reading environments.  The ultimate goal of this study on functional units is to facilitate effective and efficient online reading of a variety of document genres.

8.6 SUMMARY

The published research on document components and their effects on information use is sparse.  This research studied the functional units within components and their associations with information use tasks from a theoretical perspective, from which a taxonomy was developed; the functional unit taxonomy was then empirically tested.
Results show that an individual functional unit has varying relevance to information use tasks, and has varying relevance to other functional units in the same or another component for a particular information use task.  Functional units can be identified, implemented, and utilized to benefit readers by supporting their effective and efficient use of scholarly journal articles.

The major contributions of this work are the identification of the functions of the smallest information units within journal article components, and the examination of how they can be utilized in journal reading.  This research suggests that individual functional units can be organized and presented to benefit readers' information use of journal articles.  This is the first major study to experimentally illuminate the significance of the functions of types of information within individual document components for information use.  This study represents one approach to providing adequate information as desired in an adaptive and adaptable reading environment.

BIBLIOGRAPHY

Adler, A., et al. (1998). A diary study of work-related reading: Design implications for digital reading devices. In Proceedings of CHI '98, 241-248.

Agre, P.E. (1998). Designing genres for new media: Social, economic, and political contexts. In S. Jones (Ed.), CyberSociety 2.0: Revisiting CMC and community. Available: http://polaris.gseis.ucla.edu/pagre/genre.html

Ahn, M. (2003). Exploring factors affecting users' link-following decisions and evaluation behavior during web browsing. Doctoral dissertation. University of Pittsburgh.

Andersen, J. (2008). The concept of genre in information studies. Annual Review of Information Science and Technology, 42, 339-367.

Askehave, I., & Nielsen, A.E. (2005). Digital genres: A challenge to traditional genre theory. Information Technology & People, 18 (2), 120-141.

Askehave, I., & Swales, J.M. (2001).
Genre identification and communicative purpose: A problem and a possible solution. Applied Linguistics, 22 (2), 195-212.

Bakhtin, M.M. (1986). The problem of speech genres. In M.M. Bakhtin, Speech genres and other late essays (C. Emerson & M. Holquist, Eds.) (pp. 60-102). Austin: University of Texas Press.

Bazerman, C. (2004). Speech acts, genres, and activity systems: How texts organize activity and people. In C. Bazerman & P. Prior (Eds.), What writing does and how it does it: An introduction to analyzing texts and textual practices (pp. 309-339). Mahwah, NJ: Lawrence Erlbaum Associates.

Belkin, N.J. (2010). On the evaluation of interactive information retrieval systems. In The Janus faced scholar: A festschrift in honour of Peter Ingwersen (pp. 13-21). Copenhagen: Det Informationsvidenskabelige Akademi.

Benyon, D. (2007). Information architecture and navigation design for Web sites. In P. Zaphiris & S. Kurniawan (Eds.), Human computer interaction research in Web design and evaluation (pp. 165-184). Hershey, PA: Idea Group Publishing.

Berkenkotter, C., & Huckin, T.N. (1995). Genre knowledge in disciplinary communication: Cognition/culture/power. Hillsdale, NJ: Lawrence Erlbaum Associates.

Bhatia, V.K. (1993). Analysing genre: Language use in professional settings. London: Longman.

Bishop, A.P. (1998). Digital libraries and knowledge disaggregation: The use of journal article components. In Digital Libraries 98: Proceedings of the 3rd ACM Conference on Digital Libraries, 29-39.

Bishop, A.P. (1999). Document structure and digital libraries: How researchers mobilize information in journal articles. Information Processing & Management, 35, 255-279.

Bishop, A.P., et al. (2000). Digital libraries: Situating use in changing information infrastructure. Journal of the American Society for Information Science, 51 (4), 394-413.

Brett, P. (1994). A genre analysis of the results section of sociology articles. English for Specific Purposes, 13 (1), 47-59.
Breure, L. (2001). Development of the genre concept. Available: http://people.cs.uu.nl/leen/GenreDev/GenreDevelopment.htm

Bruce, I. (2008). Cognitive genre structures in Methods sections of research articles: A corpus study. Journal of English for Academic Purposes, 7 (1), 38-54.

Bystrom, K., & Hansen, P. (2005). Conceptual framework for tasks in information studies. Journal of the American Society for Information Science and Technology, 56 (10), 1050-1061.

Choo, C.W., Detlor, B., & Turnbull, D. (2000). Web work: Information seeking and knowledge work on the World Wide Web. Dordrecht, Netherlands: Kluwer Academic Publishers.

Choo, C.W. (2002). Information management for the intelligent organization: The art of scanning the environment. 3rd ed. Medford, NJ: Information Today.

Conklin, J. (1987). Hypertext: An introduction and survey. IEEE Computer, 20 (9), 17-41.

Cosijn, E., & Ingwersen, P. (2000). Dimensions of relevance. Information Processing & Management, 36, 533-550.

Crestani, F., Vegas, J., & de la Fuente, P. (2004). A graphical user interface for the retrieval of hierarchically structured documents. Information Processing & Management, 40, 269-289.

Creswell, J.W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. 3rd ed. Thousand Oaks, CA: SAGE.

Crookes, G. (1986). Towards a validated analysis of scientific text structures. Applied Linguistics, 7 (1), 57-70.

Cross, C., & Oppenheim, C. (2006). A genre analysis of scientific abstracts. Journal of Documentation, 62 (4), 428-446.

Crowston, K., & Kwasnik, B.H. (2003). Can document-genre metadata improve information access to large digital collections? Library Trends, 52 (2), 345-361.

Crowston, K., & Williams, M. (1997). Reproduced and emergent genres of communication on the World Wide Web. In Proceedings of the 30th Hawaii International Conference on System Sciences.

Crowston, K., & Williams, M. (2000).
Reproduced and emergent genres of communication on the World Wide Web. The Information Society, 16 (3), 201-215.

Dervin, B. (1992). From the mind's eye of the user: The sense-making qualitative-quantitative methodology. In J. Glazier & R. Powell (Eds.), Qualitative research in information management (pp. 61-84). Englewood, CO: Libraries Unlimited.

Dillon, A. (1992). Reading from paper versus screens: A critical review of the empirical literature. Ergonomics, 35 (10), 1297-1326.

Dillon, A., & Schaap, D. (1996). Expertise and the perception of shape in information. Journal of the American Society for Information Science, 47 (10), 786-788.

Dillon, A., & Vaughan, M. (1997). "It's the journey and the destination": Shape and the emergent property of genre in evaluating digital documents. New Review of Multimedia and Hypermedia, 3, 91-106.

Dillon, A. (1999). TIME - A multi-level framework for the design and evaluation of digital libraries. International Journal of Digital Libraries, 2 (2/3), 170-177.

Dillon, A. (2000). Spatial-semantics: How users derive shape from information space. Journal of the American Society for Information Science, 51 (6), 521-528.

Dillon, A., & Gushrowski, B.A. (2000). Genres and the web: Is the personal home page the first uniquely digital genre? Journal of the American Society for Information Science, 51 (2), 202-205.

Dillon, A. (2002). Writing as design: Hypermedia and the shape of information space. In R. Bromme & J. Stahl (Eds.), Writing hypertext and learning: Conceptual and empirical approaches (pp. 63-72). London: Pergamon.

Dillon, A. (2004). Designing usable electronic text. 2nd ed. Boca Raton, FL: CRC Press.

Dillon, A. (2008). Why information has shape. Bulletin of the American Society for Information Science and Technology, 34 (5), 17-19.

Dubois, B.L. (1997). The biomedical discussion section in context. Greenwich, CT: Ablex.

Edwards, D.M., & Hardman, L. (1999).
"Lost in hyperspace": Cognitive mapping and navigation in a hypertext environment. In R. McAleese (Ed.), Hypertext: Theory into practice (pp. 90-105). Exeter, UK: Intellect.

Escandell-Vidal, V. (2004). Norms and principles: Putting social and cognitive pragmatics together. In R.M. Reiter & M.E. Placencia (Eds.), Current trends in the pragmatics of Spanish (pp. 347-371). Amsterdam: John Benjamins.

Esperet, E. (1996). Notes on hypertext, cognition, and language. In J.F. Rouet, J.J. Levonen, A. Dillon & R.J. Spiro (Eds.), Hypertext and cognition (pp. 149-155). Mahwah, NJ: Lawrence Erlbaum Associates.

Fleming, J. (1998). Web navigation: Designing the user experience. Sebastopol, CA: O'Reilly.

Foltz, P.W. (1996). Comprehension, coherence, and strategies in hypertext and linear text. In J.F. Rouet, J.J. Levonen, A. Dillon & R.J. Spiro (Eds.), Hypertext and cognition (pp. 109-136). Mahwah, NJ: Lawrence Erlbaum Associates.

Freedman, A., & Medway, P. (1994). Locating genre studies: Antecedents and prospects. In A. Freedman & P. Medway (Eds.), Genre and the new rhetoric (pp. 1-20). London: Taylor & Francis.

Freund, L. (2008a). Exploiting task-document relations in support of information retrieval in the workplace. Doctoral dissertation. University of Toronto.

Freund, L. (2008b). Situating relevance through task-genre relationships. Bulletin of the American Society for Information Science and Technology, 34 (5), 23-26.

Glover, E.J., et al. (2001). Web search - your way. Communications of the ACM, 44 (12), 97-102.

Gonzalez, R.A., & Sanchez, C.F. (2007). The bank company website from a genre perspective. In S. Posteguillo, M.J. Esteve & M.L. Gea-Valor (Eds.), The texture of Internet: Netlinguistics in progress (pp. 92-115). Cambridge: Cambridge Scholars Publishing.

Guan, Z.W., et al. (2006).
The validity of the stimulated retrospective think-aloud method as measured by eye tracking. In CHI 2006 Proceedings, 1253-1262.

Harter, S.P. (1992). Psychological relevance and information science. Journal of the American Society for Information Science, 43 (9), 602-615.

Herring, S.C., et al. (2005). Weblogs as a bridging genre. Information Technology & People, 18 (2), 142-171.

Hinton, P.R., et al. (2004). SPSS explained. London: Routledge.

Holmes, R. (1997). Genre analysis, and the social sciences: An investigation of the structure of research article discussion sections in three disciplines. English for Specific Purposes, 16 (4), 321-337.

Hopkins, A., & Dudley-Evans, T. (1988). A genre-based investigation of the discussion sections in articles and dissertations. English for Specific Purposes, 7, 113-121.

Huck, S.W. (2007). Reading statistics and research. 5th ed. Boston: Pearson.

Jul, S., & Furnas, G.W. (1997). Navigation in electronic worlds: A CHI 97 workshop. SIGCHI Bulletin, 29 (4). Available: http://www.sigchi.org/bulletin/1997.4/jul.html

Kanoksilapatham, B. (2003). A corpus-based investigation of scientific research articles: Linking move analysis with multidimensional analysis. Doctoral dissertation. Georgetown University.

Kanoksilapatham, B. (2005). Rhetorical structure of biochemistry research articles. English for Specific Purposes, 24 (5), 269-292.

King, D.W., & Tenopir, C. (1999). Using and reading scholarly literature. Annual Review of Information Science and Technology, 34, 423-477.

Krippendorff, K. (2004). Content analysis: An introduction to its methodology. 2nd ed. Thousand Oaks, CA: SAGE.

Kwasnik, B.H., & Crowston, K. (2005). Genres of digital documents. Information Technology & People, 18 (2), 76-88.

Kopak, R., Freund, L., & O'Brien, H. (2011). Digital information interaction as semantic navigation. In A. Foster & P. Rafferty (Eds.), Innovations in information retrieval: Perspectives for theory and practice.
Laine, P. (2003). Explicitness and interactivity. In ACM International Conference Proceeding Series, 49, 421-426.

Lewin, B.A., Fine, J., & Young, L. (2001). Expository discourse: A genre-based approach to social science research texts. London: Continuum.

Li, Y.L., & Belkin, N.J. (2008). A faceted approach to conceptualizing tasks in information seeking. Information Processing & Management, 44, 1822-1837.

Loizides, F., & Buchanan, G. (2009). An empirical study of user navigation during document triage. ECDL 2009, 138-149.

Marshall, C.C. (2009). Reading and writing the electronic book. Morgan & Claypool.

Marza, N.E. (2007). The digital representation of an industrial cluster through its corporate website image: Online discourse and genre analysis. In S. Posteguillo, M.J. Esteve & M.L. Gea-Valor (Eds.), The texture of Internet: Netlinguistics in progress (pp. 46-74). Cambridge: Cambridge Scholars Publishing.

Miller, C.R. (1994). Genre as social action. In A. Freedman & P. Medway (Eds.), Genre and the new rhetoric (pp. 23-42). London: Taylor & Francis.

Nwogu, K.N. (1997). The medical research paper: Structure and functions. English for Specific Purposes, 16 (2), 119-138.

Orlikowski, W.J., & Yates, J. (1994). Genre repertoire: The structuring of communicative practices in organizations. Administrative Science Quarterly, 39 (4), 541-574.

Pare, A., & Smart, G. (1994). Observing genres in action: Towards a research methodology. In A. Freedman & P. Medway (Eds.), Genre and the new rhetoric (pp. 146-154). London: Taylor & Francis.

Pirolli, P., & Card, S. (1999). Information foraging. Psychological Review, 106 (4), 643-675.

Pirolli, P. (2004). The use of proximal information scent to forage for distal content on the World Wide Web. In A. Kirlik (Ed.), Working with technology in mind: Brunswikian resources for cognitive science and engineering. New York: Oxford University Press.

Posteguillo, S. (1999).
The schematic structure of computer science research articles. English for Specific Purposes, 18 (2), 139-160.

Rosso, M.A. (2005). Using genre to improve web search. Doctoral dissertation. University of North Carolina at Chapel Hill.

Rosso, M.A. (2008). User-based identification of web genres. Journal of the American Society for Information Science and Technology, 59 (7), 1053-1072.

Roussinov, D., et al. (2001). Genre based navigation on the web. In Proceedings of the 34th Hawaii International Conference on System Sciences.

Rowlands, I. (2007). Electronic journals and user behavior: A review of recent research. Library & Information Science Research, 29, 369-396.

Samraj, B. (2002). Introductions in research articles: Variations across disciplines. English for Specific Purposes, 21, 1-17.

Sandusky, R.J., & Tenopir, C. (2008). Finding and using journal-article components: Impacts of disaggregation on teaching and research practice. Journal of the American Society for Information Science and Technology, 59 (6), 970-982.

Saracevic, T. (1996). Relevance reconsidered. In Information science: Integration in perspectives. Proceedings of the Second Conference on Conceptions of Library and Information Science (CoLIS 2), 201-218.

Saracevic, T., & Kantor, P.B. (1997). Studying the value of library and information services. Part I. Establishing a theoretical framework. Journal of the American Society for Information Science, 48 (6), 527-542.

Saracevic, T. (2007a). Relevance: A review of the literature and a framework for thinking on the notion in information science. Part II: Nature and manifestations of relevance. Journal of the American Society for Information Science and Technology, 58 (13), 1915-1933.

Saracevic, T. (2007b). Relevance: A review of the literature and a framework for thinking on the notion in information science. Part III: Behavior and effects of relevance.
Journal of the American Society for Information Science and Technology, 58 (13), 2126-2144.

Sauperl, A., Klasinc, J., & Luzar, S. (2008). Components of abstracts: Logical structure of scholarly abstracts in pharmacology, sociology, and linguistics and literature. Journal of the American Society for Information Science and Technology, 59 (9), 1420-1432.

Schamber, L., Eisenberg, M.B., & Nilan, M.S. (1990). A re-examination of relevance: Toward a dynamic, situational definition. Information Processing & Management, 26 (6), 755-776.

Shepherd, M., & Watters, C. (1998). The evolution of cybergenres. In Proceedings of the 31st Annual Hawaii International Conference on System Sciences, 97-109.

Spence, R. (1999). A framework for navigation. International Journal of Human-Computer Studies, 51, 919-945.

Sperber, D., & Wilson, D. (1995). Relevance: Communication and cognition. 2nd ed. Oxford: Blackwell.

Sperber, D., & Wilson, D. (1997). Remarks on relevance theory and the social sciences. Multilingua, 16, 145-151.

Spinuzzi, C. (2003). Tracing genres through organizations: A sociocultural approach to information design. Cambridge, MA: MIT Press.

Storrer, A. (2002). Coherence in text and hypertext. Document Design, 3 (2), 156-168.

Swales, J.M. (1981). Aspects of article introductions. Aston ESP Research Reports No. 1. Birmingham: The University of Aston in Birmingham.

Swales, J.M. (1990). Genre analysis: English in academic and research settings. Cambridge, UK: Cambridge University Press.

Swales, J.M. (2004). Research genres: Explorations and applications. Cambridge, UK: Cambridge University Press.

Swales, J.M., & Feak, C.B. (2004). Academic writing for graduate students: Essential tasks and skills. 2nd ed. Ann Arbor: The University of Michigan Press.

Symonenko, S. (2007). Websites through genre lenses: Recognizing emergent regularities in websites content structure. Doctoral dissertation. Syracuse University.

Taylor, R.S.
(1991). Information use environments. Progress in Communication Sciences, 10, 217-255.

Tenopir, C. (2003). Use and users of electronic library resources: An overview and analysis of recent research studies. Washington, DC: Council on Library and Information Resources.

Tenopir, C., et al. (2009a). Electronic journals and changes in scholarly article seeking and reading patterns. Aslib Proceedings: New Information Perspectives, 61 (1), 5-32.

Tenopir, C., et al. (2009b). Variations in article seeking and reading patterns of academics: What makes a difference? Library & Information Science Research, 31 (3), 139-148.

Thompson, D.K. (1993). Arguing for experimental "facts" in science: A study of research article results sections in biochemistry. Written Communication, 10 (1), 106-128.

Thuring, M., Haake, J.M., & Hannemann, J. (1991). What's Eliza doing in the Chinese room? Incoherent hyperdocuments - and how to avoid them. In Hypertext '91 Proceedings, 161-177.

Thuring, M., Hannemann, J., & Haake, J.M. (1995). Hypermedia and cognition: Designing for comprehension. Communications of the ACM, 38 (8), 57-66.

Tombros, A., Ruthven, I., & Jose, J.M. (2005). How users assess web pages for information seeking. Journal of the American Society for Information Science and Technology, 56 (4), 327-344.

Toms, E.G., & Campbell, D.G. (1999). Genre as interface metaphor: Exploiting form and function in digital environments. In Proceedings of the 32nd Hawaii International Conference on System Sciences.

Toms, E.G. (2001). Recognizing digital genre. Bulletin of the American Society for Information Science and Technology, 27 (2), 20-22.

Toms, E., et al. (2005). Searching for relevance in the relevance of search. In F. Crestani & I. Ruthven (Eds.), CoLIS 2005, LNCS 3507, 59-78.

Tosca, S.P. (2000). A pragmatics of links. Journal of Digital Information, 1 (6).
Available: http://jodi.tamu.edu/Articles/v01/i06/Pajares/

Unger, C. (2002). Cognitive-pragmatic explanations of socio-pragmatic phenomena: The case of genre. EPICS I Symposium.

Unger, C. (2006). Genre, relevance, and global coherence: The pragmatics of discourse type. New York: Palgrave Macmillan.

Vakkari, P. (1997). Information seeking in context: A challenging metatheory. In Proceedings of an International Conference on Research in Information Needs, Seeking and Use in Different Contexts, 451-464.

Van der Henst, J.B., & Sperber, D. (2004). Testing the cognitive and communicative principles of relevance. In I.A. Noveck & D. Sperber (Eds.), Experimental Pragmatics (pp. 141-171). New York: Palgrave Macmillan.

Vaughan, M.W. (1999). Identifying regularities in users' conceptions of information spaces: Designing for structural genre conventions and mental representations of structure for web-based newspapers. Doctoral dissertation. Indiana University.

Vaughan, M.W., & Dillon, A. (1998). The role of genre in shaping our understanding of digital documents. In Proceedings of the 61st Annual Meeting of the American Society for Information Science, 559-566.

Vaughan, M.W., & Dillon, A. (2006). Why structure and genre matter for users of digital information: A longitudinal experiment with readers of a Web-based newspaper. International Journal of Human-Computer Studies, 64, 502-526.

Vora, P.R., & Helander, M.G. (1997). Hypertext and its implications for the Internet. In M.G. Helander, T.K. Landauer & P.V. Prabhu (Eds.), Handbook of human-computer interaction (pp. 877-914). 2nd ed. Amsterdam: Elsevier.

White, H.D. (2007). Combining bibliometrics, information retrieval, and relevance theory. Part I: First examples of a synthesis. Journal of the American Society for Information Science and Technology, 58 (4), 536-559.

Wilson, D. (1994). Relevance and understanding. In G. Brown, et al. (Eds.), Language and understanding (pp. 35-58).
Oxford: Oxford University Press.

Wilson, D. (2001). Preface. In Z.R. He & Y.P. Ran (Eds.), Pragmatics and cognition: Relevance theory (pp. 1-16). Beijing: Foreign Language Teaching and Research Press.

Wilson, D., & Sperber, D. (2004). Relevance theory. In L.R. Horn & G. Ward (Eds.), The handbook of pragmatics (pp. 607-632). Oxford: Blackwell.

Wilson, T.D. (1994). Information needs and uses: Fifty years of progress? In B.C. Vickery (Ed.), Fifty years of information progress: A journal of documentation review (pp. 15-51). London: Aslib.

Wilson, T.D. (2000). Human information behavior. Informing Science, 3 (2), 49-55.

Wright, P. (1993). To jump or not to jump: Strategy selection while reading electronic texts. In C. McKnight, A. Dillon & J. Richardson (Eds.), Hypertext: A psychological perspective. Chichester, West Sussex: Ellis Horwood. Available: http://telecaster.lboro.ac.uk/HaPP/chapter6.html

Xu, Y.J. (2007). Relevance judgment in epistemic and hedonic information searches. Journal of the American Society for Information Science and Technology, 58 (2), 179-189.

Yeung, P.C.K., Freund, L., & Clarke, C.L.A. (2007). X-Site: A workplace search tool for software engineers. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 900.

Yus, F. (2006). Relevance Theory. In K. Brown (Ed.), Encyclopedia of language and linguistics (pp. 512-519). 2nd ed. Amsterdam: Elsevier.

Yus, F. (2007). Weblogs: Web pages in search of a genre? In S. Posteguillo, M.J. Esteve & M.L. Gea-Valor (Eds.), The texture of Internet: Netlinguistics in progress (pp. 118-142). Cambridge: Cambridge Scholars Publishing.
APPENDICES

APPENDIX 1: REPRESENTATIVE MOVE STRUCTURES

Representative move structures of Introduction component

Swales (1990). Corpus: 'hard' sciences, 16 articles; social sciences, 16 articles; life & health sciences, 16 articles.
  Move 1: Establishing a territory
    Step 1 Claiming centrality, and/or
    Step 2 Making topic generalization(s), and/or
    Step 3 Reviewing items of previous research
  Move 2: Establishing a niche
    Step 1A Counter-claiming, or
    Step 1B Indicating a gap, or
    Step 1C Question-raising, or
    Step 1D Continuing a tradition
  Move 3: Occupying the niche
    Step 1A Outlining purposes, or
    Step 1B Announcing present research
    Step 2 Announcing principal findings
    Step 3 Indicating RA structure

Swales (2004). Revised CARS model.
  Move 1: Establishing a territory (citations required)
    Topic generalizations of increasing specificity
  Move 2: Establishing a niche (citations possible)
    Step 1A Indicating a gap, or Step 1B Adding to what is known
    Step 2 (optional) Presenting positive justification
  Move 3: Presenting the present work (citations possible)
    Step 1 (obligatory) Announcing present research descriptively and/or purposively
    Step 2 (optional) Presenting RQs or hypotheses
    Step 3 (optional) Definitional clarifications
    Step 4 (optional) Summarizing methods
    Step 5 (PISF) Announcing principal outcomes
    Step 6 (PISF) Stating the value of the present research
    Step 7 (PISF) Outlining the structure of the paper

Lewin, Fine, & Young (2001). Corpus: social science, 12 articles.
  Move 1: Claiming relevance of field
    Obligatory:
      a. Asserting relevance of field of which research is a part
      b. Reporting what is known about phenomena under study
    Optional:
      a. Making assertions about the research process of others
      b. Reporting terminology conventions
      c. Reporting conclusions drawn by previous authors
      d. Drawing [own] conclusions about the research of others
      e. Metacomments
      f. Narrowing parameters of field
  Move 2: Establishing the gap present research is meant to fill
    Obligatory: Pointing out deficiencies in the present state of knowledge
    Optional:
      a. Positing an ideal way to fill the gap that has just been created
      b. Mitigating: pointing out positive contribution of previous research
      c. Reporting what is known about phenomena under study
  Move 3: Previewing authors' new accomplishments
    Obligatory: Stating purpose of present study or contents of article
    Optional:
      a. Positing an ideal way to fill the gap that has just been created
      b. Reporting what is known about phenomena under study
      c. Justifying hypotheses
      d. Disclosing whether hypotheses have been confirmed or not
      e. Summarizing methods
      f. Presenting hypotheses or research questions

Nwogu (1997). Corpus: medicine, 30 articles from 5 journals.
  Move 1: Presenting background information
    (1) Reference to established knowledge in the field
    (2) Reference to main research problems
  Move 2: Reviewing related research
    (1) Reference to previous research
    (2) Reference to limitations of previous research
  Move 3: Presenting new research
    (1) Reference to research purpose
    (2) Reference to main research procedure

Kanoksilapatham (2005). Corpus: biochemistry, 60 articles from 5 journals.
  Move 1: Announcing the importance of the field
    Step 1 Claiming the centrality of the topic
    Step 2 Making topic generalizations
    Step 3 Reviewing previous research
  Move 2: Preparing for the present study
    Step 1 Indicating a gap
    Step 2 Raising a question
  Move 3: Introducing the present study
    Step 1 Stating purpose(s)
    Step 2 Describing procedures
    Step 3 Presenting findings

Representative move structures of Results component

Brett (1994). Corpus: sociology, 20 articles from 5 journals.
  Metatextual categories: Pointer; Structure of section
  Presentation categories: Procedural; Hypothesis restated; Statement of finding; Substantiation of finding; Non-validation of finding
  Comment categories: Explanation of finding; Comparison of finding with literature; Evaluation of finding re: hypotheses; Further question(s) raised by finding; Implications of finding; Summarising

Thompson (1993). Corpus: biochemistry, 16 articles from 1 author; 20 articles from 2 journals.
  1. Methodological justifications
  2. Interpretations of results
  3. Evaluative comments on data
  4. Citing agreement with preestablished studies
  5. Pointing out/explaining discrepancies
  6. Admitting interpretative perplexities
  7. Calls for further research

Nwogu (1997). Corpus: medicine, 30 articles from 5 journals.
  Move 1: Indicating consistent observation
    (1) Highlighting overall observation
    (2) Indicating specific observations
    (3) Accounting for observations made
  Move 2: Indicating non-consistent observations

Kanoksilapatham (2005). Corpus: biochemistry, 60 articles from 5 journals.
  Move 1: Stating procedures
    Step 1 Describing aims and purposes
    Step 2 Stating research questions
    Step 3 Making hypotheses
    Step 4 Listing procedures or methodological techniques
  Move 2: Justifying procedures or methodology
    Step 1 Citing established knowledge of the procedure
    Step 2 Referring to previous research
  Move 3: Stating results
    Step 1 Substantiating results
    Step 2 Invalidating results
  Move 4: Stating comments on the results
    Step 1 Explaining the results
    Step 2 Making generalizations or interpretations of the results
    Step 3 Evaluating the current findings
    Step 4 Stating limitations
    Step 5 Summarizing

Representative move structures of Discussion component

Hopkins & Dudley-Evans (1988). Corpus: natural science, 12 texts (MSc biology dissertations & irrigation and drainage conference articles).
  1. Background information
  2. Statement of result
  3. (Un)expected outcome
  4. Reference to previous research (Comparison)
  5. Explanation of unsatisfactory result
  6. Exemplification
  7. Deduction
  8. Hypothesis
  9. Reference to previous research (Support)
  10. Recommendation
  11. Justification

Holmes (1997). Corpus: history, 10 articles; political science, 10 articles; sociology, 10 articles, from 3 journals.
  1. Background information
  2. Statement of result
  3. (Un)expected outcome
  4. Reference to previous research
  5. Explanation of unsatisfactory result
  6. Generalization
  7. Recommendation
  8. Outlining parallel or subsequent developments

Lewin, Fine, & Young (2001). Corpus: social science, 12 articles.
  Move A: Report accomplishments
    Pre-head: announce that findings will follow
    Head: report findings
  Move B: Evaluate congruence of findings with other criteria
    Head (one or more): express superiority of present research to past research; express consistency with past research; express inconsistency with past research; express consistency with hypothesis
    Post-head: give specifications of past research
  Move C: Offer interpretation
    Pre-head: announce that interpretation will follow
    Pre-head: report what is known about general phenomena under study
    Head: offer hypothesis
    Post-head: claim support for hypothesis
  Move D: Ward off counterclaims
    Head:
      A. raise counterclaim
      B. respond: dismiss (1) evaluate congruence (2) report findings
    Post-head: offer conclusion
  Move E: State implications
    Pre-head: review methods
    Pre-head: speculate
    Head: recommend further research
    Post-head: justify recommendation
    Post-head: promise to carry out recommendations

Dubois (1997). Corpus: biomedical, 20 articles.
  Result; Literature citation; Common knowledge; Hypothesis; Conclusion (Deduction); Comment; Materials and Methods; Metatext

Nwogu (1997). Corpus: medicine, 30 articles from 5 journals.
  Move 1: Highlighting overall research outcome
  Move 2: Explaining specific research outcomes
    (1) Stating a specific outcome
    (2) Interpreting the outcome
    (3) Indicating significance of the outcome
    (4) Contrasting present and previous outcomes
    (5) Indicating limitations of outcomes
  Move 3: Stating research conclusions
    (1) Indicating research implications
    (2) Promoting further research

Kanoksilapatham (2005). Corpus: biochemistry, 60 articles from 5 journals.
  Move 1: Contextualizing the study
    Step 1 Describing established knowledge
    Step 2 Presenting generalizations, claims, deductions, or research gaps
  Move 2: Consolidating results
    Step 1 Restating methodology (purposes, research questions, hypotheses restated, and procedures)
    Step 2 Stating selected findings
    Step 3 Referring to previous literature
    Step 4 Explaining differences in findings
    Step 5 Making overt claims or generalizations
    Step 6 Exemplifying
  Move 3: Stating limitations of the study
    Step 1 Limitations about the findings
    Step 2 Limitations about the methodology
    Step 3 Limitations about the claims made
  Move 4: Suggesting further research (optional)

APPENDIX 2: READING LEVEL OF JOURNAL ARTICLES USED IN STUDY

Reading level of twelve sample articles in Phase I study

# | Article | Journal | Publication Year | Words | Sentences | Flesch Reading Ease Level | Flesch-Kincaid Grade Level
1 | Encoding specificity revisited: The role of semantics | Canadian Journal of Experimental Psychology
2001 9420 400 25.19 15.60 2 The discrepancy-attribution hypothesis: I. The heuristic basis of feelings and familiarity Journal of Experimental Psychology: Learning, Memory, and Cognition 2001 10561 553 40.14 12.41 3 Implementation intentions and facilitation of prospective memory Psychological Science 2001 4299 182 34.95 14.25 4 Incidental formation of episodic associations: The importance of sentential context Memory & Cognition 2003 8863 365 13.50 17.41 5 Dissociations between implicit and explicit memory in children: The role of strategic processing and the knowledge base Journal of Experimental Child Psychology 2003 15630 906 10.93 16.02 6 Semantic context influences memory for verbs more than memory for nouns  Memory & Cognition 2004 9627 387 31.50 15.05 7 The capacity of visual short-term memory is set both by visual information load and by number of objects Psychological Science 2004 5035 225 38.78 13.41 	224??# Article  Journal Publication Year Words Sentences Flesch Reading Ease Level Flesch-Kincaid Grade Level 8 The ?one-shot? 
hypothesis for context storage Journal of Experimental Psychology: Learning, Memory, and Cognition 2005 12887 631 40.45  12.69  9 Posterior parietal cortex activity predicts individual differences in visual short-term memory capacity Cognitive, Affective, & Behavioral Neuroscience 2005 9043 655 38.06 11.38 10 Age-related declines in context maintenance and semantic short-term memory The Quarterly Journal Of Experimental Psychology 2005 11067 619 41.39 11.93 11 An investigation of everyday prospective memory Memory & Cognition 1998 9626 482 34.27 13.44 12 The role of attention during encoding in implicit and explicit memory Journal of Experimental Psychology: Learning, Memory, and Cognition 1998 19095 1167 30.81 13.03    	225??Reading level of five journal articles in Phase II study  Articles Words Sentences Syllables Average Syllables per WordAverage Words per Sentence Flesch Reading Ease Level Flesch-Kincaid Grade Level Relatively fast! Efficiency advantages of comparative thinking 21106 1116 38274 1.81 18.91 34.22 13.18 Long-term memory for the terrorist attack of September 11: Flashbulb memories, event memories, and the factors that influence their retention 14146 763 24269 1.72 18.54 42.88 11.88 Discounting future green: Money versus the environment 11802 584 20828 1.76 20.21 37.02 13.12 Song recognition without identification: Why people cannot ?name that tune? but can recognize it as familiar 14312 706 24825 1.73 20.27 39.52 12.78 Agenda-based regulation of study-time allocation: When agendas override item-based monitoring 14957 728 27091 1.81 20.55 32.75 13.80    	226??APPENDIX 3: VALIDATION SURVEY EMAIL ADVERTISEMENT   Hi,  As part of my dissertation research at the School of Library, Archival and Information Studies, I am conducting a study on identifying types of information for electronic scholarly journal use.  
The purpose of validation survey is to validate the relationships between types of information and uses in the case of psychology journal research articles.  Participants will be asked to do two questionnaires online separately: in the first questionnaire, you are asked to rank information uses of electronic scholarly journal articles, and indicate to what extent you agree with different types of information contained in components of Introduction, Methods, Results, and Discussions; in the second questionnaire, you are asked to identify how relevant distinct types of information are for each use scenario.   Each of two questionnaires will take about 30 minutes to complete.  Please consider taking part in this interesting survey and getting $10!  If you would like to participate, to support the research, or because you would like to help out a colleague, please respond to this email address [XXXXXXXX@interchange.ubc.ca].  If you can recommend other graduate students who can participate, please forward this message to that person.  Please do not hesitate to ask any questions about this study.   Thanks for joining in!    Lei Zhang PhD Candidate School of Library, Archival and Information Studies University of British Columbia    	227??APPENDIX 4: VALIDATION SURVEY I: TYPES OF INFORMATION & USES   	228?? 	229?? 	230?? (This is repeated for each of the four sections.) 	231??  	232??APPENDIX 5: VALIDATION SURVEY II: RELATIONSHIPS BETWEEN TYPES OF INFORMATION AND USES    	233??  	234?? 	235?? 	236?? (This is repeated for each of the six scenarios.)      	
APPENDIX 6: DTD AND SAMPLE XML DOCUMENT

DTD

<?xml version="1.0" encoding="UTF-8"?>

<!ELEMENT index (title, paragraph+, paragraph0+, paragraph1+, paragraph2+, continue)>
<!ELEMENT continue (#PCDATA)>
<!ELEMENT paragraph0 (#PCDATA)>
<!ELEMENT paragraph1 (#PCDATA)>
<!ELEMENT paragraph2 (#PCDATA)>

<!ELEMENT article (title, author+, affiliation+, abstract, body, notes, references)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT author (#PCDATA)>
<!ELEMENT affiliation (#PCDATA)>
<!ELEMENT abstract (#PCDATA)>
<!ELEMENT notes (item+)>
<!ELEMENT references (item+)>
<!ELEMENT item (#PCDATA)>

<!ELEMENT body (introduction+, methods+, results+, discussion+)>
<!ELEMENT introduction (section)>
<!ELEMENT methods (section)>
<!ELEMENT results (section)>
<!ELEMENT discussion (section)>
<!ELEMENT section (subsection+)>

<!ELEMENT subsection (heading*, heading2*, subheading*, paragraphs+)>
<!ELEMENT heading (#PCDATA)>
<!ELEMENT heading2 (#PCDATA)>
<!ELEMENT subheading (#PCDATA)>
<!ELEMENT paragraphs (id?, link?, function*, subheading2?, paragraph)>
<!ELEMENT id (#PCDATA)>
<!ELEMENT link (#PCDATA)>
<!ATTLIST link to CDATA #REQUIRED>
<!ELEMENT function (#PCDATA)>
<!ELEMENT subheading2 (#PCDATA)>

<!ELEMENT paragraph (#PCDATA | presup | sup | postsup | preblock | block | postblock)*>
<!ELEMENT presup (#PCDATA)>
<!ELEMENT sup (#PCDATA)>
<!ELEMENT postsup (#PCDATA)>
<!ELEMENT preblock (#PCDATA)>
<!ELEMENT block (#PCDATA)>
<!ELEMENT postblock (#PCDATA)>

XML Document (Illustrated by article for "Learn how to" task)

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="task4.1.xslt"?>
<!DOCTYPE article SYSTEM "functionalUnit.dtd">

<article>
    <title>Discounting Future Green: Money Versus the Environment</title>
    <author>David J. Hardisty</author>
    <author>Elke U. Weber</author>
    <affiliation>Columbia University</affiliation>
    <abstract>In 3 studies, participants made choices between hypothetical financial, environmental, and health gains and losses that took effect either immediately or with a delay of 1 or 10 years. In all 3 domains, choices indicated that gains were discounted more than losses. There were no significant differences in the discounting of monetary and environmental outcomes, but health gains were discounted more and health losses were discounted less than gains or losses in the other 2 domains. Correlations between implicit discount rates for these different choices suggest that discount rates are influenced more by the valence of outcomes (gains vs. losses) than by domain (money, environment, or health). Overall, results indicate that when controlling as many factors as possible, at short to medium delays, environmental outcomes are discounted in a similar way to financial outcomes, which is good news for researchers and policy makers alike.</abstract>
    <body>
        <introduction>
            <section>
                <subsection>
                    <paragraphs>
                        <id>1</id>
                        <function>claim importance of topic</function>
                        <paragraph><![CDATA[The future is less important than the present. This is the story told both by rational, economic models of how we should deal with delayed outcomes and by descriptive, psychological models of how we actually deal with them. This makes sense for many reasons. For example, getting $250 today is generally worth more than getting $300 in 10 years (even adjusting for inflation), because the immediate $250 could be invested in the meantime and would yield more than $300 with accumulated interest after 10 years. Time delay also introduces a host of uncertainties that reduce the value of the promised outcome in a similar way to probabilistic receipt of outcomes in a lottery. You might die before the 10 years have passed, or the institution that was promising the $300 may no longer exist in 10 years' time. Furthermore, psychological factors such as impatience or lack of self-control also play a role; that is, you may want to get the money right away (pure time preference). The rate at which future outcomes are devalued is known as the discount rate.]]></paragraph>
                    </paragraphs>
                    …
                    <paragraphs>
                        <id>13</id>
                        <function>summarize methods</function>
                        <paragraph><![CDATA[The present research endeavored to examine domain differences while controlling for these confounding factors as much as possible. The values of environmental goods are often measured (and the implicit discount rate inferred) by "pricing them out" through contingent valuation (Mitchell & Carson, 1989), which relies on the perception of respondents that environmental outcomes can be easily valued in and exchanged for dollars (and vice versa). However, this may not be a valid assumption (Frederick, 2006; Gregory, Lichtenstein, & Slovic, 1993; Schkade & Payne, 1994). For example, when asked to assign a monetary value (e.g., their willingness to pay) to some environmental consequence, respondents often express the strength of their attitudes (protecting the environment is important), or express what they consider a fair contribution, rather than communicating the result of a cost-benefit analysis reflecting the magnitude and value of the environmental outcome (Schkade & Payne, 1994). Thus, discount rates assessed through contingent valuation may be very misleading. In contrast, and following the methodology of the health discounting literature, the studies we present here assessed discount rates using within-domain measures.
]]></paragraph>
                    </paragraphs>
                </subsection>
            </section>
        </introduction>
        <methods>
            <section>
                <subsection>
                    <heading>Study 1</heading>
                    <paragraphs>
                        <id>14</id>
                        <function>preview methods</function>
                        <paragraph><![CDATA[In the first study, we compared discounting of monetary gains and losses with discounting of four environmental scenarios: air quality gains, air quality losses, mass transit gains, and garbage pile-ups (a loss). Choices in all cases involved an immediate option and an option with a 1-year time delay. Efforts were made to control for commonly confounded factors, including timescale, uncertainty, who was affected (although discount rates in hypothetical scenarios for oneself and others may not differ in any case; see Cairns & van der Pol, 1999; Pronin, Olivola, & Kennedy, 2008), and one-time consumption versus a change in consumption streams.]]></paragraph>
                    </paragraphs>
                    …
                </subsection>
            </section>
        </methods>
        <results>
            <section>
                <subsection>
                    <subheading>Results</subheading>
                    <paragraphs>
                        <id>25</id>
                        <function>describe analysis conducted</function>
                        <paragraph><![CDATA[As described above, a combination of titration and free-response measures were used to obtain a single indifference point for each scenario. To enable comparisons between scenarios and domains, these indifference points were converted to discount parameters using the hyperbolic discounting formula V = A/(1 + kD), where V = present value, A = future amount, D is the delay (typically in years), and k is a fitted parameter. This equation can be solved for k, the discount parameter that indicates how much someone values future outcomes relative to present outcomes. A k of zero means the present and future are valued equally. Positive values of k indicate that future outcomes are discounted (the more so, the larger k), meaning that the decision maker prefers to receive gains now rather than later or prefers to receive losses later rather than now. Negative values of k, on the other hand, indicate negative discounting, meaning that the decision maker prefers to receive gains later rather than now, or prefers to receive losses now rather than later. We chose this hyperbolic model because of its simplicity, considerable descriptive support (Frederick et al., 2002; Kirby, 1997; Kirby & Marakovic, 1995; Mazur, 1987; Myerson & Green, 1995), and relatively balanced treatment of positive and negative time preference (unlike an exponential discounted utility transformation, which minimizes extreme positive discounting but magnifies extreme negative discounting).]]></paragraph>
                    </paragraphs>
                    …
                </subsection>
            </section>
        </results>
        <discussion>
            <section>
                <subsection>
                    <subheading>Discussion</subheading>
                    <paragraphs>
                        <id>32</id>
                        <function>interpret outcome</function>
                        <paragraph><![CDATA[When presented with monetary and environmental gain and loss scenarios that were written to control confounding factors, participants discounted gains substantially more than losses but did not discount environmental outcomes significantly more or less than monetary outcomes. The valence difference was stronger between subjects; participants who were presented with gains first tended to discount all outcomes more overall, likely exhibiting greater discounting for gains and then endeavoring to remain somewhat consistent in their responses to other scenarios. Thus, in support of economic theories, it appears that time preference was similar for monetary and environmental outcomes.]]></paragraph>
                    </paragraphs>
                    …
                </subsection>
            </section>
        </discussion>
        …
        <methods>
            <section>
                <subsection>
                    <heading>Study 3</heading>
                    <paragraphs>
                        <id>55</id>
                        <function>relate to prior/next experiments</function>
                        <function>preview methods</function>
                        <paragraph><![CDATA[Studies 1 and 2 compared discounting of environmental scenarios to discounting of typical financial scenarios, in an effort to see whether insights and findings from existing research may be usefully applied to the environmental domain. However, in doing so, two common differences between environmental and financial scenarios went unaddressed. First, while the money was to be received or paid as a lump sum, the environmental outcomes were to be experienced as a stream of benefits (or losses) spread out over time, a difference that is known to affect intertemporal preferences (Guyse et al., 2002; Hsee et al., 1991; Loewenstein & Sicherman, 1991). Second, while typical research on monetary outcomes has examined short delays (in the range of a few weeks to a year), environmental outcomes are often not realized for many years. Our third study explored these issues, while also better controlling for the subjective value of the outcomes.]]></paragraph>
                    </paragraphs>
                    …
                </subsection>
            </section>
        </methods>
        <results>
            <section>
                <subsection>
                    <subheading>Results</subheading>
                    <paragraphs>
                        <id>62</id>
                        <function>state findings</function>
                        <paragraph><![CDATA[As summarized in Figure 3, participants discounted gains significantly more than losses, across domains and delays. A 2 (domain: within) × 2 (valence: within) × 2 (delay: between) × 2 (order: between) repeated-measures general linear model revealed a main effect of valence, F(1, 141) = 41.1, p < .001, ηp² = .23, indicating that gains were discounted much more than losses, replicating previous studies. A significant Order × Domain interaction, F(1, 141) = 5.6, p < .05, ηp² = .04, and an Order × Domain × Valence interaction that approached significance, F(1, 141) = 3.3, p < .1, ηp² = .02, indicated that participants tended to discount the first scenario they saw significantly more, regardless of whether it was a financial gain or an environmental gain. In other words, participants showed more impatience on the first questions they considered. Although none of the other main effects or interactions had significant effects (all ps > .1), a trend for a three-way interaction between domain, valence, and delay suggested that air quality gains were discounted marginally more than monetary gains (i.e., more impatience for improved air quality than money) at a 1-year delay but marginally less at a 10-year delay, F(1, 141) = 2.3, p = .13, ηp² = .02. While there was not a main effect of delay on mean discount rates, this does not mean that participants discounted outcomes the same for the two delay conditions. Rather, this indicates that participants were sensitive to delay and that the hyperbolic discounting model captured their pattern of discounting well, just as in previous research on discounting of environmental outcomes (Viscusi et al., 2008). Thus, participants in our study were indifferent on average between 28 days of worse air quality starting immediately, 34 days starting in 1 year, or 58 days starting in 10 years.]]></paragraph>
                    </paragraphs>
                    …
                </subsection>
            </section>
        </results>
        <discussion>
            <section>
                <subsection>
                    <subheading>Discussion</subheading>
                    <paragraphs>
                        <id>66</id>
                        <function>interpret outcome</function>
                        <paragraph><![CDATA[Replicating Studies 1 and 2, valence had a huge effect on discounting rates, while domain had relatively little effect, regardless of delay and regardless of the fact that participants considered streams of money rather than lump sums. As before, correlations were stronger within sign (and cross domain) than within domain (and cross sign).]]></paragraph>
                    </paragraphs>
                </subsection>
                <subsection>
                    <heading>General Discussion</heading>
                    <paragraphs>
                        <id>67</id>
                        <function>recapitulate present research</function>
                        <paragraph><![CDATA[The research in this article on the discounting of environmental outcomes was motivated by a combination of theoretical, policy-oriented, and practical considerations. Whether a government is deciding whether the use of different discount rates for environmental and financial projects expresses the will of its people, or a local power company wants to encourage its customers to weatherize their homes (thus incurring short-term costs but long-term energy savings), it is vital to know whether financial and environmental outcomes are discounted at similar rates on average and whether the same factors found to affect discounting of financial outcomes also affect discounting of environmental outcomes. Understanding of the discounting of environmental outcomes is especially important because issues such as global warming involve very long time horizons.]]></paragraph>
                    </paragraphs>
                    …
                </subsection>
            </section>
        </discussion>
    </body>
    <notes>
        <item><![CDATA[1. This nature and magnitude of exclusions is typical in online research, which has the advantage of a broader range of participants on socioeconomic variables than university lab samples but the disadvantage of lack of supervision of the way in which responses are provided. Excluding data from the careless respondents makes the data cleaner but does not alter the major trends or our conclusions.]]></item>
        …
    </notes>
    <references>
        <item><![CDATA[Baron, J. (2000). Can we use human judgments to determine the discount rate? Risk Analysis, 20, 861-868.]]></item>
        …
    </references>
</article>


APPENDIX 7: USER STUDY EMAIL ADVERTISEMENT

Hi,

I am looking for psychology major students for a user study. You will be asked to read 5 psychology journal research articles in one of two presentations, and to answer a comprehension question for each reading. You will complete post-task and post-study questionnaires, and narrate your thoughts while reviewing a replay of the session. The interaction with the system and verbal protocols will be recorded. Your identity will not be revealed.

Compensation: $20
Dates: from January 26, 2010
Time: two hours
Location: SLAIS at the Barber Learning Centre

If you are interested, please email Lei at [XXXXXXXX@interchange.ubc.ca]. In your email, please specify your available dates/times and academic status (which year of which degree program).

Thanks,

Lei Zhang
PhD Candidate
School of Library, Archival and Information Studies (SLAIS)
University of British Columbia


APPENDIX 8: USER STUDY CONSENT FORM

THE UNIVERSITY OF BRITISH COLUMBIA
School of Library, Archival and Information Studies
Irving K.
Barber Learning Centre
Suite 470 - 1961 East Mall
Vancouver, BC V6T 1Z1
voice: 604-822-2404
fax: 604-822-6006
e-mail: slais@interchange.ubc.ca
web: www.slais.ubc.ca

Consent Form for the Functional Unit Phase II Study

This project proposes an enhanced electronic journal application that exploits the functions of types of information within journal research articles.

In Phase I of my dissertation research, I identified and validated the relationships between types of information and uses of psychology journal research articles. The purpose of this study is to evaluate the effectiveness of a prototype journal system developed from the results of Phase I.

The user evaluation, which will take approximately two hours, is conducted in a meeting room on campus at an agreed-upon time. You will be asked to do 5 tasks, each requiring you to interact with a psychology journal research article in one of two different presentations. For each task, you will answer a comprehension question by highlighting the relevant pieces of text and writing a short summary. After each task you will fill out a post-task questionnaire. The interaction events, including mouse clicks and timestamps, will be recorded via Morae software. At the end of the experiment, you will fill out a post-study questionnaire covering background information and general comments.

Finally, you will be asked to express thoughts on your interaction with the system while reviewing a replay of the session, which will be audio recorded.

The questionnaires are hosted by a websurvey company located in the USA and as such are subject to US laws, in particular those that allow authorities access to the records of internet service providers. However, the questionnaires do not ask for any information that may be used to identify you, and no connection is made between your data and any log files. The privacy policy for the websurvey company is available at http://www.surveymonkey.com/Monkey_Privacy.aspx.

No risks are known for this study. All data collected from participants will be coded and stored in a locked filing cabinet or a password-protected computer account. The identity of participants will not be revealed in any documents that result from this work.

Once the study is complete, a summary of results will be available upon request by email. To show appreciation for your participation in this study, you will receive an honorarium in the amount of $20 once you complete the study.

Your participation is entirely voluntary and you may refuse to participate or withdraw from the study at any time.

Your signature indicates that you consent to participate in this study.

____________________________________________________
Subject Signature     Date

If you have any questions or desire further information with respect to this study, you may contact Lei Zhang at (604) XXX-XXXX or via email at [XXXXXXXX@interchange.ubc.ca]. You may also contact Dr. Rick Kopak, who is the principal investigator, at (604) XXX-XXXX.

If you have any concerns about your treatment or rights as a research subject, you may contact the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598.
APPENDIX 9: POST-TASK QUESTIONNAIRE

Baseline system

Experimental system (The first seven questions are the same as those in the baseline system)


APPENDIX 10: POST-STUDY QUESTIONNAIRE


APPENDIX 11: ANSWER KEY

Article 1: What is known in the area about the role of comparison in human information processing?
1. The tendency for comparative thinking is so ubiquitous that people even use comparison standards that are presented subliminally, or that are unlikely to provide useful information.
2. Comparative thinking holds efficiency advantages because it involves the spontaneous activation of an information-rich comparison standard, which may be used as a proxy for target information that is unavailable or difficult to obtain.
3. Comparative thinking holds efficiency advantages because it allows one to focus on a subset of potentially judgement-relevant information about the judgemental target.
4. Efficiency advantages of comparative thinking do not necessarily lead to less accurate judgements, but can when relevant information is ignored or information is incorrect.
5. Comparative thinking shows efficiency advantages when the features of target and standard are alignable.
6. Comparative thinking shows efficiency advantages when a given attribute is difficult to evaluate separately.

Article 2: What data further supports the claim that recognition of song fragments is influenced by song familiarity?
1. A song RWI effect occurred both when the test song fragments had had half of their notes deleted randomly and when the fragments had had every other note deleted.
2. A song RWI effect occurred with tonal information that had been separated from its original rhythm information, but only when the notes maintained their original order.
3. A song RWI effect occurred with rhythm information that had been separated from its original tonal information.
4. A song RWI effect occurred when the tempo of the rhythm was altered.
5. The RWI effect still emerged from the conditionalized data (i.e., examining only those ratings given to unidentified studied test items corresponding to songs that were identified at study).

Article 3: Why is agenda-based regulation considered to have greater explanatory power as compared to current theories on allocating study time?
1. As predicted by the ABR model, experiments show that learners prioritize for study the items that are more likely to yield higher reward, regardless of item difficulty. These results cannot be explained by existing theories based on item difficulty.
2. The ABR model proposes that an agenda drives regulation, and the agenda may involve monitoring item difficulty or other factors. Existing theories are limited in that they focus on how monitoring item difficulty drives regulation.
3. Learners can use monitoring and control processes in a highly flexible manner. ABR can account for this but the existing theories cannot (i.e., the hierarchical model predicts that participants would select difficult items for study; RPL explains selection in speeded tasks).

Article 4: How are the contextual factors controlled in examining the preference for gains/losses immediately or later in different domains?
1. Timescale was controlled by using the same duration for both types of scenarios.
2. In Study 1, participants were asked to consider only personal preference in making a choice, to control for possible differences due to who was affected by environmental outcomes.
3. In Study 2, the air quality scenarios were designed using a standard, real-world measure of air quality, and participants were recruited from areas with poor air quality to make the scenarios more personally applicable and less abstract.
4. In Study 3, considering that environmental outcomes are often experienced as a stream over time, financial benefits or losses were counted as $9 a day for 28 days.
5.
In study 3, considering environmental outcomes are often not realized for many years, financial benefits or losses were either realized in 1 year or 10 years.  Article 5: What is reported new regarding the different changes in flashbulb memories and event memories over time? 1. No relations were found between five predictors (residency, personal loss or inconvenience, emotionality, media attention, and ensuing conversation) and flashbulb memories, whereas significant relations were found between four of the five (all but emotionality) predictors and event memories. 2. Memory practices, including exposure to media attention and conversation, account for the differences in changes in content for event memories and flashbulb memories. 3. Inaccurate event memories tended to be corrected over time, whereas inconsistent flashbulb memories were repeated over time.  	255??APPENDIX 12: INTERVIEW TRANSCRIPTS   How did you read differently in these two different presentations?  Positive remarks  Focused reading  P1 Then I can have more time to spend on reading in detail.  P5 ? in the other one I tended to read more so I mean mostly I would look at the abstract and introduction and then the discussion mostly.  Whereas I guess for the boxes ? I would click on them to bring me to you know specific information may be within different parts of the text that I wouldn?t usually jump to.  P6 ? it was more difficult because I had no indication of what was more important so I just read the article what I understood like reading it just highlighted it ? I just knew it was a lot easier with the prototype system just look through paragraphs that were highlighted and I just read those more specifically and highlighted the most important lines.  P13 So for these ones I think I spent more time reading and other ones I spent less time.  P14 ? when there was no system suggestion box, I had to take basically a guess ? so reading introduction first and then sort of looking at the methods ? 
and then going to discussion … This one, as I said, I read more because they are divided in sections, right, so it's easier for me to locate them and it's easier for me to, you know, read important parts that I need, because the highlighted sections actually do tell me, like, this is important.

P21 This one is a lot more refined maybe, looking specifically at certain aspects … skipping ones the system was not highlighting and reading as long as I do, like kind of reading a bit of everything … [For the other presentation] Kind of a lot of places. I read the abstract, and the first paragraph, you know, quite thoroughly. After that I was just kind of searching for specific keywords and sentences and things I thought would be relevant.

P29 … allowed me to read more in-depth paragraphs, not just the first paragraphs …

Selective reading

P1 … the first two was easier because they had the tabs, and then they helped me highlight which part, and then with those highlights I can extract which was relevant. But for the last two I didn't have that, so I was doing the initial skimming through and then find my particular thing.

P2 I pressed on these ones [highlight buttons] just to see what options I had, I guess, to read. And then I guess I decide for myself whether to read it or not. Maybe, like, read the first sentence of each and see if it helps me in answering the question.

P13 At least the ones with highlighted sections, I found it easier to answer the questions because they gave you the important parts. And I filtered out what I thought was important within each highlighted section. And then in the other ones I tried to be more efficient and basically just read the abstract, and introduction, and discussions and conclusions only.

P16 … for the first two tasks we didn't really have any to help us … my strategy was, you know, just find relevant paragraphs that would be talking about … in the other tasks where I had the highlighted preference, that made it easier to go to what I wanted.
The highlighted tasks make it a little bit more efficient to go to what I think is important …

P27 You can sort of do that with the original model as well, but it's a little bit harder, because for the results you have to read through all of the general results or all of the studies, whatever, to get the information you want. But with the other model you can look at the results and how it is broken down, and you can pick and choose parts of that section …

Less reading

P7 I think without the system I would read more to find what I need. But with that I can just focus on what I need; like, it's faster to find what I need to answer the question.

P10 For the one that doesn't have the highlighted parts, my reading strategies were just to read as much as I could to answer the question, whereas the prototype saved a lot of time. Because it was highlighted and divided up carefully, I guess, it was easy to see and easy to search and find the color coding.

P11 I found that with the highlight button my reading strategy went a lot faster, in the respect that I was able to use the highlight button to read in order, because it told me, ok, this part refers to the introduction, this part refers to this; it makes my life easier, and it turns out I only focused on what's highlighted. Whereas in the ones where there was no highlight function, it was a lot more difficult for me to structure the article … it was very hard for me to organize myself and figure out what kind of strategy I should employ to read it.

P12 I looked at the introduction and the discussion, and I just looked for information based on the headings, pretty much. And here, like, I used these tools … I thought the tools were helpful in terms of starting me off or where I need to be and stuff …

P15 In the other presentation where there were no options, I read all of it from top to bottom in more detail. If there was the system suggestion, then I didn't read as much of the articles.
P18 For that one I relied more on the buttons and the highlights; I didn't go through all the paragraphs. But for this one I skimmed through all the headings, chose the one that I wanted, and then under those headings I looked through the paragraphs of each heading, while for that one I relied on the highlights and didn't look at other paragraphs or sections.

P20 My reading strategy definitely was more efficient in the second of the two presentations, because I wasn't really skimming as much through the whole article, and I mean I knew already where the information that I was looking for probably would be. I was able to go from highlighted paragraph to highlighted paragraph rather than kind of hoping to find summary information underneath the key headings, the central headings.

P23 So the first one, I think I was more efficient. My strategy, because I knew exactly where to look to get my answers. But in the second one, without the hints, the colors, I think I was more scattered trying to look for relevant information. So I think I took longer.

P25 I found that I relied on the system more when there are suggestions given by the system, and for the last three I would read the headings and look at and read the words clearly and then relate to the questions more carefully.

P26 … you click on it and you are referred to where you should be looking, so it seems easier; it seems like my attention knows exactly what I want to do. I can click something and I can look at what's highlighted and I can briefly read it, and if I feel like it's answering the question, it's already done, I don't have to look anywhere else … where they didn't have the highlighted component … I had to actually go through all the paragraphs, all the different headings, pretty much; I had to go more thoroughly, because I had to figure out, ok, where can I find this answer …
So I found that, for journals that had the highlighted part to it, it was just easier, it was quicker; like, I could write an answer faster …

P27 I definitely didn't read as much with the prototype model; I read less for that one just because it is organized, so I knew exactly what I can skip through … with those it is right there for you; most of the time you can just skip past it, you don't even have to look at it.

P28 … with the one with the highlighting and the paragraph titles on the left side, I didn't look at the headings at all; I just looked at the paragraphs that were highlighted, because I knew that was indicated as being results and discussion … I didn't look at the abstract initially, I just skipped that over and went straight for the demarked paragraphs.

P29 This one, I guess in the end I was looking through the greater paragraphs, but first of all I just mainly went for the first paragraph … I clicked on the highlighted headings, and it kind of takes me to important information … less time in searching for it … it is kind of easier to look for information in the other study because important information was already highlighted for me, so it cut the bulk of the article down a little bit …

P30 … it was easier because then, if I thought a section could answer my question, I would just click on highlight and they would tell me which paragraphs I should go to. Instead of reading through the whole thing, you can go to different sections.

Negative remarks

Unnatural reading

P8 I would argue that the first one was faster, because the first one had the system assist buttons … I just found that the system assist was just annoying …

P22 Without the tabs I went from top to bottom, and I kind of skimmed through to see if the information was there … I found that when the tabs are there, it gets me out of my stream of thought …
For me, not having the tabs is usually a bit easier, because I can keep track, I can rely on my own stream of thought instead of the computer's or the tab's thought.

P24 … for the ones where I had nothing, I read much quicker, and I think that's because I was using the methods I am used to at home, which is using the headings and scanning much quicker. I found it much easier to skip past a large body of text or ignore it altogether, because I wasn't distracted by the sidebars.

Lacking context

P9 In this one, since there are no hints on the side, I haven't an idea of what the paper is like, so I read more of it and see the actual structure of it and then try to figure it out. So I pay more attention to headings, pay more attention to the introduction … Whereas in the second one, with the hints on the side, I don't have to pay as much attention to the introduction. I found that even though I found information quicker, I understood information better in the first one, which I found on my own ... So in the second way, using the hints, mostly I first read the introduction, the abstract I guess, and then the first paragraph of the introduction to get an idea, and then used the hints on the right side.

P17 … in the traditional way, I think I start from the beginning, and so I always read the abstract and introduction. Whereas in the new method, I usually don't read that part; I just go straight to the top hits and only read the highlighted parts. But I think the traditional way gives me a better understanding of what the whole article is about, whereas the new way is helpful in finding very specific information. But after finding that, because I don't have the background knowledge, it doesn't really help me understand what I am trying to answer. But if I already have background knowledge or am familiar with the subject, it can help me find the relevant information faster.

Neutral remarks

P3 …
my strategy really wasn't much different, but for the second one [task] … the first one and the second one, even though they both have the system suggestions, it was actually a bit different … in the first one I didn't really know what was expected, so I was actually still using most of the subheadings, but by the second one I sort of got how it works, so I did click on the system suggestion for the highlighting to go straight to the main focus.

P4 My last three [tasks] are probably similar to the way that I read the first one, which is sort of my usual style … But the second one, that was where my major strategy was slightly different, because I did rely on it more, as opposed to skimming some sections looking for headings. So, kind of like I said, I did it the same way, but maybe in a little more direct way with the highlighting.

P19 Reading strategies in this one: I read the abstract first … I relied more on the headings of the paragraphs … I went more in order from the top to the bottom. Whereas in the other one, you press the best hits and you go from the purple one all the way down, you press the next best hits and the yellow one might come before the purple, and you go up again and read it through again. So the first one, I mean the earlier one that I was recalling, was more of a jump from here to there and less sequential.
