Do highlights affect comprehension? Lessons from a user study

Samuel Dodson, iSchool, University of British Columbia, Vancouver, BC, Canada, dodsons@mail.ubc.ca
Luanne Freund, iSchool, University of British Columbia, Vancouver, BC, Canada, luanne.freund@ubc.ca
Rick Kopak, iSchool, University of British Columbia, Vancouver, BC, Canada, r.kopak@ubc.ca

ABSTRACT

A largely unquestioned assumption in social reading is that publicly shared annotations improve reading outcomes. In this study, we explore the specific assumption that relevant and irrelevant passive highlighting affects comprehension. Participants were divided by cognitive style based on their degree of Field Dependence-Independence [19]. We found that irrelevant highlights had significant negative effects on reading comprehension for Field Independents (FIs), but not Field Dependents (FDs). This is a surprising result because FDs typically rely on external cues to structure and help process information, whereas FIs use internal cues. This suggests that highlighting cues information but does not structure it.

CCS CONCEPTS

• Computer systems organization → Embedded systems; Redundancy; Robotics; • Networks → Network reliability;

KEYWORDS

annotation, comprehension, highlighting, reading, social reading

ACM Reference Format:
Samuel Dodson, Luanne Freund, and Rick Kopak. 2017. Do highlights affect comprehension? Lessons from a user study. In CHIIR ’17: 2017 Conference on Human Information Interaction & Retrieval, March 7–11, 2017, Oslo, Norway. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3020165.3022158

1 INTRODUCTION

Highlighting is a common form of text annotation widely used to facilitate reading, especially when reading to learn. Given the connection between reading, highlighting, and learning, it was not surprising that Marshall’s [11] seminal study of annotations found that most second-hand textbooks examined were thoroughly highlighted. The practice of highlighting has carried over from print to digital reading environments, most of which offer tools for highlighting. Features such as Popular Highlights on the Amazon Kindle go even further by storing and aggregating highlights and displaying them to readers as a layer of social information complementary to the original text.

The underlying assumption of such systems is that both active (made by the reader during the act of reading) and passive (made by previous readers) highlights have a positive effect on reading outcomes, such as increased comprehension, retention, and engagement. However, very little research exists to support this assumption, particularly in the digital context.
This paper reports on a study of passive, or social, highlights and investigates their effect on comprehension, an important outcome of human information interaction and a key component of learning.

2 PREVIOUS WORK

Marshall [11] identified multiple uses of highlights: as signals for future use; as place marks for a specific passage to be referenced later; and to focus attention on the text, especially when a passage is difficult to read. Of these, the first is most applicable to passive highlights, which signal to readers the most important or interesting sections of the text. There is some evidence that highlights that cue key passages increase recall regardless of whether they were actively created by the reader or passively encountered [5, 9, 10]. However, one of the concerns with passive highlights is that they may be of poor quality. If we consider that the effectiveness of highlights relies upon the von Restorff effect [18], which predicts that an item that stands out from its background is more likely to be remembered than items that do not, we have to acknowledge that poor quality highlights may interfere with readers’ ability to make sense of, and learn from, highlighted text [14]. Readers may simply jump from one highlighted passage to the next, without assessing the quality or relevance of the highlights, as was found in [3].

Silvers and Kreiner [17] studied the effects of relevant and irrelevant highlighting on text comprehension in a two-phase study with three conditions: no highlighting (control), relevant highlighting, and irrelevant highlighting. They found that participants performed similarly in the relevant and control conditions, but the irrelevant highlighting had a significant negative effect on comprehension. The same negative impact of irrelevant highlights was observed even after warning participants that the highlights might be irrelevant. We were intrigued by these results and decided to test them in a digital context. We also wondered whether individual differences between readers may have influenced the results.

The way readers interact with information is affected by their cognitive styles [1, 12]. One of the most studied cognitive styles is Field Dependence-Independence (FDI) [16, 19, 20]. Messick [12] describes the difference as follows: “The field independent person tends to articulate figures as discrete from their backgrounds and to easily differentiate objects from embedding context, while the field dependent person tends to experience events globally in an undifferentiated fashion” (p. 5). FDs use external cues to guide their information processing, while FIs use internal ones. As a result, FDs are more likely to use the existing structure of a field, whereas FIs create their own [21].

While FDI was originally a measure of perceptual ability (to structure or restructure visual fields), it has been found to affect other tasks, such as problem solving [20], cognitive restructuring ability [6], and attention to relevant cues [2]. In the context of reading, FDs are more likely to use the pre-existing structure of a text, while FIs are better at focusing their attention on relevant information and ignoring distractions [7].
It is especially difficult for FDs to focus on the most important information when they are presented with distracting cues, such as irrelevant highlights [7]. When a field is well-structured, however, FDs can perform as well as FIs [21], suggesting that relevant highlighting could be used as an aid for FDs.

Given the prevalence of passive highlights in current social reading systems, we decided to test the assumption that seems to underlie their design: that highlights of any kind improve the reading experience. Our research questions were as follows:

• Does the existence of relevant highlights improve text comprehension?
• Does the existence of irrelevant highlights reduce text comprehension?
• Are the effects of highlighting on comprehension influenced by cognitive style, specifically FDI?

3 METHODS

To answer these questions, we conducted a within-subjects experiment with 29 participants. Following the approach used by Silvers and Kreiner [17], participants were asked to read texts in three conditions: with irrelevant highlights, with relevant highlights, and a control with no highlights. We updated and improved the methods used in [17] by using lengthier texts presented in digital format with no more than 15% of the text highlighted, as suggested in [9]. We designed our own comprehension tests based on a recognized model of text comprehension [8]. Details of the methods are presented in the following sections.

Texts were three general interest articles from Scientific American of approximately 3,000 words each, presented in plain HTML format with charts and images removed. In the relevant highlighting condition, passages that contained concepts or facts central to the overall meaning, or gist, of the article were emphasized. Relevant highlights were created where at least two of the three researchers highlighted the same passage. Irrelevant highlights emphasized passages that at least two of the three researchers identified as informative but peripheral to the main themes of the text. We then carefully trimmed the highlights down to include no more than 15% of the text, in accordance with the findings of [9, 10].

The highlighting condition was the manipulated variable, with each participant reading one text in each condition. The Group Embedded Figures Test (GEFT) [22] was used to measure Field Dependence-Independence. Following Demick [4], participants who scored at or below the median GEFT score were classified as Field Dependent (FD) (M=10.33, SD=2.66, N=15) and those who scored above as Field Independent (FI) (M=10.33, SD=2.66, N=14). The primary dependent variable was comprehension, measured through tests administered after each text was read. Test questions were designed in accordance with the Construction-Integration model [8] to assess microstructural and macrostructural comprehension, and were of three types: multiple choice matching questions; Sentence Verification Technique (SVT) questions [15]; and an open-ended summary question.

A convenience sample of 29 undergraduates (14 males, 15 females) was recruited via listservs and posted advertisements. Sessions lasting 1.5 hours were run with groups of 10 participants randomly seated in a large computer lab. Computers were pre-set to display the articles. Instructions, pre- and post-session questionnaires, and comprehension quizzes were provided in print format. Participants completed a consent form, a demographics and reading habits questionnaire, and the GEFT (on paper).
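
As an illustration of the median-split classification of GEFT scores described above, here is a minimal sketch in Python; the participant identifiers, the example scores, and the function name are hypothetical and are not taken from the study.

```python
import statistics

def classify_fdi(geft_scores):
    """Split participants into Field Dependent (FD) and Field Independent (FI)
    groups by a median split on GEFT scores: scores at or below the median are
    classified FD, scores above the median are classified FI."""
    median = statistics.median(geft_scores.values())
    return {
        pid: ("FD" if score <= median else "FI")
        for pid, score in geft_scores.items()
    }

# Hypothetical example scores (not the study's data); median here is 11
scores = {"P1": 8, "P2": 11, "P3": 13, "P4": 10, "P5": 15}
print(classify_fdi(scores))
# {'P1': 'FD', 'P2': 'FD', 'P3': 'FI', 'P4': 'FD', 'P5': 'FI'}
```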

Participants were then told their task was to read three articles and complete a comprehension quiz for each. The scenario was to imagine the articles had been assigned for an upcoming class discussion for which they had limited time to prepare. Participants were given five minutes to read each article. This was done to encourage efficient reading strategies, such as the use of highlights, and to simulate typical online reading behaviors [13].

The three articles were counterbalanced to reduce ordering effects, but all participants experienced the conditions in the same order: relevant, control, irrelevant. This was to limit a negative carry-over effect that we observed in the pilot test when the irrelevant highlights were assigned first and subsequent highlights were ignored. After reading an article, participants were given seven minutes to complete the corresponding comprehension quiz. They were not allowed to look at the test until they had finished reading the article and were not able to refer to the article once they started the test. After completing the reading tasks, participants completed a post-session questionnaire asking about their experiences and their thoughts on the usefulness of the highlights. They received a $20 honorarium for participating.

The three researchers scored the open-ended summary responses independently using a rubric, compared scores, and reached a consensus score for each to ensure inter-scorer reliability. Data were analyzed using ANOVA, and an alpha level of 0.05 was used for all statistical tests.

4 RESULTS

Comprehension scores for each condition were compared to measure the effects of highlighting on comprehension. Comprehension was measured using multiple choice, open-ended summary, and SVT questions. An overall measure, which averaged the scores from the three measures, was also used. The highest possible comprehension score for each measure is 100. Differences were found between the measures of comprehension used in the study. Most previous studies have used only multiple choice as a measure of comprehension; the differences we observed between measures may provide an argument for pairing other measures of comprehension with multiple choice.

Table 1 shows the mean overall comprehension scores for the three conditions, reported for all participants, FIs, and FDs. Across all three groups, the highest means were achieved in the control condition and the lowest in the irrelevant condition.

Table 1: Descriptive statistics of the overall measure.

                  Relevant         Irrelevant       Control
Group    n        M       SD       M       SD       M       SD
all      29       59.13   17.02    60.99   14.09    53.09   12.76
FD       15       62.70   15.43    54.30   13.58    64.06   11.94
FI       14       55.31   18.36    51.80   12.19     7.71   15.86

The mean comprehension scores for all participants on the multiple choice and summary questions did not vary significantly across the three conditions; however, there was a significant effect of condition on the comprehension scores in the SVT question (F(2,56)=4.360, p=.017). Post hoc tests, using the Bonferroni correction, indicated that SVT scores in the irrelevant condition (M=53.09) were significantly lower than in the control (M=60.99) (p=.006), but the scores in the relevant condition (M=59.13) did not differ from the other conditions.
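
As a rough guide to this style of analysis, the sketch below runs a one-way within-subjects ANOVA with Bonferroni-corrected pairwise comparisons in Python; it assumes a hypothetical long-format table with columns participant, condition, and score, and is not the authors' actual analysis script.

```python
import pandas as pd
from itertools import combinations
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

def within_subjects_anova(df):
    """One-way repeated-measures ANOVA on comprehension scores,
    with highlighting condition as the within-subjects factor."""
    return AnovaRM(df, depvar="score", subject="participant",
                   within=["condition"]).fit()

def bonferroni_pairwise(df):
    """Paired t-tests for each pair of conditions, Bonferroni-adjusted."""
    wide = df.pivot(index="participant", columns="condition", values="score")
    pairs = list(combinations(wide.columns, 2))
    results = {}
    for a, b in pairs:
        t, p = ttest_rel(wide[a], wide[b])
        results[(a, b)] = min(p * len(pairs), 1.0)  # adjusted p-value
    return results

# Hypothetical usage: df has one row per participant x condition, with
# condition in {"relevant", "irrelevant", "control"} and score on 0-100.
# print(within_subjects_anova(df).anova_table)
# print(bonferroni_pairwise(df))
```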

When we combined the comprehension scores into an overall measure, there was a borderline significant effect of condition (F(2,56)=2.998, p=.058), showing the same trend as for the SVT scores.

The mean comprehension scores of the FDs were relatively consistent across conditions. One-way within-groups ANOVAs showed no evidence of significant differences between comprehension scores across conditions for any of the measures.

For FIs, some differences did arise by condition. One-way within-groups ANOVAs showed no evidence of significant differences by condition for the multiple choice and summary questions, but there was a significant difference for the SVT question (F(2,26)=3.405, p=.049). A post hoc pairwise t-test, using the Bonferroni correction, indicated that scores in the irrelevant condition were significantly lower than the control (p=.023). Comprehension scores in the relevant condition were also lower than the control (p=.053).

In a post-session questionnaire, participants were asked to describe what they thought of the highlights and whether some were more useful than others. Reactions to the highlights were mixed. Of the 29 participants, eight indicated that the highlights were very helpful, 13 claimed the quality of the highlights varied, and eight found the highlights unhelpful.

There was an even distribution of FDs and FIs among these response groups, suggesting that there was no effect of cognitive style on attitudes towards the highlighting conditions. Positive responses tended to emphasize that highlights helped with focus and overall understanding. P28 reflected that “Reading the highlights helped to get the gist of what [the] article was talking about but didn’t help with the little details”. P15 added, “I tended to focus on the highlights.”

P21 stated, “I thought the highlights were distributed between helpful and useless”. P14 added, “I found the highlights to be very helpful, especially in the first article [the relevant condition], the third article highlights [the irrelevant condition] made me skim the surrounding information”. P5 stated, “While skimming, I felt like I had to read them [the highlights], which was obnoxious when they weren’t helpful”.

Among those who had a negative response to both highlighting conditions, there was a split between those who claimed to have ignored all the highlights and those who looked at them but did not find them useful. Several participants used words such as “annoying” and “distracting” to describe the highlights. Two participants said that they do not value highlighting. P10 said, “I didn’t really notice the highlights, because I was focused on reading. I believe highlights are most helpful, when done by the reader.” P23 added, “I didn’t find the highlighting useful. If I wasn’t the one to highlight, the highlight just gets in the way”. Four participants said they ignored the highlights. P9 “did not pay much attention to the highlights, felt they were distracting as they pulled my focus away from the article when I was reading.” P4 “noticed them but didn’t really analyze them.”

5 DISCUSSION

How do relevant and irrelevant highlights affect comprehension? The results provide no evidence that relevant highlights improve comprehension for any of the groups, and some evidence that irrelevant highlights negatively affect comprehension. Mean comprehension scores were lowest in the irrelevant highlighting condition across all groups in most measures.
Statistical testing provided some evidence to support this trend, indicating that comprehension scores were significantly lower in one measure for the irrelevant condition than in the control condition for the all-participants and FI groups. For FIs, there is even limited evidence that relevant highlights may be more detrimental to comprehension than no highlights at all.

These results validate the earlier work of Silvers and Kreiner [17], showing no benefit of relevant highlights and some negative effect of irrelevant highlights. Furthermore, this study shows that the effects of highlighting observed by Silvers and Kreiner extend to longer texts in digital reading environments. Together, these results suggest that passive highlighting provided little benefit in terms of comprehension for these participants. For most individuals, reading texts without highlights resulted in higher comprehension than reading texts with any kind of highlights.

A unique contribution of this work was its exploration of the effects of cognitive style. In examining the effects of relevant and irrelevant highlights within the FD and FI groups, our results showed that the comprehension of FDs was not significantly affected by the different conditions, while FIs seemed to be negatively affected by both irrelevant and relevant highlights. Both FDs and FIs performed best without highlighting. There was some variation in comprehension between groups and measures; however, mean comprehension was generally highest in the control and lowest in the irrelevant condition. Differences between relevant-control and irrelevant-control pairings were not significant for FDs. While the relevant-control comparison was also not significant for FIs, the irrelevant-control pairing was significant in the SVT measure. This finding suggests that readers’ cognitive styles may be an important factor in the understanding of texts with passive highlighting. Contrary to what we would expect from the literature, FIs, rather than FDs, were affected by passive highlights. Given that FDs rely on external structuring to process information, we were surprised that the highlighting conditions had no effect on FDs’ comprehension. We expected a positive effect of relevant structuring in the form of highlights, and negative effects of the control, which did not structure the text, and the irrelevant condition, in which the text was poorly structured. Similarly, we assumed that FIs would be unaffected by highlighting, because these individuals use internal rather than external processes to make sense of information. However, the comprehension scores for FDs were similar across conditions. The comprehension scores for FIs indicated that the irrelevant highlights were an impediment. There was even limited evidence that the relevant highlights interfered with comprehension.

One explanation for these two surprising results is that FDs and FIs engaged at different levels with the highlights. FDs’ comprehension was not influenced by the highlights. FDs are known to accept the presentation of a text as given, so these individuals may have expended little or no effort assessing the relevance of the highlights. Highlights, in contrast, negatively affected FIs’ understanding of the texts. These individuals may have spent great effort assessing the quality or relevance of the highlights, distracting them from their task in this study.
Given that these results are contrary to our expectations, we must question how FDs and FIs interact with passive annotations. Future research could provide answers by measuring participants’ visual attention through eye tracking.

In general, these results challenge the assumption that passive, or “social”, highlights benefit readers by helping them make sense of, and learn from, texts. For some readers, passive highlights, whether relevant or irrelevant, may be a form of distraction and annoyance. This does not preclude benefits of highlights for different reading processes and outcomes, such as reading efficiency, navigation, re-finding information, and engagement, which may be the focus of our future research.

6 CONCLUSION

This study furthers our understanding of the effects of highlighting on reading comprehension through three contributions. No previous work on relevant and irrelevant highlights has considered 1) readers with different cognitive styles, 2) the length of the texts, or 3) presentation in digital reading environments. This study validates previous work [17] by showing that relevant highlights do not positively affect comprehension, while irrelevant highlights may have a negative effect on comprehension. The results of the study are limited to passive highlights. Future work could use techniques such as eye tracking to collect more data on reading processes and consider measuring different reading outcomes, such as efficiency and engagement. It would also be useful to compare the effects of passive highlights with active highlighting. Work built upon the findings of this study will be significant in the development of digital reading environments that may better support readers’ understanding of, and engagement with, texts.

REFERENCES

[1] Lynna J Ausburn and Floyd B Ausburn. 1978. Cognitive styles: some information and implications for instructional design. Educational Communication and Technology 26, 4 (1978), 337–354.
[2] Elizabeth Berger and Leo Goldberger. 1979. Field dependence and short-term memory. Perceptual and Motor Skills 49 (1979), 87–96.
[3] Ed H Chi, Michelle Gumbrecht, and Lichan Hong. 2007. Visual foraging of highlighted text: an eye-tracking study. In Human-Computer Interaction: HCI Intelligent Multimodal Interaction Environments. Springer, Berlin, Germany, 589–598.
[4] Jack Demick. 2014. Group Embedded Figures Test: manual. Technical Report. Mind Garden, Inc., Menlo Park, CA.
[5] Robert L Fowler and Anne S Barker. 1974. Effectiveness of highlighting for retention of text material. Journal of Applied Psychology 59, 3 (1974), 358–364.
[6] Donald R Goodenough. 1976. The role of individual differences in field dependence as a factor in learning and memory. Psychological Bulletin 83, 4 (1976), 675–694.
[7] J Kent-Davis and Kathryn F Cochran. 1989. An information processing view of field dependence-independence. Early Child Development and Care 51, 1 (1989), 31–47.
[8] Walter Kintsch. 1998. Comprehension: a paradigm for cognition. Cambridge University Press, Cambridge, United Kingdom.
[9] Robert F Lorch, Jr. 1989. Text-signaling devices and their effects on reading and memory processes. Educational Psychology Review 1, 3 (1989), 209–234.
[10] Robert F Lorch, Jr, Elizabeth Pugzles Lorch, and Madeline A Klusewitz. 1995. Effects of typographical cues on reading and recall of text. Contemporary Educational Psychology 20, 1 (1995), 51–64.
[11] Catherine C Marshall. 1997. Annotation: from paper books to the digital library. In Proceedings of the Second ACM International Conference on Digital Libraries. ACM, New York, NY, 131–140.
[12] Samuel Messick. 1976. Individuality in learning. Jossey-Bass Publishers, San Francisco, CA.
[13] Jakob Nielsen. 1997. How users read on the web. (Oct. 1997). https://www.nngroup.com/articles/how-users-read-on-the-web/
[14] Sherrie L Nist and Mark C Hogrebe. 1987. The role of underlining and annotating in remembering textual information. Literacy Research and Instruction 27, 1 (1987), 12–25.
[15] James M Royer, Barbara A Greene, and Gale M Sinatra. 1987. The Sentence Verification Technique: a practical procedure for testing comprehension. Journal of Reading (1987), 414–422.
[16] Stephanie Shipman and Virginia C Shipman. 1985. Cognitive styles: some conceptual, methodological, and applied issues. Review of Research in Education (1985), 229–291.
[17] Vicki L Silvers and David S Kreiner. 1997. The effects of pre-existing inappropriate highlighting on reading comprehension. Literacy Research and Instruction 36, 3 (1997), 217–223.
[18] Hedwig von Restorff. 1933. Über die Wirkung von Bereichsbildungen im Spurenfeld. Psychologische Forschung 18, 1 (1933), 299–342.
[19] Herman A Witkin, R B Dyk, H F Faterson, Donald R Goodenough, and S A Karp. 1962. Psychological differentiation: studies of development. Wiley, New York, NY.
[20] Herman A Witkin and Donald R Goodenough. 1981. Cognitive styles: essence and origins: field dependence and field independence. Psychological Issues 51 (1981), 1–141.
[21] Herman A Witkin, Donald R Goodenough, and Philip K Oltman. 1979. Psychological differentiation: current status. Journal of Personality and Social Psychology 37, 7 (1979), 1127–1145.
[22] Herman A Witkin, Philip K Oltman, Evelyn Raskin, and Stephen A Karp. 1971. A manual for the Embedded Figures Tests. Consulting Psychologists Press, Palo Alto, CA.
