UBC Faculty Research and Publications

Active Viewing: A Study of Video Highlighting in the Classroom. Dodson, Samuel; Roll, Ido; Fong, Matthew; Yoon, Dongwook; Harandi, Negar M.; Fels, Sidney. 2018.

Active Viewing: A Study of Video Highlighting in the Classroom

Samuel Dodson, University of British Columbia, dodsons@mail.ubc.ca
Ido Roll, University of British Columbia, ido.roll@ubc.ca
Matthew Fong, University of British Columbia, mfong@ece.ubc.ca
Dongwook Yoon, University of British Columbia, yoon@cs.ubc.ca
Negar M. Harandi, University of British Columbia, negarm@ece.ubc.ca
Sidney Fels, University of British Columbia, ssfels@ece.ubc.ca

ABSTRACT
Video is an increasingly popular medium for education. Motivated by the problem of video as a one-way medium, this paper investigates the ways in which learners' active interaction with video materials contributes to active learning. In this study, we examine active viewing behaviors, specifically seeking and highlighting within videos, which may suggest greater levels of participation and learning. We deployed a system designed for active viewing to an undergraduate class for a semester. The analysis of online activity traces and interview data provided novel findings on video highlighting behavior in educational contexts.

CCS CONCEPTS
• Information systems → Video search; • Applied computing → Interactive learning environments; Annotation

KEYWORDS
active viewing, video search, learning, highlighting

ACM Reference Format:
Samuel Dodson, Ido Roll, Matthew Fong, Dongwook Yoon, Negar M. Harandi, and Sidney Fels. 2018. Active Viewing: A Study of Video Highlighting in the Classroom. In CHIIR '18: 2018 Conference on Human Information Interaction & Retrieval, March 11–15, 2018, New Brunswick, NJ, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3176349.3176889

1 INTRODUCTION
Video is an increasingly popular medium for education. Video offers many advantages, such as the opportunity for self-paced learning. Given the benefits of video-based learning, many blended, flipped, or hybrid courses employ video lectures. Video is not perfect, however. Watching video often creates a passive learning experience for students.
Motivated by the problem of video as a one-way medium, this paper investigates the ways in which learners' interaction with video materials contributes to active learning.

Theories of active learning [e.g., 4] argue that effective learning takes place while students engage and interact with the educational materials, such as textbooks and video lectures. When studying reading, nonlinear navigating and annotating can be used as indicators of active reading [1]. For evaluating video viewing, we examine active viewing behaviors, specifically seeking and highlighting within videos, which may suggest greater levels of participation and learning. By "seeking" we mean moving to a new part of the video. Previous studies have shown that video navigation behaviors are associated with active information seeking and engaged learning [5, 9]. Nonetheless, it is largely unknown how video highlighting constitutes an active viewing experience.

Recently, Fong et al. [6] presented ViDeX, a system that supports video highlighting. Although the design of the system was assessed, the ways in which active viewing features are used were not examined.

[Copyright notice: Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. CHIIR '18, March 11–15, 2018, New Brunswick, NJ, USA. © 2018 Copyright held by the owner/author(s). Publication rights licensed to the Association for Computing Machinery. ACM ISBN 978-1-4503-4925-3/18/03. $15.00. https://doi.org/10.1145/3176349.3176889]
Consequently, we are left with the following research questions: (i) How are video seeking activities associated with highlighting behaviors? (ii) How do active viewing behaviors contribute to learning? (iii) What are the appropriate affordances of video interfaces for engagement?

To answer these questions, we deployed ViDeX to an undergraduate class for a semester. The analysis of online activity traces and interview data provided novel findings on video highlighting behavior in educational contexts. For instance, the type of content (i.e., textual or visual) was an important determinant of highlighting behavior. The users of the highlighting features relied more often on semantic cues of the Transcript than visual cues of the Filmstrip.

2 RELEVANT WORK
For online learning to be effective, students must have access to tools that facilitate active learning. For example, in the video medium, this could be done by supporting aggregation and visualization of viewer interactions to facilitate navigation [7, 9]. In other media, a common way that learners interact with information is by commenting, highlighting, tagging, or otherwise annotating. Annotation is a type of information interaction that has been found to have a positive relationship with learning outcomes, such as recall, comprehension, and engagement [2, 3, 15]. Consequently, many digital reading environments have been designed to support active reading by providing annotation [e.g., 11, 17]. Tashman and Edwards [16] define active reading as "reading activities that involve more interaction with the reading media than simple sequential advancement through the text."

While annotation is a well-established information practice in text environments, it is less common when viewing video. This may be because there are few video viewing systems that enable this functionality, although there are exceptions [e.g., 10]. The lack of systems that allow for annotation may be due to the difficulty of designing systems that enable video annotation in a user-friendly way. Hayles [8] discusses the challenge of translating information practices for learning activities from one medium to another, arguing that some functionality of the particular practice is both lost and gained in the process. As a result, the design of video-based learning systems must not simply borrow practices from other media without considering the differences between media.

While active reading is specific to text, we can learn from the developments of digital reading environments to develop new approaches to encourage active viewing. Highlighting is one of the most popular types of annotation in text environments [13]. This is, perhaps, because of the low cognitive load required to emphasize content, which does not distract from the individual's primary task. The popularity of highlighting may also be due to learners' assumption that highlighting has positive effects on their reading processes and outcomes. By supporting highlighting in video, we can attempt to translate a common text-based practice and test its effectiveness in video-based learning environments.

3 VIDEX
ViDeX is a system designed for active viewing through content interaction and personalization. Encouraging information interactions similar to the more familiar practice of annotating text, ViDeX aims to make interacting with video intuitive. ViDeX currently supports the ability to highlight intervals of video and text. We created ViDeX to explore the effectiveness of highlighting video. We recognize that annotation is an important way in which many individuals learn and interact with information in other media, and suspect that viewers may benefit from similar tools.
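Concretely, a highlight interval of this kind can be modeled as a small record tying a span of media time to a color and the UI element it was created from. The sketch below is illustrative only; the class and field names are my assumptions, not ViDeX's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Highlight:
    """An illustrative video highlight: a span of media time plus a color.

    Field names are assumptions for this sketch, not ViDeX's schema.
    """
    start: float  # seconds into the video
    end: float    # seconds into the video
    color: str    # highlight color chosen from the toolbar
    source: str   # UI element used to create it: "transcript" or "filmstrip"

    @property
    def length(self) -> float:
        return self.end - self.start

# A short, bookmark-like highlight made via the transcript.
h = Highlight(start=62.0, end=68.2, color="yellow", source="transcript")
```

Because each word in a time-aligned transcript maps to a time in the video, a text selection and a time interval are interchangeable under a model like this.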
While commenting exists in many video viewing systems, highlighting is a unique feature.

Figure 1: A screenshot of ViDeX, displaying the Transcript (far left) and Filmstrip (bottom).

The ViDeX interface is composed of three elements: the Player, the Filmstrip, and the Transcript. The Filmstrip provides users with a visual summary of the video, allowing users to skim the video content. The Filmstrip itself is a sequence of thumbnails, where each thumbnail represents a segment of the video. As the cursor moves across the width of a thumbnail, the picture in the thumbnail changes according to the linear mapping between the width of the thumbnail and the part of the video it represents. This allows the user to preview the visual content of that segment of the video.

To highlight, users first drag to select a span of content in either the Filmstrip or the Transcript, and then choose a highlight color using the toolbar above the Player. Highlighting creates a strip of color along the top edge of the Filmstrip and colors the corresponding text in the Transcript.

The Transcript brings the user a textual representation of the video. Each word in the Transcript is associated with a time in the video. Consequently, selections and highlights in either the Filmstrip or the Transcript are synchronized. Any action made in one widget transfers to the other.

4 METHODS
We evaluated engagement with ViDeX using a mixed-method approach, employing log data analysis and interviews, over the course of a semester in an undergraduate chemistry class. Quantitative log data were used to identify patterns of information interaction. Qualitative interview data were used to better understand and explain these patterns. Students enrolled in the class were provided with videos related to lectures and labs. The videos ranged in length from 3.93 to 12.92 minutes (M = 8.73 minutes, SD = 2.94 minutes).
One of the authors visited the class at the beginning of the semester to introduce students to ViDeX and invite them to participate in the study by using the system to watch videos assigned by their instructor. Using ViDeX was not a class requirement, nor was watching the videos. The videos were also available on YouTube, so students could watch the content without participating in the study. Participants were not compensated for using ViDeX.

All relevant system events were logged, including video events (such as loading, playing, and seeking) and highlighting events (such as the color, source, and span of each highlight). All data were logged anonymously. To allow students time to familiarize themselves with the system and its novel features without affecting our analysis of typical usage, we analyzed data starting a month after the start of the semester. By the end of the semester, 6,995 system events from 28 participants were logged. A limitation of the study was the relatively small sample size. The log data were analyzed, using R, with an exploratory data analysis approach. We treated system events as independent in our data analysis.

In addition to logging system events, we also collected qualitative data through interviews with students from the class at the end of the semester. One of the authors recruited six participants for one-on-one and small-group semi-structured interviews of about ten minutes each. The interviews explored participants' typical use of ViDeX and their learning practices with and without ViDeX. Interviewees were paid a $15.00 honorarium for their time.

5 RESULTS & DISCUSSION
In this section, we explore students' viewing and highlighting behaviors. We supplement the quantitative analysis with feedback we received from interviewees about their typical use of ViDeX throughout the semester.

5.1 Highlighting
Of the 28 participants whose interactions with the system and content were logged, five (18%) highlighted at least once.
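The per-participant and per-highlight figures reported here come from an exploratory pass over event logs of this kind. The authors used R; this Python sketch, the toy events, and the field names ("user", "type", "span") are my assumptions, not the actual log schema:

```python
from statistics import mean, median

# Toy event log in the spirit of the paper's anonymized system events.
# Field names and values are illustrative, not the real ViDeX schema.
events = [
    {"user": "u1", "type": "seek"},
    {"user": "u1", "type": "highlight", "span": 4.0},
    {"user": "u2", "type": "highlight", "span": 6.2},
    {"user": "u3", "type": "play"},
    {"user": "u3", "type": "highlight", "span": 33.0},
]

# Who highlighted at least once, and how long were the highlight spans?
highlighters = {e["user"] for e in events if e["type"] == "highlight"}
spans = sorted(e["span"] for e in events if e["type"] == "highlight")

summary = {
    "n_highlighters": len(highlighters),
    "median_span": median(spans),
    "mean_span": round(mean(spans), 2),
    "short": sum(s < 10 for s in spans),  # bookmark-like spans under 10 s
}
```

On real logs, the same pass would be run per participant and per video before computing the aggregate statistics.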
In total, these five participants created 51 highlights. All but one of the participants highlighted in more than one login session. The highlights varied in color, length, and location within the videos. The median length of highlight spans was 6.23 seconds (M = 11.55, SD = 13.15), and ranged from 0.16 to 54.69 seconds. Most highlights (n = 29, 56.86%) were less than ten seconds long and only 6 (11.76%) were longer than thirty seconds. Unlike previous studies of text highlighting [14], we did not find that some participants engaged in heavy highlighting of the content (so-called "happy highlighters").

Figure 2: Highlight length in seconds, showing that 56.86% of highlights were less than ten seconds and only 11.76% were longer than thirty seconds.

To highlight, the user selects content by clicking and dragging in either the Filmstrip or the Transcript, then chooses a highlight color using the toolbar above the Player. Few selections became highlights: while students made 942 selections, only 51 highlights were created. One reason for this may be the interface itself. The select-and-highlight sequence may not have been very intuitive or visible to most users. At the end of the semester, four of the six interviewees explained that they did not know they could highlight using ViDeX, which is surprising given that ViDeX was demoed at the beginning of the semester. Highlighting may have been used more if students had known about it, especially considering that all interviewees reported taking notes. The four interviewees who did not highlight explained that they took notes using a word processor, a paper notebook, or screenshots.

Of the highlights that were created, many appeared to be used as bookmarks. 56.86% (n = 29) of the highlights were less than ten seconds long.
This suggests that students were placing bookmarks in the video more than selecting spans of content, such as highlighting full sentences or paragraphs in the Transcript. One of the interviewees explained that he used highlights as bookmarks for spans of video to watch later. He would replay the bookmarked video to learn more about a keyword. Using highlights exclusively as bookmarks is somewhat surprising, since highlights have been found to have additional functions in text [e.g., 12].

Given that students mainly highlighted content using the Transcript, why did they use the highlights as bookmarks and not for other types of functions that are common when annotating text? This tendency may suggest that students were indeed strategic about their behavior. Even when highlighting text, students realized that this activity is done in the context of video. A bookmark highlight is sufficient to locate the area of interest. By clicking and jumping to a highlight, they were able to bring the playhead to the area of interest and play the relevant segment of video. The majority of highlights (n = 26, 50.98%) were also jumped to by clicking within the highlighted span of content. The actual number of highlights that were jumped to is likely higher, because students may have clicked just before the highlighted span, especially for short highlights that are difficult to click. Indeed, 86.27% of highlighted spans were viewed again.

Notably, while most selections were made using the Filmstrip (n = 764, 81.00%), all but two highlights were created with the Transcript. Students' preference to highlight using the Transcript may be explained in a similar way: highlighting was used mainly to bookmark keywords, rather than to emphasize the full extent of visual content of interest.
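A hedged sketch of the kind of check behind jump-to-highlight counts like those above: a seek counts as a jump to a highlight if it lands within the span, with a small tolerance for clicks just before it. The function and the 0.5-second tolerance are my assumptions; the paper does not report how such jumps were counted:

```python
def jumped_to(start: float, end: float, seek_time: float,
              tolerance: float = 0.5) -> bool:
    """Illustrative check: does a seek to `seek_time` count as a jump to
    the highlight spanning [start, end]? The pre-span tolerance accounts
    for clicks landing just before short highlights, which are hard to
    hit exactly."""
    return (start - tolerance) <= seek_time <= end

# A click slightly before a one-second highlight still counts as a jump.
print(jumped_to(62.0, 63.0, 61.7))  # True
print(jumped_to(62.0, 63.0, 60.0))  # False
```

Without such a tolerance, near-miss clicks just ahead of short highlights would be undercounted, which is the concern the paper raises.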
As highlighting was based on textual information, the Transcript was the natural place for highlighting.

Another important aspect of highlights is their location relative to the playhead position, which represents the currently playing frame of video content. Approximately two-fifths of the highlights were created within 10 seconds of the playhead (see Figure 3). Highlights were created behind (n = 8, 15.69%) and ahead (n = 23, 45.10%) of the playhead. The behavior of highlighting close to and far from the playhead was consistent across participants. Highlighting ahead of the playhead suggests that students may have been skimming the text to identify important content to highlight and guide their future information interaction. This pattern, again, is consistent with the role of highlights as bookmarks described above.

Figure 3: Highlight position relative to the playhead. 39.22% of highlights were created within 10 seconds of the playhead.

How information is communicated through video may affect highlighting and seeking. An interviewee explained that he would have highlighted more frequently if the videos had more "show how" rather than "tell how" information. When communicating embodied knowledge, the Transcript would likely contain less textual and more visual information, requiring the student to watch the video to fully understand the context of what is being communicated.

5.2 Seeking
Students used the Filmstrip and Transcript to seek information differently. The Filmstrip was used for navigation of visual content, such as seeking through the video to find a specific PowerPoint slide, whereas the Transcript was used for scanning for keywords and highlighting. Given that the content was video lectures, it is not surprising that 76.6% of all seeks were made using the Filmstrip. Interviewees explained that the Filmstrip was useful for searching for visual content. The interviewees, for example, explained that they would seek through the video for specific PowerPoint slides and transitions to new slides.
Once they found the slide they sought, the interviewees explained that they would copy the information using a word processor, a paper notebook, or screenshots.

It seems that the two user interface elements were used for different types of information seeking, suggesting that seeking video and text support different information needs. The median seek distance using the Filmstrip was 18.27 seconds, whereas the median seek using the Transcript was 108.03 seconds. The Filmstrip seeks may have been used to fine-tune the current playhead location: for example, seeking backwards to repeat the last few spoken words or jumping forwards to search for a transition to a new PowerPoint slide. In contrast, Transcript seeks were often larger and occasionally made across the entire length of the video. This may have been the result of finding important textual information while ignoring the current position of the playhead. Indeed, the Transcript allowed students to navigate and highlight the content without watching it. An interviewee explained that he would scan the Transcript from top to bottom for keywords to identify which segments of video to watch. Unlike the Filmstrip, the Transcript was used for both navigation and highlighting. Thus, it comes as little surprise that when jumping to highlights, the Transcript was used more often (79.7%) than the Filmstrip (20.3%).

Figure 4: Jumping direction and distance by user interface element. The Filmstrip was used for shorter jumps than the Transcript.

6 CONCLUSION
The results suggest that students made strategic use of the textual and visual user interface elements. We see two main differences with regard to students' interaction with textual information (using the Transcript) and visual information (using the Filmstrip). Most seeks were made using the Filmstrip.
This is likely because the rich cues aided seeking to visual changes in the video, such as PowerPoint slide changes, and because students are used to seeking through video using the horizontal bar beneath it in other video viewing environments, such as YouTube. Highlighting, on the other hand, was conducted primarily via the Transcript. Similarly, when jumping to highlights, the Transcript was used most often.

A more careful analysis suggests a specific use for highlights: that of bookmarks for future watching, based on textual keywords. This is suggested by seeing that students (i) highlighted very short spans, (ii) did so using the Transcript, and (iii) highlighted far away from the current location of the playhead. This interpretation is supported by the student interviews. One explanation for these patterns is that students brought in their work habits, using the tools in familiar ways and not making use of the linked nature of these two elements in ViDeX. It is possible that better integration of the two elements, such as a vertical Filmstrip adjacent to the Transcript, would help learners see the connection between the two. However, another possibility is that this behavior demonstrates a desired and adaptive behavior. Students knew that they could use both elements for seeking, and they were strategic about which element to use. The different patterns may make sense, given the different goals of seeking. It is important to remember that the videos used in this study were highly verbal, which invites this kind of behavior. In that case, the fact that ViDeX supported different uses of its elements to achieve different goals may suggest a useful implementation.

To summarize, we identified strategic use of textual and visual user interface elements for different types of information interactions (short and long seeking, jumping to highlights, and highlighting). Specifically, we found that highlighting of video is very different from highlighting text.
For one, it does not seem to be very intuitive. Also, these highlights tend to be shorter, while typical text highlights often select phrases, sentences, or paragraphs. Video highlights also appeared to serve only one function, unlike text highlights. Further research and support for learners are needed to better understand how to make video highlighting a more effective tool for interaction with video content.

REFERENCES
[1] Mortimer J. Adler and Charles Van Doren. 1972. How to Read a Book. Simon & Schuster, New York, NY.
[2] Thomas H. Anderson and Bonnie B. Armbruster. 1984. Studying. In Handbook of Reading Research. Pearson, New York, NY, 657–679.
[3] David C. Caverly and Vincent P. Orlando. 1991. Textbook study strategies. In Research and Advanced Technology for Digital Libraries. International Reading Association, Newark, DE, 86–165.
[4] Michelene T. H. Chi and Ruth Wylie. 2014. The ICAP framework: linking cognitive engagement to active learning outcomes. Educational Psychologist 49, 4 (2014), 219–243.
[5] Suzanne L. Dazo, Nicholas R. Stepanek, Robert Fulkerson, and Brian Dorn. 2016. An empirical analysis of video viewing behaviors in flipped CS1 courses. ACM Inroads 7, 4 (Nov. 2016), 99–105.
[6] Matthew Fong, Gregor Miller, Xueqin Zhang, Ido Roll, Christina Hendricks, and Sidney Fels. 2016. An investigation of textbook-style highlighting for video. In Graphics Interface. 201–208.
[7] Philip J. Guo, Juho Kim, and Rob Rubin. 2014. How video production affects student engagement: an empirical study of MOOC videos. In Proceedings of the First ACM Conference on Learning @ Scale (L@S '14). ACM, New York, NY, 41–50.
[8] N. Katherine Hayles. 2005. My Mother Was a Computer: Digital Subjects and Literary Texts. University of Chicago Press, Chicago, IL.
[9] Juho Kim, Philip J. Guo, Carrie J. Cai, Shang-Wen (Daniel) Li, Krzysztof Z. Gajos, and Robert C. Miller. 2014. Data-driven interaction techniques for improving navigation of educational videos. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST '14). ACM, New York, NY, 563–572.
[10] Scott LeeTiernan and Jonathan Grudin. 2001. Fostering engagement in asynchronous learning through collaborative multimedia annotation. In INTERACT. 472–479.
[11] Chunyuan Liao, François Guimbretière, and Ken Hinckley. 2005. PapierCraft: a command system for interactive paper. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology. ACM, New York, NY, 241–244.
[12] Catherine C. Marshall. 2009. Reading and Writing the Electronic Book. Morgan & Claypool Publishers, San Rafael, CA.
[13] Ilia A. Ovsiannikov, Michael A. Arbib, and Thomas H. McNeill. 1999. Annotation technology. International Journal of Human-Computer Studies 50, 4 (1999), 329–362.
[14] Frank Shipman, Morgan Price, Catherine C. Marshall, and Gene Golovchinsky. 2003. Identifying useful passages in documents based on annotation patterns. In Research and Advanced Technology for Digital Libraries. Springer, Berlin, Germany, 101–112.
[15] Michele L. Simpson and Sherrie L. Nist. 1990. Textbook annotation: an effective and efficient study strategy for college students. Journal of Reading 34, 2 (1990), 122–129.
[16] Craig Tashman and W. Keith Edwards. 2011. Active reading and its discontents: the situations, problems and ideas of readers. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 2927–2936.
[17] Craig Tashman and W. Keith Edwards. 2011. LiquidText: a flexible, multitouch environment to support active reading. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 3285–3294.

