UBC Theses and Dissertations


Leveraging students' handwritten notes to link to watched instructional videos

Jaddigadde Srinivasa, Ranjitha

Abstract

Handwritten note-taking with pen and paper remains a preferred medium for information seeking and comprehension across diverse learning materials. Yet students, especially in video-based learning settings, resort to laborious practices to re-find the corresponding video context when reviewing their notes. We propose using students' handwritten notebook content as interoperable links for retrieving previously watched instructional videos. This work articulates its research objectives in two phases. In phase 1, we analyzed the characteristic features of notebook content derived from watched videos; in phase 2, we investigated students' expectations and requirements for the proposed video retrieval system. Analysis of handwritten notebook samples and the related video materials from a lab study with ten engineering students revealed distinctive representations of note content, such as text, formulas, figures, and, most commonly, a hybrid of all three. A box-plot analysis of notes and the watched video content confirmed that at least 75% of the identified note samples exhibited a verbatim overlap of 50% or more with the related video content, suggesting their potential use as query artifacts. Additionally, the collected note samples referenced videos at three temporal levels: point, interval, and whole video. A 12-student lab study indicated higher satisfaction with video matches returned at the 'interval' level and showcased students' existing workarounds for linking back to videos. Overall, students gave the system a positive mean usability score for re-finding note-specific video context. A medium-fidelity prototype was built, using off-the-shelf computer vision algorithms, to deduce the technology requirements associated with the proposed approach. When tested on the 181 identified note samples, the prototype matched 77.5% of them to the corresponding watched videos.
The proposed method worked exceptionally well at finding suitable videos for textual notes, yielding 98% accuracy. Accuracy across the three temporal levels varied with the degree of note-to-video content overlap, reflecting the fragmented nature of that overlap. Overall, the presented work demonstrates the prospect of augmenting prevalent personalized learning (PL) strategies, such as handwriting notes for future reference, to easily re-find and connect to watched videos.


License

Attribution-NonCommercial-NoDerivatives 4.0 International
