Improve Classroom Interaction and Collaboration using i>clicker

by

Junhao Shi

B.Eng., Zhejiang University, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2013

© Junhao Shi 2013

Abstract

The i>clicker student response system is used to answer multiple-choice questions in university classrooms across North America. We investigated how existing i>clicker remotes could be used to improve classroom interaction and collaboration by developing and using custom software applications, each targeted at a different aspect of classroom interaction, that augment basic i>clicker capability.

Java-based software was written to replace the vendor-provided driver for the i>clicker base station; it controls initialization, starting voting, requesting votes, stopping voting, and updating the LCD display on the base station. WebClicker extends voting to commonly used digital devices (cell phone, smart phone, tablet, and laptop) using a cloud-based architecture that forwards votes to a client application.

Three client applications were developed. Each connects to either the Java-based driver or WebClicker to obtain votes, and each extends the power of the standard i>clicker software. Clicker++ supports most existing features, such as multiple-choice questions, but adds per-group visualization of voting outcomes, state-specific interpretation of individual students' votes, and other features not in the vendor-provided software. Clic^in provides additional pedagogical support so students can practice newly obtained skills in class. It embeds "gamelets" with content-specific behavior that can be played individually, by an entire class, or in parallel by groups to support concept demonstration, class-wide participation, and group competition. Finally, Selection Tool allows students to control projected material in the classroom through slide navigation and content highlighting.

Two usability experiments were conducted. One investigated cognitive load when using an i>clicker remote to interact with a gamelet that illustrates binary search tree insertion. The remote was slower and more error-prone than a mouse-based interface, but the difference is probably acceptable in a classroom setting. Both interaction time and error rate decreased as participants gained practice. A second experiment compared Selection Tool with a mouse for content highlighting. Again the i>clicker was slower and more error-prone than a mouse, and it was difficult to correctly highlight smaller targets, but the ability to use an i>clicker for this task shows promise.

Preface

All the research work in this thesis was carried out under the supervision of Dr. Kellogg S. Booth. Ethics approval for experimentation with human subjects was granted by the UBC Behavioural Research Ethics Board (BREB ID H11-01756).

I am the primary researcher of all work in this thesis. Scott Newson was a co-participant in the development of the i>clicker base station driver that is described in Chapter 2. He was also the main contributor to the initial version of Clic^in that is described in Chapter 5. Laurence Baclin participated in the determination of the communication protocol between the i>clicker base station and a PC. Orkhan Muradov co-developed WebClicker, which is described in Chapter 3.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
1.1 Student Response Systems
1.2 Research Contributions
1.3 Outline of the Thesis
2 Reverse Engineering the Base Station Driver
2.1 Understanding the Communication Protocol
2.2 Base Station Driver Development
3 WebClicker: From Remotes to the Cloud
3.1 Client Devices Support
3.2 Managing the Vote Flow in the Cloud
4 Clicker++: Reproducing and Improving the i>clicker Software
4.1 Basic Clicker++ Functionality
4.2 Improved Histograms
4.3 Specifying Groups
4.4 Classroom Beta Testing
5 Clic^in: Motivation, Architecture and Design
5.1 Motivation
5.2 Architecture Overview
5.3 Design Specification
6 Interacting with a Gamelet
6.1 Binary Search Tree Gamelet
6.2 Interaction Modes
6.3 Splitter, Filter, and Thresholder
6.4 Examples of Other Gamelets
7 Cognitive Load in Gamelets
7.1 Background
7.2 Participants
7.3 Apparatus
7.4 Task and Procedure
7.5 Hypotheses
7.6 Results and Discussion
7.7 Conclusion
8 Selection Tool: Slide Navigation and Content Highlighting using a Remote
8.1 Motivation
8.2 Overview of the Selection Tool
8.3 Design of the Highlighting Tool
9 Performance Assessment of the Content Highlighting Tool
9.1 Participants
9.2 Apparatus
9.3 Task and Procedure
9.4 Hypotheses
9.5 Results and Discussion
9.6 Conclusion
10 Conclusion and Future Work
Bibliography
Appendices
A Communication Protocol Between i>clicker Base Station (Old) and PC
A.1 Initializing Base Station
A.2 Starting Voting
A.3 Requesting Vote
A.4 Stopping Voting
A.5 Updating LCD
B Communication Protocol Between i>clicker Base Station (New) and PC
B.1 Initializing Base Station
B.2 Starting Voting
B.3 Requesting Vote
B.4 Stopping Voting
B.5 Updating LCD
C Participant Survey: Comparison of the Cognitive Load of Different Gamelet Interaction Techniques
D Participant Survey: Comparing the Cognitive Load of Different Gamelet Interaction Techniques (Follow-up Study)
E Participant Survey: Comparing the Performance of Highlighting using i>clicker Remote and Mouse

List of Tables

2.1 A BSR Y packet sent by the base station
7.1 Break-down of results in each step
A.1 Packets for the old base station
A.2 PCC 1 - Set the frequency of the base station
A.3 PCC 2 - Set instructor's remote ID
A.4 PCC 3
A.5 BSA 3
A.6 PCC 4
A.7 BSA 4
A.8 PCC 5
A.9 BSA 5
A.10 PCC 6
A.11 BSA 6
A.12 PCC 7 - Request new votes from the base station
A.13 BSA 7
A.14 BSR 7 - New votes from the base station
A.15 PCC 8
A.16 BSA 8
A.17 PCC 9
A.18 BSA 9
A.19 BSR 9 - Vote count summary from base station
A.20 PCC 10 - Set first line of base station LCD
A.21 PCC 11 - Set second line of base station LCD
B.1 Packets for the new base station
B.2 PCC 1
B.3 BSA 1
B.4 PCC 2
B.5 PCC 3
B.6 PCC 4
B.7 PCC 5 - Set instructor's remote ID
B.8 PCC 6
B.9 BSR 6 - Base station firmware and frequency
B.10 BSR X
B.11 PCC 7
B.12 PCC 8
B.13 PCC 9
B.14 BSR 9 - Summary of the previous voting session
B.15 PCC 10
B.16 PCC 11
B.17 PCC 12
B.18 PCC 13
B.19 PCC 14
B.20 PCC 15
B.21 PCC 16
B.22 BSR Y - First new vote from the base station
B.23 BSR Z - Follow-up new vote from the base station
B.24 PCC 17
B.25 BSR 17
B.26 PCC 18
B.27 PCC 19
B.28 PCC 20
B.29 PCC 21
B.30 BSR 21 - Vote count summary from base station
B.31 PCC 22 - Set first line of base station LCD
B.32 PCC 23 - Set second line of base station LCD

List of Figures

1.1 Components of the i>clicker SRS
1.2 Base station driver, WebClicker, and client applications
3.1 Laptop Service interface
3.2 Overview of WebClicker architecture
4.1 Histogram by lab group
5.1 Overview of Clic^in architecture
6.1 A gamelet for concept demonstration
6.2 A gamelet for class-wide participation
6.3 A gamelet for group competition
6.4 Lifetime of a vote in a concept demonstration scenario
6.5 Lifetime of a vote in a class-wide participation scenario
6.6 Lifetime of a vote in a group competition scenario
6.7 A linked list gamelet
6.8 Procedure of linked list insertion
7.1 A slide showing a multiple-choice question
7.2 Graphics interface of a BST gamelet
7.3 Error rate for each interaction technique and step combination
7.4 Interaction time for each interaction technique/step combination
7.5 Error rate from the first tree to the last (fifth) tree
7.6 Interaction time from the first tree to the last (fifth) tree
7.7 Error rate from the first tree to the last (sixteenth) tree
7.8 Interaction time from the first tree to the last (sixteenth) tree
8.1 A slide that has a mistake
8.2 Content highlighting process
9.1 Examples of unsuccessful highlighting trials
9.2 Part of the highlighted area inside the target
9.3 Error rate for each device and size combination
9.4 Highlighting time for each device and size combination
9.5 Interaction time for each experience level and size combination
9.6 Coverage for different target sizes
9.7 Per-click time from the first to sixth click

Acknowledgements

First, I would like to thank my supervisor, Dr. Kellogg S. Booth, for his support, inspiration and guidance during my two-year M.Sc. program. Despite his busy schedule, he always tried his best to discuss the research project with me and offer me advice. I was always fascinated by his wide knowledge base, his unique perspective, and his problem-solving skills. Also, I would like to thank Dr. Romeo Chua, from the School of Kinesiology, for his time and effort as my second reader. The feedback he provided was extremely valuable.

I shall never forget my parents in any circumstance. Although they live on the other side of the ocean, their understanding, confidence, encouragement and support are essential to me. I am proud of them. I am lucky enough to have so many good friends with me, and they are also an important part of my life here in Vancouver.

The research reported in this thesis was supported by the Networks of Centres of Excellence Program (NCE) through the Graphics, Animation, and New Media (GRAND) network, by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grant program, and by the Canada Foundation for Innovation (CFI), The University of British Columbia, and the Institute for Computing, Information and Cognitive Systems (ICICS), which provided research infrastructure. I would also like to thank all the participants for their precious time and valuable feedback.

Chapter 1
Introduction

In this thesis we explore how to improve classroom interaction and collaboration using the existing commercial i>clicker [5] hardware. i>clickers are a form of interactive student response system (SRS) that has recently been introduced into classroom lectures as one of many efforts to innovate the classroom experience.

Lectures have for many years been a popular form of educational experience at many universities. Most of the time, an instructor presents material related to the course while students listen. Once in a while, students raise their hands and ask questions when they are confused or need further clarification. Due to time constraints, however, communication between an instructor and students is often limited, especially for classes with large enrollments.
It can be important for an instructor to know whether students understand the material being presented so that he or she can better control the pace of the lecture. Yet this information is not available if the class as a whole cannot provide useful input. Additionally, a lecture that encourages interaction and collaboration improves the sense of involvement and participation among students. Thus, it is important to improve interaction and collaboration from the perspective of both instructors and students.

One way to achieve this goal is to adopt a student response system (SRS), which is now common in many universities across North America. At The University of British Columbia the i>clicker student response system has been adopted and deployed in most classrooms, and a number of courses use i>clickers during lectures. There are a few standard ways in which i>clickers are used, but all have somewhat limited interactivity and most are based on simply polling students' opinion or knowledge by multiple-choice voting.

In our research, we looked for opportunities to develop more interactive applications based on the existing i>clicker system infrastructure. Deploying these applications does not require extra hardware, which makes it easier for instructors to adopt them in a real class. All that is required is the new software that we have developed.

After implementing new and largely platform-independent driver software for the i>clicker hardware, we developed a set of applications that use the driver to support in-class activities that engage students by letting them interact with material presented in lecture using i>clickers. One application provides similar functionality to the standard vendor-provided software. Another application supports additional activities through "gamelets" that provide a game-like way to interact with visual representations of information and activities related to course material, and a more generalized notion of classroom presentation. A third application adds functionality to allow students to control lecture slides and manipulate on-screen content. All of the applications were implemented in Java to provide cross-platform compatibility. They are currently robust prototypes that can eventually be integrated into a full-function system to support a variety of novel classroom learning activities.

In addition to the base station driver and the new applications for i>clickers, a cloud-based server was developed that supports other personal devices, such as laptops, smart phones, and basic cell phones using SMS, to be used in addition to or in lieu of i>clickers. The prototype client applications we developed will work either directly on the i>clicker hardware infrastructure using an instructor's laptop, or using a cloud-based server that requires only a standard web browser for the instructor.

In this chapter, we provide a brief introduction to student response systems (SRS) and how they have traditionally been used in classrooms, followed by a summary of our research contributions. We then outline the material presented in the remainder of this thesis.

1.1 Student Response Systems

SRSs have been in existence for a few decades [27]. They are widely used in classrooms to allow students to provide immediate feedback so that both instructors and students can assess how well students understand the course material.
Although there are quite a few commercial products on the market [2][6][3], many of them take a similar approach. Every student owns a remote that can transmit to a base station wirelessly. Students use their remotes to "vote" by pressing one of the keys on the remote to indicate their answers to questions posed by an instructor, or to register their preferences or opinions when presented with multiple options. Connected to a classroom computer via a USB cable, the base station sends all the votes it receives from remotes to the computer. Software running on the computer then processes the votes, logs them, and optionally displays them, most commonly using either a histogram or a pie chart to summarize responses from the students.

There has been quite a lot of discussion in the research community suggesting that use of an SRS has positive effects on the learning process. Many researchers found that students who used an SRS in class showed higher levels of attendance, participation, engagement, and motivation [25][39][31][22][24]. Additionally, it was shown that using an SRS helped students to better understand the course material in lectures [37][31][18]. However, researchers have not reached a consensus regarding whether using an SRS is helpful in terms of improving long-term learning. For example, some studies did find improvements in final exam scores among those who used an SRS in class [25][37][22], but others did not [31][28][18].

Research articles have pointed out that using an SRS is itself not enough to improve learning. Researchers have argued that many other factors come into play as well. For example, O'Donoghue and O'Steen believe that lecturers should develop approaches to using an SRS so that it can be aligned with students' personalities or used in a discipline-specific context [35]. Len discussed how different reward structures could motivate interaction when using an SRS [30]. Trees and Jackson focused on how students' characteristics and course design choices were related to SRS contribution and students' involvement [40]. Crossgrove and Curran suggested that differences between courses targeting science majors and non-majors were important when considering using an SRS [17].

Another research direction has investigated the effectiveness of applying SRSs in different topic areas or with different types of users. Researchers have examined SRS usage in various disciplines, such as general chemistry [25], biology [37], medical science [32], psychology [39], physiology [22], and accounting [31]. Although SRS usage has mostly been studied in post-secondary institutions, Penuel et al. provided a survey describing how SRSs have been applied in elementary and secondary institutions [36]. Blood reported results on how SRS usage affected participation and learning of students with emotional and behavioral disorders [16].

The popularity of SRSs and the rich body of research that has already been conducted on them make us believe that SRSs could be a good platform on which to further develop classroom technology. Our own institution, The University of British Columbia (UBC), has adopted an SRS (i>clicker) to support classroom interaction and collaboration. Many large classrooms on campus are configured with i>clicker base stations. Students can buy an i>clicker remote at the UBC Bookstore. Figure 1.1 shows both the base station and the remotes currently used at UBC. These were also used in our research.
Almost always, the i>clicker system is used for answering multiple-choice questions so that both the instructor and the students are able to evaluate students' performance on the fly. Our research focuses on how to augment the basic multiple-choice-answer functionality of i>clickers to provide a variety of novel interaction paradigms in a classroom setting. We leave for others the ongoing investigation of how well the basic functionality promotes learning. We hope that in future others will also look at the new functionality we have introduced.

Figure 1.1: The components of the i>clicker SRS, a base station and the two types of remotes used in this research project. The blue remote is for an instructor; the white remote is for students.

1.2 Research Contributions

This thesis describes a set of research activities done within the framework of augmenting classroom technology using existing i>clicker hardware. The goal of the research is to fully explore the power of the i>clicker hardware and to expand classroom interaction and collaboration beyond simple multiple-choice questions. Below we summarize our research contributions by briefly describing each component of our work. Figure 1.2 is an overview of the different components.

Our first step was to develop new base station driver software so that application developers could in the future create customized client applications based on i>clicker technology. Both the reverse engineering process that was followed and the communication protocols between remotes and the base station, and between the base station and a computer, are described in detail. How the driver is used in client applications is also briefly described.

A later addition, WebClicker, was developed to support commonly used digital devices in the voting process, such as cell phones, smart phones, tablets, and laptops. We designed and implemented a cloud-based architecture, as well as four web services, each targeted at one type of device. This generalizes the i>clicker hardware and provides a path for future migration to more generally available hardware.

To test our ideas about how classroom engagement could be enhanced by introducing more interactivity into i>clicker usage, three client applications based on the i>clicker hardware and base station driver software were developed. These applications can be connected to either the base station driver or to the WebClicker infrastructure to obtain input from users. Clicker++ is a reimplemented version of the i>clicker software with some design defects fixed. Clic^in is an application that allows students to practice their newly acquired knowledge in the classroom through new types of activities that we call "gamelets", which use i>clickers to guide the content using game-like mechanics. The Selection Tool provides an interaction paradigm that enables students to navigate through the slides being shown in the classroom and to highlight content on a slide, thus achieving more efficient communication among members of the class. All of the software has been tested in classroom situations, although the individual applications are still prototypes and have yet to be fully integrated into a package suitable for widespread deployment.

To assess some aspects of the new functionality, two lab studies were carried out. The results are presented, along with a description of the studies and their goals. The first study examined the cognitive load of interacting with a gamelet using an i>clicker remote.
Gamelets are the main functional modules of Clic^in, one of the prototypes we developed. The second study compared the highlighting paradigm designed for the Selection Tool with a more traditional paradigm using a mouse, to gain a better idea of an i>clicker remote's performance in this task.

1.3 Outline of the Thesis

In Chapter 2 we describe the approach we used to build a base station driver, which enables developers to build client applications using existing i>clicker hardware. Chapter 3 describes WebClicker, a web application that allows students to vote using not only a remote, but other commonly used digital devices as well. Chapters 4 through 9 discuss three client applications we designed to improve classroom interaction and collaboration. Chapter 4 presents Clicker++, a reproduction of the i>clicker software that fixes several design defects of the original version. Chapter 5 introduces Clic^in, a presentation application that supports a sequence of activities, which can be either static slides or interactive gamelets. Chapter 6 provides a detailed description of the gamelet concept, which makes different interaction modes available, such as concept demonstration, class-wide participation, and group competition. Using a binary search tree gamelet as an example, Chapter 7 reports on an experiment investigating the cognitive load of interacting with a gamelet using an i>clicker remote. Chapter 8 demonstrates how the Selection Tool can be used to allow students to navigate slides and highlight content on a slide using an i>clicker remote, and Chapter 9 evaluates the interaction speed and error rate of the content highlighting tool, comparing it with a mouse. In Chapter 10 we summarize our research contributions and outline future work in this area.

Figure 1.2: Base station driver, WebClicker and client applications: Clicker++, Clic^in and Selection Tool.

Chapter 2
Reverse Engineering the Base Station Driver

One important reason for choosing i>clicker as the hardware platform for improved classroom interaction is that it has been adopted at UBC, which means that a high percentage of students own an i>clicker remote and are familiar with the technology, and that there is existing hardware support installed in most classrooms on campus. The advantages are two-fold. First, easy access to hardware in classrooms makes it convenient to test applications with real users because no special arrangements need to be made. Second, it means that new applications developed in the research can be deployed for general use in any classroom by any instructor without the need for any new hardware or software except for a simple web-based download of software to the instructor's laptop. This reduces costs both for the university and for students.

Unfortunately, the standard software that is provided for i>clicker does not provide all of the functionality that we need to implement some of the new applications we designed. In order to achieve our research goals, a way had to be found to access and process votes collected by the base station in real time to drive the applications, not just collect votes for subsequent off-line processing, which is what the standard i>clicker software does.

Prior research in our lab had built a prototype that modified the source code for the vendor-supplied software to support some of the additional functionality. That served as a proof of concept, but it was not extensible, both because the prototype used the "trick" of bypassing most of the actual vendor software by sending the low-level data to another process via a socket connection, and also because source code for subsequent versions of the vendor software that supported newer hardware was not available to us. So we had to first develop our own software driver on which to build the applications we had designed.
This chapter describes how an i>clicker remote communicates with a base station, how the communication protocol between the base station and a computer was determined through a reverse engineering process, and how the base station driver was then augmented, using that existing protocol, to provide support for the advanced applications we had designed.

2.1 Understanding the Communication Protocol

In the next subsections we provide an overview of the basic operation of the hardware and a high-level description of how the communication protocols between the remotes and a base station, and between a base station and a PC, work, but not all of the details. A full description of the packet format for the old "black" base station is provided in Appendix A, and Appendix B has a full description of the packet format for the new "white" base station.

Throughout this section, PC refers not only to the physical computer that connects to the base station, but also to the drivers and applications installed on the computer that communicate with the base station, and signifies either a Windows PC or an Apple Macintosh.

2.1.1 Hardware description for a base station

The base station is connected to a PC by a USB cable. The base station has an LCD screen, which can display a total of 32 characters, evenly split into two rows. The base station can be in either "accepting" or "not-accepting" mode. It processes students' votes only when it is in "accepting" mode. However, the instructor's votes can be received in both modes.

At the hardware level, the base station is always receiving transmissions from every remote, but the firmware ignores any but those from the instructor's remote unless it is in "accepting" mode. The base station "knows" which remote belongs to the instructor because software running on the PC establishes this as part of the initialization sequence when it connects to the base station.

2.1.2 Hardware description for a remote

A remote has six keys and three lights. Among the six keys, the first five are vote choices (A to E), and the last one is used to turn power on and off for the remote. The last key also has a special role in setting the channel frequency for a remote, which will be described later. All lights are off when the remote is turned off. When it is turned on, different lights use different colors to indicate different statuses.

The first light is labeled "POWER". It lights up in blue when the remote is turned on. The function of this light when a user changes the channel frequency is described later in this subsection.

The second light on a remote is labeled "LOW BATTERY". Usually it is off; it blinks red to indicate a low battery.

The third light on a remote is labeled "VOTE STATUS". It indicates whether a vote has been successfully received by the base station. Once the base station receives the vote, it sends an acknowledgement back. A long green blink of the third light indicates that the acknowledgement from the base station was received. Otherwise the user will see four short blinks in red, indicating that no acknowledgement was received. The third light is also used when the user changes the remote's channel frequency, as described later.
Failure to receive an acknowledgement can occur because the base station is not connected to a PC, because the base station and the remote do not share the same channel frequency, because the base station is in "not-accepting" mode, or because there is a collision with another remote that is transmitting at the same time. In all of these cases, no acknowledgement will be generated by the base station and the remote will eventually time out.

A user changes the channel frequency by pressing and holding the power key on the remote for a few seconds until the "POWER" light begins blinking. The user then enters the two-key frequency code corresponding to one of the 16 two-letter combinations of the letters A-D. The "POWER" light keeps blinking in blue until the two keys have been pressed. It stops blinking only if the new frequency code is successful in sending the remote's ID and a base station sharing the same channel frequency sends an acknowledgement back on the second frequency. Otherwise the "POWER" light keeps blinking. After the first frequency code is entered, the third light lights up and stays orange. After the second frequency code is entered, it blinks once in green if the acknowledgement is received. Otherwise four short blinks in red will be seen.

2.1.3 Communication between Remotes and a Base Station

Communication between remotes and a base station takes place wirelessly. Each remote has a unique 32-bit remote ID that is comprised of a unique 24-bit ID followed by an 8-bit checksum that is the exclusive OR of the three 8-bit bytes that make up the 24-bit ID. For a remote whose ID is 0x058549C9, the checksum, which is represented by the last two characters 0xC9, is the exclusive OR of 0x05, 0x85 and 0x49, namely 0xC9 = 0x05 ⊕ 0x85 ⊕ 0x49. Early versions of the i>clicker system used only the low-order 21 bits of the 24-bit ID, but with the same 8-bit checksum calculation.
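To make the ID format concrete, the following minimal Java sketch (the class and method names are ours, not part of the driver) computes the 8-bit checksum from the three high-order bytes of a remote ID and reproduces the example above.

    public class RemoteIdChecksum {

        // XOR the three bytes of the 24-bit ID to get the low-order checksum byte.
        static int checksum(int id24) {
            int b0 = (id24 >> 16) & 0xFF;   // e.g. 0x05
            int b1 = (id24 >> 8) & 0xFF;    // e.g. 0x85
            int b2 = id24 & 0xFF;           // e.g. 0x49
            return b0 ^ b1 ^ b2;
        }

        public static void main(String[] args) {
            int id24 = 0x058549;                       // first 24 bits of the ID on the remote
            int fullId = (id24 << 8) | checksum(id24);
            System.out.printf("24-bit ID 0x%06X -> 32-bit ID 0x%08X%n", id24, fullId);
            // Prints 0x058549C9, matching the example in the text.
        }
    }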
The 32-bit ID is written in hexadecimal on a sticker that is attached to the back of the remote, and also on a second sticker that is inside the remote. The inner sticker is only accessible by removing the screws from the casing of the remote. Normally this is not done by a user, but for many remotes in the earlier shipments that were sold at the UBC Bookstore, the ID numbers on the outer stickers had worn off, so it was helpful that there is an inner sticker that also carries this information.

Even though the 32-bit remote ID itself could support error detection, error detection is never carried out during the transmission process. When a remote sends a vote to the base station, it includes the first 24 bits of the ID and the key pressed, but not the checksum. The base station on the receiving side recovers the 32-bit ID by carrying out the exclusive OR operation on the 24-bit ID it received and appending the result as the final 8 bits of a 32-bit ID. Thus, the base station has no way to determine whether the 24-bit ID is correct. Similarly, in the USB communication between the base station and the PC, only the first 24 bits of the ID and the key pressed are transmitted, so there is no way to detect errors during the wired transmission either.

The exclusive OR operation can help detect errors in some cases, but not always. For example, both 0x058549 and 0x058448 produce the same checksum 0xC9, so an error that turns one of these IDs into the other cannot even be detected. Moreover, if the base station receives a 32-bit ID 0x058349C9 and finds that the checksum computed from the first 24 bits is 0xCF instead of 0xC9, it cannot tell which bits are incorrect, because more than one 24-bit ID can generate the same checksum, as is the case for 0x058549 and 0x05834F. It is also possible that the checksum itself is contaminated by one or more errors. These are well-known deficiencies of exclusive OR when used as a checksum. More robust error detection and error correction codes could be employed [21][38][26], but this would require a firmware change to the base station and the remotes, as well as a software change on the PC, because the current communication protocol only allows the base station to transmit the first 24 bits of a remote's ID, not the 8 bits of the checksum.

Communication is asymmetric. Remotes send information to the base station in parallel with each other, using a single shared frequency. This means there is a possibility of collision when more than one remote transmits at the same time. The base station serially broadcasts its acknowledgements on a different frequency, so there is no possibility of collision for acknowledgements being sent to a remote.

We were not able to determine what happens if there is a collision, but presumably the base station receives a corrupted packet and most probably sends an acknowledgement that does not correspond to any of the remote IDs whose transmissions were part of the collision, resulting in all of those remotes timing out.

Based on the scant evidence we collected about how long it takes for a large number of close-to-simultaneous votes to be collected, we suspect that when a remote times out it tries to re-transmit some number of times before finally indicating an error if it is not successful, and that each time it re-transmits it first waits an increasingly longer time, somewhat like the "exponential backoff" protocol used in Ethernet [15] and various other communication schemes.

There are 16 pairs of frequencies that are used by the base station. Each pair is identified by two letters in the range A-D. During the change of channel frequency, after two keys are pressed, the remote uses the two keys to determine the pair of frequencies it will use to transmit to and receive from the base station. The remote then sends its ID to the base station using the selected transmit frequency and listens for an acknowledgement on the receiving frequency.

Efforts were made to understand how the wireless communication between a remote and a base station works. Previous work [23] managed to get the firmware from the remote, and we followed the same approach. After disassembling the remote, we found that the AVR in-system programming port was exposed on the printed circuit board (PCB) in six pins, which allowed us to read and reflash the memory. An AVRISP mkII [1] was used to dump the program memory content to a file. Atmel Studio 6 [8] was then used to convert the binary code to hex code and decompile that to assembly code. We were only able to determine that when a key is pressed, different values are stored in register R16 indicating which key is pressed (0x02, 0x04, 0x08, 0x20, 0x10 for A-E, respectively, and 0x01 for the power key). Presumably each value corresponds to an input pin for a key.
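For reference, the sketch below simply restates the observed key-code values as a decode routine; how the firmware itself uses these values is unknown, so the mapping into a Java method is our own illustration.

    public class RemoteKeyCodes {

        // Values observed in register R16 of the remote firmware when a key is pressed.
        static char decodeKey(int code) {
            switch (code) {
                case 0x02: return 'A';
                case 0x04: return 'B';
                case 0x08: return 'C';
                case 0x20: return 'D';
                case 0x10: return 'E';
                case 0x01: return 'P';   // power key
                default:   return '?';   // unknown or no key
            }
        }

        public static void main(String[] args) {
            System.out.println(decodeKey(0x08));   // prints C
        }
    }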
At some other point during execution, the first three bytes of the remote ID are stored in registers R16, R17 and R18, but we do not know how this information is combined and transmitted to the base station because we were not able to fully understand the assembly-level code in the firmware.

Despite not fully understanding the communication protocol between the remotes and the base station, we obtained enough information through this reverse engineering exercise to proceed with our research. Figuring out the protocol between the base station and the PC was easier because we had a USB sniffer, but that protocol was more complicated, so there was more to figure out.

2.1.4 Communication between the Base Station and a PC

The communication protocol between a computer and a base station was obtained by reverse engineering. That is, we ran the i>clicker software in the way it is normally used in a classroom, and at the same time we ran USB traffic sniffer software on the same computer to capture all of the incoming and outgoing packets exchanged between the base station and the computer. Whenever any event happened, such as the instructor starting or stopping a voting session or a student contributing a new vote, the corresponding packets were recorded and analyzed.

Both the new "white" i>clicker base station and the old "black" i>clicker base station hardware were used in the reverse engineering process. Only the protocol for the new "white" base station is presented in this chapter, but both were fully analyzed as part of the research, and the augmented drivers that were later developed are compatible with both types of base station.

Version 6.1 of the i>clicker software was used throughout the reverse engineering process for the new "white" base station, which had firmware version 4.5. For the old "black" base station, the i>clicker software was version 5.4.5 and the firmware version was 2.3. The "sniffing" software USBlyzer [14], version 1.4, was used to capture all packets. Other types of USB analyzers could have been used, but this software-only solution met our needs and avoided the need to purchase additional hardware.

All packets sent between the base station and the PC have 64 bytes. On Windows machines, however, when a PC sends a packet to the base station, one extra byte with a value of 0x00 is added at the beginning of the packet, making the packet 65 bytes long.

The PC has full control of when to initialize the base station. It specifies the instructor's remote ID, what pair of channel frequencies to use, when to start and stop voting, and what to display on the base station LCD screen. These are determined by commands that the PC sends to the base station. The base station simply interprets and carries out the commands.

To connect to a base station, a USB cable is attached to the USB port on the base station and to one of the USB ports on the PC, or to a USB hub that is connected to the PC. The PC is then responsible for specifying the instructor's remote ID so that the base station can receive the instructor's vote even in "not-accepting" mode. Until this is done, the base station will only accept commands from the PC via the USB connection; it will not accept votes from the instructor's remote because it has not been provided with the proper ID, so it cannot tell which votes come from the instructor.
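As an illustration of the packet-size convention described above (64-byte packets, with an extra leading 0x00 byte on Windows), here is a small Java sketch of how an outgoing command might be padded before being written to the device. The HidDevice interface is a stand-in defined for the sketch, not the actual javahidapi API.

    import java.util.Arrays;

    public class PacketWriter {

        // Stand-in for whatever HID library actually writes to the base station.
        interface HidDevice {
            void write(byte[] data);
        }

        static final int PACKET_SIZE = 64;

        // Pad a command to 64 bytes; on Windows, prepend the extra 0x00 byte (65 bytes total).
        static void sendCommand(HidDevice device, byte[] command, boolean isWindows) {
            byte[] packet = Arrays.copyOf(command, PACKET_SIZE);
            if (isWindows) {
                byte[] withPrefix = new byte[PACKET_SIZE + 1];
                withPrefix[0] = 0x00;
                System.arraycopy(packet, 0, withPrefix, 1, PACKET_SIZE);
                packet = withPrefix;
            }
            device.write(packet);
        }
    }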
2.1.5 Format and Protocol for Votes

All votes share the same format, no matter whether they come from the instructor's remote or from students' remotes. The base station is told the ID of the instructor's remote by the PC, and it treats that remote as special. The base station always receives and processes the votes that come wirelessly from every remote, no matter whether it is in "accepting" or "not-accepting" mode. However, it sends all votes on to the PC only when it is in "accepting" mode; in "not-accepting" mode it discards everything it sees except for votes from the instructor's remote.

When voting is turned on, it is the PC that actually separates students' votes from the instructor's votes, and it is up to the PC to decide how to interpret the different choices implied by the instructor's votes and by students' votes. The vendor-supplied software has a fixed set of interpretations for votes from the instructor that it recognizes. The firmware in the base station does not know these interpretations. It passes the instructor's votes on to the PC, which interprets them and sends back appropriate commands to the base station.

Votes from students are also uninterpreted by the base station, and by the PC as well. They are simply logged by the i>clicker software for subsequent analysis, although the software does provide capabilities to display the number of votes or histograms of counts of the five possible vote values that have been received during the current voting session, as well as some other summary information about aggregate votes. But the software does not do anything special that depends on the value of a particular student vote. Any distinctions are made off-line by evaluation or grading software that processes text files in .csv format produced by the i>clicker software. These record some aspects of the votes that were received, as well as timestamps and other statistical information such as the number of votes cast by each remote.

Starting voting moves the base station from "not-accepting" mode to "accepting" mode.

Once the base station receives a new vote, it sends a packet to the PC. The packet includes the key that was pressed (A-E), the first six characters of the hexadecimal remote ID, and the index of the vote in a 256-entry circular buffer (so indices range from 0x00 to 0xFF and then go back to 0x00 again). Packets stay in the PC's I/O memory buffer, waiting to be read by programs running on the PC.

Stopping voting moves the base station to "not-accepting" mode. Additionally, the base station sends a summary packet that includes information such as how many votes have been collected in total, the instructor's remote ID, and other status information.

2.1.6 Controlling the LCD Display on the Base Station

The PC decides what to show on the base station's 32-character LCD screen, and when to update the content displayed. The base station displays the character string it receives from the PC. The vendor-supplied i>clicker software on the PC updates the LCD screen every second by sending the appropriate command and data.
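To illustrate the display constraint just described (32 characters, evenly split into two rows of 16), here is a small sketch that trims or pads two lines of text before they would be sent to the base station; the class and method names are ours, not part of the vendor protocol.

    public class LcdText {

        static final int LINE_LENGTH = 16;   // 32 characters total, split evenly over two rows

        // Trim or right-pad a line so that exactly 16 characters are sent for each LCD row.
        static String fitToLine(String text) {
            if (text.length() > LINE_LENGTH) {
                return text.substring(0, LINE_LENGTH);
            }
            return String.format("%-" + LINE_LENGTH + "s", text);
        }

        public static void main(String[] args) {
            System.out.println("[" + fitToLine("Voting open") + "]");
            System.out.println("[" + fitToLine("Votes: 87") + "]");
        }
    }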
2.1.7 Summary of Commands for the Base Station

In Appendix B we provide a full list of all the packet types sent to and from the base station. Here we provide only a partial description. For ease of description, we classify packets according to both their roles and their associated tasks. There are three types of roles. All the packets sent from the PC to the base station are called PC Command (PCC) packets. Acknowledgement packets sent by the base station are called Base Station Acknowledgement (BSA) packets. All non-acknowledgement packets sent by the base station are in the third group, Base Station Response (BSR) packets. BSR packets usually include some useful information, for example information about the base station itself, votes received, etc. In addition, each packet is sent to complete one of the following five tasks: initializing the base station, starting voting, requesting votes, stopping voting, and updating the LCD display on the base station.

Because the number of commands is large, the reader is referred to the appendices for details. Here, for the purpose of demonstration, we show the packet that is transmitted while voting is going on (packet type BSR Y in Appendix B; it is a BSR packet, and it belongs to the requesting-votes task).

    Byte 0       Byte 1       Byte 2       Byte 3
    0x02         0x13         Choice1      ClickerID1
    ClickerID1   VoteIndex1   0x00
    ... (6 rows omitted, all 0x00) ...
    0x02         0x13         Choice2      ClickerID2
    ClickerID2   VoteIndex2   0x00
    ... (6 rows omitted, all 0x00) ...

Table 2.1: A BSR Y packet sent by the base station.

As long as the base station has received new votes that have not yet been transmitted, it sends a packet in the above format to the PC. One packet contains two votes, the current vote and the previous vote (except for the first packet, which contains only one vote). Each vote contains an index value (indicated by the fields VoteIndex1 and VoteIndex2 in the table, respectively), which starts incrementing from 0x00 for each incoming vote and wraps back to 0x00 when it reaches 0xFF, so that 0xFF is considered to be smaller than 0x00 even though numerically it is larger. The vote with the larger index value (in the circular sense just described) is the current vote, while the one with the smaller index value (in the circular sense) is the previous vote. Choice1 and Choice2 are the respective choices for the two votes, and ClickerID1 and ClickerID2 are the first six hexadecimal digits of the IDs of the remotes that sent the two votes.
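The sketch below decodes the two vote slots of such a packet and uses the circular comparison just described to decide which of the two is the newer vote. The byte offsets are our reading of Table 2.1 (not a published specification), and the class itself is illustrative rather than code from the driver.

    public class BsrYVotePacket {

        // Assumed layout: each vote occupies one half of the 64-byte packet, with the
        // choice byte at offset 2, the three remote-ID bytes at offsets 3-5, and the
        // vote index at offset 6 within its half.
        static final int VOTE_BLOCK = 32;

        static int choice(byte[] packet, int slot) {
            return packet[slot * VOTE_BLOCK + 2] & 0xFF;   // raw choice byte
        }

        static String remoteId(byte[] packet, int slot) {
            int base = slot * VOTE_BLOCK + 3;
            return String.format("%02X%02X%02X",
                    packet[base] & 0xFF, packet[base + 1] & 0xFF, packet[base + 2] & 0xFF);
        }

        static int voteIndex(byte[] packet, int slot) {
            return packet[slot * VOTE_BLOCK + 6] & 0xFF;
        }

        // Circular comparison over the 256-entry index: 0x00 follows 0xFF, so index a is
        // treated as newer than index b when it is between 1 and 127 steps ahead of b.
        static boolean isNewer(int a, int b) {
            int diff = (a - b) & 0xFF;
            return diff != 0 && diff < 128;
        }
    }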
2.2 Base Station Driver Development

With the full communication protocols at hand, it was straightforward to write driver code to manipulate the base station. For the purpose of modularization, a stand-alone driver was developed. The source code for the driver can also be used as a building block for any application that utilizes i>clicker hardware, either by re-using some of its components or by attaching to a process that runs it through a socket connection.

Two platforms were considered, Microsoft Windows and Mac OS. These are the dominant systems used in classrooms nowadays. Java was chosen to build the driver (as well as the other applications in this project) because of its cross-platform capability. We believe that our software could easily be adapted to a Linux platform if there is a need to do so.

We used the HIDAPI [9] library because it allows an application to easily interface with USB and Bluetooth HID-class devices. The javahidapi [10] package is a JNI wrapper for the C/C++ HIDAPI library that provides a Java interface to work with these devices. Using javahidapi, a driver for the i>clicker base station was developed that works on both the Windows and Mac OS platforms.

Although the packets being sent have different roles, each packet aims at completing one of the five tasks stated previously: initializing the base station, starting voting, requesting votes, stopping voting, and updating the LCD display on the base station. This provides a fairly clean interface for client applications. In Java, it is a class with one constructor and five additional public methods. The implementation of each method includes constructing packets and sending them to the device, receiving acknowledgment packets if there are any, and receiving response packets and parsing their content if there are any. Some commands sent to the base station receive no packets back; some receive just an acknowledgement packet, and others receive just a response packet. Some commands receive both an acknowledgement packet and then a response packet.

After sending a packet to the base station, the driver has to wait for the response or acknowledgment packet's arrival before trying to read it. However, the lag from a command being sent to the base station to the moment when the acknowledgment or response packet arrives back at the PC differs greatly for different commands. This was determined by experimenting with the vendor-provided software.

The exact lag values were obtained by running the i>clicker software and the USBlyzer application at the same time, recording the timestamp when a certain packet was sent and when its acknowledgment or response packet(s) arrived, and then computing the differences between the timestamps. This process was repeated several times for each command sent by the PC and the results were averaged. For several commands, the lag values were quite large (longer than one second). To make the driver run more efficiently, smaller values were tried and tested, but this unfortunately led to non-functional code, so we reverted to using the larger values.

Once appropriate lags were determined for each command type, the driver software was ready to be used to develop applications for enhanced i>clicker usage in the classroom.
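To make the shape of the driver's interface concrete, here is a sketch of what the one-constructor-plus-five-methods structure described above might look like. The class name, method names and parameter choices are our own illustration, not the actual names used in the driver.

    import java.util.Collections;
    import java.util.List;

    // Illustrative skeleton only: one constructor plus one public method per base station task.
    public class BaseStationDriver {

        public BaseStationDriver(String frequencyCode, String instructorRemoteId) {
            // Open the HID device and remember the channel pair and the instructor's remote ID.
        }

        // Send the initialization sequence (frequency, instructor's remote ID, firmware query).
        public void initialize() { }

        // Put the base station into "accepting" mode.
        public void startVoting() { }

        // Ask the base station for any votes received since the last request.
        public List<String> requestVotes() {
            return Collections.emptyList();
        }

        // Put the base station back into "not-accepting" mode and read the session summary.
        public void stopVoting() { }

        // Display two 16-character lines on the base station's LCD.
        public void updateLcd(String line1, String line2) { }
    }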
Chapter 3
WebClicker: From Remotes to the Cloud

The base station driver described in the previous chapter only supports voting via regular clicker remotes. However, the pervasiveness of personal computing devices provides a new opportunity for classroom interaction and collaboration. Nowadays, most students own at least one of the following personal devices: a mobile phone (the majority of which are smart phones), a laptop, or a tablet. These devices are all very portable and are commonly brought to class. They provide significantly more powerful computational capabilities, as well as interactivity, compared to most i>clicker classroom remotes, which only have five keys. Here, we focus on the development of WebClicker, a cloud-based system that supports classroom activity via personal devices such as those just listed. This chapter describes the structure of WebClicker and demonstrates how it can be used to provide input for various client applications.

3.1 Client Devices Support

To integrate personal devices into the classroom, they need to be able to communicate over a network with the software that is being used by an instructor. The wireless network at UBC seemed at first to be a good choice; however, due to security constraints, point-to-point connections through Wi-Fi have been disabled (connection requests are filtered by the campus network software and are not forwarded to clients that are connected via Wi-Fi). Therefore, an alternative communication mechanism was needed. Our solution was to use a web server external to the UBC network that could be set up to process all the incoming votes and feed the results back to Clicker++ (Chapter 4), or any other client application that runs on an instructor's computer, such as Clic^in (Chapter 5) or Selection Tool (Chapter 8). Through a browser, a web service provides a lightweight approach for students to access the application using various client devices. Using a web service is also useful for the instructor because it centralizes all of the administrative information and vote session history in the cloud, which facilitates data management and retrieval, especially if there are multiple instructors sharing a course.

We developed four different web services to support various devices. Remote Service uploads votes received by the base station to the cloud. Laptop Service runs a website that is designed for laptop users, while Mobile Service is built for smart phone and tablet users. Finally, SMS Service, which is built on top of Twilio [12], receives SMS messages sent by students and converts them to votes.

3.1.1 Remote Service

Remote Service deals with votes coming from clicker remotes. The basic idea is that after the base station receives a vote and sends it to a PC, a program on the PC then uploads the vote to the cloud. However, experimental results showed that uploading each vote immediately when it is received leads to a significant delay, mainly because each individual upload is expensive. A workaround is to buffer the votes and send them together at regular intervals.

The implementation of Remote Service uses the base station driver described in Chapter 2 to receive votes within a classroom setting and then forwards them to the cloud-based WebClicker server using standard networking protocols. Connections are established by the Remote Service client contacting the WebClicker server, which is seen as an outgoing connection request by the local Wi-Fi network, so the connections are allowed to proceed. The additional software for Remote Service is straightforward. It uses the API provided by the base station driver to monitor the local classroom remotes, and it uses the API of the WebClicker server to forward votes into the cloud where they are processed.

Only instructors have to use the Remote Service application, which they can download from the WebClicker website. Once installed, it operates like an extended version of the base station driver, with the actual management of classroom activities, such as voting, controlled by cloud-based software accessed using a normal web browser.

3.1.2 Laptop Service

Laptop Service allows students to vote during a class by connecting to a voting web page. It also provides support for registering in classes within the service and for managing student devices (e.g., multiple clicker remotes or cell phones). An instructor can upload course registration information so that only students who take a course can successfully register with the service.

The voting page contains an image of an i>clicker remote, with five keys to click (see Figure 3.1). Sound (which can be turned off) and LED light effects are added so that the voting page mimics a real clicker remote when it is used.
3.1.2 Laptop Service

Laptop Service allows students to vote during a class by connecting to a voting web page. It also provides support for registering in classes within the service and for managing student devices (e.g., multiple clicker remotes or cell phones). An instructor can upload course registration information so that only students who take a course can successfully register with the service.

The voting page contains an image of an i>clicker remote, with five keys to click (see Figure 3.1). Sound (which can be turned off) and LED light effects are added so that the voting page mimics a real clicker remote when it is used. Feedback is provided every time a student clicks a key on the simulated remote. This shows whether the student's click was successfully received, what the choice was, and the timestamp for the vote.

There is also a module that helps students manage their devices. Students can add multiple remotes by specifying the remote ID for any additional remotes they might use, so that they need not remember which remote is registered for the class. They can also add a cell phone by specifying its phone number, enabling them to vote by sending text messages (via the SMS Service, described later).

Only students use the Laptop Service. It is accessed using a normal web browser through a URL provided by the instructor.

Figure 3.1: Laptop Service interface.

3.1.3 Mobile Service

Mobile Service is the mobile interface to the web service. It supports all of the functionality provided by Laptop Service, but presents an interface that fits the dimensions of mobile devices through the use of Kurogo [11], an open-source, mobile-optimized middleware for developing mobile websites. Kurogo's power lies in its ability to format the layout of the content based on the type of device initiating the web request. It also has many built-in features that are standard for today's mobile applications, such as database access, user access and authentication, session management, system logging, and administrator management. All of these features eased the development and management of the Mobile Service.

Students can access the Mobile Service with a browser application, which is installed on a wide variety of smart phones and tablets.

3.1.4 SMS Service

We realized that not everyone owns a smart phone. Support is therefore provided to regular cell phone users through SMS Service, which is based on Twilio, a platform that enables phones and messaging to be embedded into web applications. By applying for a local cell phone number and associating it with an SMS request URL, anyone who sends a message to this number initiates an HTTP request to the specified URL, which in our case is the WebClicker server. The request is processed by a Java Servlet on the WebClicker server.

Students only need to know the phone number to which they send their vote message. In addition, they need to register their own cell phone number with the WebClicker server prior to voting, via either the Laptop Service or the Mobile Service. To submit a vote, a student specifies the phone number, provides the choice (one character long, with the value being one of 'A' to 'E', case insensitive), and then confirms by pressing a button such as "OK" or "Send". The SMS Service takes more steps to submit a vote than a regular remote, the Laptop Service, or the Mobile Service, but it does permit a regular cell phone owner to participate in classroom activities without having to use a more specialized device. The process takes less time if the phone number is saved as a contact so that students do not need to specify it every time they vote.
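The following sketch shows how an incoming Twilio request could be turned into a vote. The servlet path and the VoteStore stand-in are illustrative assumptions rather than the actual WebClicker code; "From" and "Body" are the request parameters Twilio uses for the sender's number and the message text, and the empty Response body tells Twilio that no reply message should be sent.

```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch of the SMS entry point on the WebClicker server.
@WebServlet("/sms")
public class SmsVoteServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String phoneNumber = req.getParameter("From");   // sender's registered cell phone number
        String text = req.getParameter("Body");          // the message the student sent
        if (text != null) {
            String choice = text.trim().toUpperCase();   // votes are case insensitive
            if (choice.matches("[A-E]")) {
                // Map the phone number to a student and log the vote with service code 'S'.
                VoteStore.recordSmsVote(phoneNumber, choice.charAt(0), System.currentTimeMillis());
            }
        }
        resp.setContentType("application/xml");
        resp.getWriter().write("<Response></Response>");
    }

    /** Illustrative stand-in for the database layer. */
    static class VoteStore {
        static void recordSmsVote(String phone, char choice, long timestamp) {
            // look up the user ID registered for this phone number and insert a vote record
        }
    }
}
```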
3.2 Managing the Vote Flow in the Cloud

Amazon Web Services [7] was chosen to host WebClicker because it is free and the quality provided is good enough for research purposes. The functionality of WebClicker is similar to that of the standard vendor-provided i>clicker application, with extensions to support new features that were added in Clicker++ (Chapter 4) and the other i>clicker applications we developed that use the base station driver.

When a new semester starts, an instructor uploads a list of user IDs for all registered students to WebClicker and tells students to sign up on WebClicker. A user ID can be any unique identifier for a student. The user ID list is provided so that WebClicker only allows students who are registered in the course to sign up. Students sign up on WebClicker using either the Laptop Service or the Mobile Service by providing their user ID and a password, with which they can log on to WebClicker via either service in the future. After logging on to WebClicker, students can add and manage their remote IDs or phone number.

For each course, WebClicker maintains a database that keeps a record of each student's WebClicker registration information: their remote ID and their cell phone number (if any), as well as their user ID if they access the Laptop Service or the Mobile Service.

All of the services listed above access the database when they receive new votes. Votes are logged in the database, in addition to being sent to the client that is running. In the database a vote record contains the following information: user ID, timestamp when the vote was received, choice, service code (R for votes received via Remote Service, L for Laptop Service, M for Mobile Service, and S for SMS Service), and the device ID if any (phone number for a regular cell phone and remote ID for a clicker remote). Currently, client applications use only the user ID, timestamp, and choice. The service code and the device ID are not used in any of our existing applications, but these fields may be useful for future research studying users' preferences for different devices.

See Figure 3.2 for an overview of the WebClicker system. Votes from any of the four services (Remote, Laptop, Mobile, and SMS) are collected in the cloud database and then fed to clients to which an instructor connects in the classroom. Any client application that was originally designed to use the i>clicker base station driver (Chapter 2) can get the vote flow through WebClicker using the API developed for the driver. In the next few chapters, we describe some of these client applications in detail.

Figure 3.2: Overview of WebClicker architecture.

Chapter 4

Clicker++: Reproducing and Improving the i>clicker Software

Using the base station driver and WebClicker, three client applications were developed to more fully exploit the power of the i>clicker hardware: Clicker++, Clic^in, and the Selection Tool. These applications each improve a different aspect of the classroom interaction and collaboration experience supported by the basic vendor-provided i>clicker software. They can be connected to either the base station driver (Chapter 2) or to WebClicker (Chapter 3) through a network socket, thus maintaining modularity.
The driver or WebClicker simply acts as a vote collector that sends all received votes to the client application for processing. Each vote contains three pieces of information: the remote ID, the key clicked, and a timestamp indicating when the driver received the vote (as mentioned in the previous chapter, WebClicker also provides a service code and a device ID, but these are currently not used by client applications). It is up to the client application to decide, based on its application logic, how to interpret the received votes and what action to perform in response.

This chapter describes the first client application, Clicker++, which is a re-engineered version of the vendor-provided i>clicker software that supports basic classroom voting. It not only fixes some design shortcomings by introducing a new "View by Group" feature, but also adds more powerful visualization components, such as a Participation Bar and Performance Bars, to help both instructors and students evaluate in-class performance.

4.1 Basic Clicker++ Functionality

Clicker++ is intended to be a complete replacement for the i>clicker software. It is programmed in Java, a language that works on many different platforms. The i>clicker vendor provides both an in-classroom application, which is what Clicker++ replaces, and i>grader, an application that takes the output of the i>clicker software and generates documents assessing each student's performance. The interface between the classroom application and i>grader is through standard .csv files. To allow instructors to continue to use i>grader, Clicker++ is designed to generate its .csv output in exactly the same format as that generated by the i>clicker software.

For the most part, Clicker++ works in the same way as the vendor-provided i>clicker software. Instructors can create a new course, modify or delete an existing course, set the pair of frequency channels used by the base station, and specify the ID of the instructor's i>clicker remote. During a classroom session, instructors can start or stop voting using the 'A' key, display a histogram of votes with the 'B' key, and navigate slides by pressing the 'C' key (advance one slide) or the 'D' key (back up one slide). When voting is taking place, the histogram dynamically shows the distribution of students' votes across the five possible answers.

In addition to these common features, several improvements were made based on our observations of students' behavior in classroom settings and instructors' feedback about additional functionality that would be useful.

4.2 Improved Histograms

The histograms in the original software show the current number of votes for each choice. During class, we found that although students' choices may vary widely when voting starts, as time goes by students who voted for a minority choice almost always change their minds and vote for the most popular choice, even though that choice is sometimes incorrect. This phenomenon might be explained by the fact that in-class participation often becomes part of a student's final mark for the course (if an instructor has elected to do this), and students may think that it is always safer to go with the majority.
This strategy holds little obvious benefit for the learning process, and it discourages students from thinking independently.

We realized that this happens because the application always shows the number of supporters for each choice, thus revealing the most popular choice. Instructors can of course turn off the histogram, but then students receive no feedback on the voting process. The problem this raises is that there are currently only two extremes for feedback: none at all, or complete information about how students are collectively voting.

To solve this problem, a new type of histogram visualization was added, which shows votes by groups of students instead of by choice. As shown in Figure 4.1, each vertical bar represents a lab group. The height of each bar indicates the percentage of students who have voted. The solid area within the bar indicates the percentage of students in the group whose choices were correct, and the light area indicates the percentage whose choices were incorrect. In this way, students' performance is visually available without the identity of the most popular choice being given away.

By default, the entire class is assumed to be in the same group, but the software supports multiple groups, such as lab sections or other divisions of the full class.

The extra visual hints provided by the enhanced histograms can help both instructors and students quantitatively assess students' performance. The group-based histogram can also be used to encourage a mild form of competition between lab sections, for example by letting students see how well their section is doing compared to other sections, both in terms of responses (how many have voted) and accuracy (how many have the correct answer).

The group-based nature of the histograms can be used to "fine tune" the degree of competition because individual students can benefit from how well their group-mates do, not just from their own performance. For example, it is possible for the instructor to set the following rules: all students in a lab group will get participation marks only if 75 percent of the students in the group contribute votes, but in order to get a performance mark, 80 percent of those who have voted must have selected the correct answer.

To make these types of rules visually clear during voting, a Participation Bar and Performance Bars were added to the histograms (see Figure 4.1). All groups share the same Participation Bar, which is set to a threshold of 75 percent, as can be seen at the rightmost part of the figure. Only groups that reach the Participation Bar get participation marks. Additionally, each group has its own Performance Bar; these are set to 80 percent (not shown in the figure). All students in a group get performance marks only if the solid part of the group's histogram exceeds the Performance Bar.
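The two rules above amount to simple per-group ratio checks. The sketch below is only an illustration of that logic with the example thresholds of 75 and 80 percent; the class and method names are not the actual Clicker++ code.

```java
// Illustrative sketch of the participation and performance rules described above.
public class GroupMarks {

    // Participation: at least participationThreshold of the group must have voted.
    public static boolean earnsParticipationMark(int groupSize, int votesCast,
                                                 double participationThreshold) {
        return groupSize > 0 && (double) votesCast / groupSize >= participationThreshold;
    }

    // Performance: at least performanceThreshold of those who voted must be correct.
    public static boolean earnsPerformanceMark(int votesCast, int correctVotes,
                                               double performanceThreshold) {
        return votesCast > 0 && (double) correctVotes / votesCast >= performanceThreshold;
    }

    public static void main(String[] args) {
        // A lab group of 20 students: 16 voted (80%), and 13 of those 16 were correct (81.25%).
        System.out.println(earnsParticipationMark(20, 16, 0.75)); // true
        System.out.println(earnsPerformanceMark(16, 13, 0.80));   // true
    }
}
```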
4.3 Specifying Groups

To support the "View by Group" feature, a registration file maintaining the student-group information is required. It can be loaded when the application starts running. Some students own more than one remote and use them interchangeably. If votes were recognized solely by their remote ID, students who vote with two different remotes would have more say. Thus, we need to keep the student-remote mapping as well (which the vendor-provided software also does).

To support all of this, a registration file is created for each course, which contains a list of tuples: student ID (or an equivalent unique identifier if the student ID is confidential or unavailable), remote ID, and lab group. This file allows the system to make use of group information and ensures that votes are associated with students instead of remotes. It is the instructor's responsibility to provide this file. Should the registration file not be available, the application tells different votes apart solely by looking at the remote ID. This backwards compatibility is useful when dealing with cases like seminar talks or conference workshops, where remotes are distributed to the audience, so it is unlikely that someone has two of them. Requiring registration for these short-term, ad hoc voting sessions is rarely worth the effort.

As with the vendor-provided software, instructors must specify the correct answers for each vote if they want to provide real-time feedback about performance. Answers are not required for participation feedback.

Figure 4.1: Histogram by lab group. Bar height represents the percentage of students who have voted; the solid part represents the percentage of those who contributed a correct answer, and the light part represents those whose answers were incorrect. The Participation Bar and per-group Performance Bars are shown as well.

4.4 Classroom Beta Testing

Clicker++ was used in the UBC course CPSC 260, Data Structures and Algorithms for Computer Engineers, during the first term of the 2012-2013 school year. The application handled about two hundred remotes without any problems. No formal evaluation was conducted, but no problems were detected and it is likely that students did not realize that new software was being used (except when the new visualizations were shown).

While we were able to handle the volume of clicks that were coming in, we met with an unanticipated problem while using the system in the lecture hall: light colors in the interface (e.g., yellow, pink, etc.) that rendered nicely on PC screens were washed out when displayed using the projectors in the lecture hall. Researchers who develop applications that rely on projectors and large screens should thoroughly test all of their visualizations on the intended displays to ensure the colors look correct and are distinguishable. Support for alternate color palettes might also be useful.

Chapter 5

Clic^in: Motivation, Architecture and Design

Projected presentation slides are a popular way of displaying instructional material to a class, but they lack interactivity. A student response system (SRS) is a big step beyond the traditional classroom teaching and learning approach in the sense that it provides a way for students to participate in the learning process beyond simply listening to the instructor and asking questions when they are confused. However, what can be done with the current i>clicker system is still very limited, mainly answering multiple-choice questions. Additionally, an SRS and a presentation application run independently in two separate processes, which fails to provide a seamless user experience.
This chapter presents Clic^in, a gamified education application that provides more interactivity, combines presentation slides and interactive modules (called gamelets), and makes better use of extra screen real estate.

5.1 Motivation

The goal of education gamification is to introduce game elements into regular classes to encourage students to try out new concepts and ideas right in class, to help them gain hands-on problem-solving skills, to facilitate collaboration, and to provide real-time feedback about their performance. These were the core ideas we had in mind when developing Clic^in. We hope that introducing gamelets will increase students' sense of in-class participation and help them understand the course material faster and better.

Another motivation behind Clic^in is to better exploit extra screen real estate. Nowadays, more and more classrooms are equipped with two or more projectors and screens. However, in most cases instructors use just one screen, or use more screens by simply duplicating the current slide across all of them. This duplication provides no extra benefit to either the instructor or the students. In contrast, displaying different material on different screens can greatly increase content throughput and reduce unnecessary context switches. Previous work on MultiPresenter [29] showed how displaying different slides on different screens can be beneficial.

Finally, switching between static presentation slides and gamelets in a class might be both time-consuming and disruptive. Clic^in was therefore designed to contain a sequence of activities, each of which can be either a static slide or an interactive gamelet. Combining slides and gamelets in one application helps the instructor present and illustrate course material in a natural and organized way.

Clic^in was first designed and developed in [33]. Modifications were later applied to better support the in-class activities of UBC CPSC 260, Data Structures and Algorithms for Computer Engineering. This is a second-year computer science course targeting students from the Faculty of Applied Science. Two of its main topics are basic data structures, such as linked lists and binary search trees (BSTs), and the associated operations, such as inserting a node into a linked list or a BST. Students are required to understand how to write correct and fast code for these operations. While having the instructor explain the mechanism of these operations is essential, we also wanted to ask students, not to write code directly in class, but to complete gamified exercises that simulate these operations. The i>clicker system is well suited to this because it can get real-time input from a large number of students at the same time. With the help of a large shared display, class-wide participation and group competition also become possible.

5.2 Architecture Overview

Figure 5.1 shows the architecture of Clic^in. The major component of Clic^in is the presentation, which is designed for a specific number of screens and is responsible for making good use of the available screen real estate. It detects the number of screens and determines how to position and resize itself to fit them. In Figure 5.1, two screens are allocated. The presentation is loaded when Clic^in starts running.

The presentation contains an ordered list of scenarios. Each type of scenario is designed to organize content in a different way.
For example, one scenario shows a single gamelet and its associated instructions (top-left scenario in Figure 5.1); another is designed for group competition, where the whole screen is divided into eight equal parts, each showing a gamelet controlled by a group of students (top-right scenario in Figure 5.1); another shows a single gamelet side by side with a histogram of students' choice distribution (bottom-left scenario in Figure 5.1); and another can be used to show two slides (bottom-right scenario in Figure 5.1). Currently a slide can only be a static image, which is loaded when Clic^in starts running. Gamelets are interactive modules. Each type of gamelet is designed to support one type of operation associated with a certain data structure, for example inserting a new node into an existing BST.

The instructor can move forward and backward through the scenarios to select the one to show. When a scenario is selected, all students' votes are directed to it. As stated before, a scenario can contain slides, gamelets, or both. Gamelets contained in the current scenario are called active gamelets. The remaining scenarios are hidden and paused.

5.3 Design Specification

Clic^in is written in Java because of its cross-platform support. Processing [13], a Java-based library for simplified graphics drawing, is used to render all the visual components. Another advantage of using Processing is that it allows instructors to quickly sketch and test a gamelet in a lightweight development environment and later integrate it into Clic^in with very little modification.

An external registration file provides user information, including each user's remote ID, role (Instructor, Student, or Demonstrator), and lab group if the user has the Student role. Every time the application starts, it reads the registration file and loads all users' information.

Users with the Instructor role can control the application but cannot interact with the gamelets. The key-action mappings on the Instructor's remote were designed to be consistent with the ones used in the i>clicker software, thus exploiting positive transfer: 'A' toggles between Start and Stop (students can only interact with an active gamelet when the application is in Start mode); 'B' is currently unused; 'C' goes to the next scenario; 'D' goes back to the previous scenario; and 'E' resets the current scenario (so that students can practice the same gamelet one more time). For remotes owned by users with the Student or Demonstrator role, each key's action depends on the active gamelet.

When a vote reaches the Clic^in application, the presentation first determines the role of its contributor by looking at the remote ID and the registration information. If it is an Instructor, the vote is interpreted immediately, based on the vote's choice. Otherwise the vote comes from a user with the Student or Demonstrator role, and it is passed on to the current scenario if the application is in Start mode. The current scenario is then responsible for processing all the incoming votes.
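The routing logic just described can be sketched as follows. The types and method names are illustrative stand-ins for the actual Clic^in classes, but the key bindings follow the Instructor mapping given above.

```java
import java.util.List;

// Illustrative sketch of how an incoming vote is routed by role and by the
// instructor's key bindings ('A' toggles Start/Stop, 'C'/'D' change scenarios,
// 'E' resets, 'B' is unused).
public class VoteDispatcher {

    enum Role { INSTRUCTOR, STUDENT, DEMONSTRATOR }

    public record Vote(String remoteId, char key, long timestamp) { }

    public interface Registration { Role roleOf(String remoteId); }

    public interface Scenario {
        void reset();
        void process(Vote vote);   // Splitter, Filter, and Thresholder sit behind this call
    }

    private final Registration registration;
    private final List<Scenario> scenarios;
    private int current = 0;
    private boolean started = false;

    public VoteDispatcher(Registration registration, List<Scenario> scenarios) {
        this.registration = registration;
        this.scenarios = scenarios;
    }

    public void onVote(Vote vote) {
        if (registration.roleOf(vote.remoteId()) == Role.INSTRUCTOR) {
            switch (vote.key()) {
                case 'A' -> started = !started;                                    // Start/Stop toggle
                case 'C' -> current = Math.min(current + 1, scenarios.size() - 1); // next scenario
                case 'D' -> current = Math.max(current - 1, 0);                    // previous scenario
                case 'E' -> scenarios.get(current).reset();                        // replay the scenario
                default -> { }                                                     // 'B' is unused
            }
        } else if (started) {
            // Student and Demonstrator votes reach the current scenario only in Start mode.
            scenarios.get(current).process(vote);
        }
    }
}
```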
How the votes are processed, and how the different interaction modes can be achieved with the Splitter, Filter, and Thresholder, is described in the next chapter.

Figure 5.1: Overview of Clic^in architecture.

Chapter 6

Interacting with a Gamelet

As mentioned in the previous chapter, a gamelet is an interactive module that can be contained in a scenario. Each gamelet is designed to support one type of operation associated with a certain data structure, for example inserting a new node into an existing binary search tree (BST). Using the BST gamelet as an example, we continue the description of gamelets in this chapter. We first describe how to interact with a BST gamelet. Then we demonstrate how it can be used to support the different interaction modes (concept demonstration, class-wide participation, and group competition) by making use of the Splitter, Filter, and Thresholder.

6.1 Binary Search Tree Gamelet

Figure 6.1 shows a BST gamelet. The goal of this gamelet is to help students understand how to correctly insert a node into a BST. When a BST gamelet is shown, there are seven existing nodes in the tree (the root and all the nodes at depth 1 and depth 2). Users are required to insert a total of eight nodes into the BST, all at depth 3. The blue node shown at the top-left corner is the target node waiting to be inserted into the tree. The current node, shown in green, is the deepest node in the traversal path. The magenta nodes, along with magenta edges, show the traversal path.

A BST gamelet has five actions, and each of the five keys on the clicker remote maps to one action. As shown in Figure 6.1, the 'A' key selects the root of the tree; the 'B' and 'C' keys visit the left and right child of the current node; and the 'D' and 'E' keys insert the target node as a new left child or a new right child of the current node. The mappings are always explicitly displayed beside the gamelet.

6.2 Interaction Modes

As mentioned in the previous chapter, there are many types of scenarios. Three of them are the most important, because they are tailored to three interaction modes: concept demonstration, class-wide participation, and group competition. These three interaction modes follow the natural progression of how an instructor presents material and how students absorb it in class.

A scenario for concept demonstration has its gamelet labeled with "Demo" at the top-left corner, as shown in Figure 6.1. Only a Demonstrator can interact with such a gamelet. A Demonstrator is usually an instructor or a teaching assistant who explains the concept (what a BST is, in this case) and shows the class how to interact with the gamelet using a remote (how to insert a node into an existing BST based on the mapping provided).

A scenario for class-wide participation has its gamelet labeled with "All" (see Figure 6.2) and only users with the Student role can interact with it. The key-action mapping is displayed beside the gamelet. The whole class contributes to the same gamelet, which makes it easier for the instructor to focus on it and explain throughout the process.

Finally, a scenario for group competition has multiple gamelets embedded, as shown in Figure 6.3. Each gamelet is labeled with the associated lab section and controlled only by students belonging to that lab section. During the game, all gamelets run in parallel.
Different scenarios were developed to deal with competitions involving different numbers of groups so that the gamelets are arranged in a visually appealing way.

Figure 6.1: A gamelet for concept demonstration.

6.3 Splitter, Filter, and Thresholder

In this section we describe the Splitter, Filter, and Thresholder in detail and explain how these components can be used to achieve the different interaction modes mentioned previously. For a scenario that has more than one gamelet, the Splitter duplicates each vote so that each gamelet gets one copy. A Filter, one of which is associated with each gamelet, is responsible for filtering out votes from users who do not have control over that gamelet. Finally, the Thresholder associated with each gamelet maintains a histogram counting how many users have selected each action. As soon as the number of supporters of an action reaches a threshold value, that action is triggered and sent to the corresponding gamelet, and the histogram is reset.

Figure 6.2: A gamelet for class-wide participation.

Figure 6.3: A gamelet for group competition.

A concept demonstration scenario has only one gamelet, so a Splitter is not necessary, as can be seen in Figure 6.4. All votes are sent to the Filter directly. Because only a Demonstrator is allowed to interact with a gamelet embedded in a concept demonstration scenario, all votes contributed by Students are discarded. The Demonstrator's vote then reaches the Thresholder. Because there is usually only one user, either the instructor or the teaching assistant, controlling the gamelet, the threshold value of the Thresholder is set to one. This means every vote from a Demonstrator triggers the corresponding action of the gamelet.

A class-wide participation scenario also has one gamelet, which makes a Splitter unnecessary. Figure 6.5 shows a class-wide participation scenario. When the votes reach the Filter, votes from a Demonstrator are discarded. The remaining votes are used to update the histogram in the Thresholder. In Figure 6.5, three Students have selected action B, one each has selected actions A and D, and none has selected action C or E. Because the threshold value is set to three and the number of Students selecting action B reaches the threshold, action B is triggered in the gamelet.

Finally, we consider the most complex case: a scenario designed for group competition. Figure 6.6 shows a group competition scenario involving three groups: L1A, L1B, and L1C. Since there are three gamelets in this scenario, a Splitter is used to make three copies of every vote, and each copy is sent to the Filter associated with one gamelet. The rest of the process is the same for all groups, so we use group L1A as an example. The Filter for L1A, labeled "L1A Filter" in the figure, filters out all votes that are not contributed by Students from L1A. The remaining votes arrive at the Thresholder, whose threshold value is set to 2. Since two Students select action B, action B is triggered in the gamelet controlled by group L1A. No action is triggered in the other gamelets because no action has enough supporters in either of the other two groups.
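A minimal sketch of the Thresholder behavior is given below; the class and method names are illustrative rather than the actual Clic^in implementation. A threshold of 1 reproduces the concept demonstration behavior, while larger values give the class-wide and group-competition behavior described above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch: tally the votes for each action and trigger an action on the
// gamelet once its tally reaches the threshold, then reset the histogram.
public class Thresholder {

    private final int threshold;                       // 1 for Demo; higher for All or group scenarios
    private final Consumer<Character> gameletAction;   // callback that applies an action ('A'-'E')
    private final Map<Character, Integer> histogram = new HashMap<>();

    public Thresholder(int threshold, Consumer<Character> gameletAction) {
        this.threshold = threshold;
        this.gameletAction = gameletAction;
    }

    /** Called by the Filter with each vote that survives filtering. */
    public void accept(char choice) {
        int count = histogram.merge(choice, 1, Integer::sum);
        if (count >= threshold) {
            gameletAction.accept(choice);  // trigger the action on the associated gamelet
            histogram.clear();             // reset the counts for the next action
        }
    }
}
```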
6.4 Examples of Other Gamelets

The BST gamelet was used as an example throughout this chapter to explain what a gamelet is and how it can be used to support different interaction modes using Splitters, Filters, and Thresholders. Other gamelets were also developed to cover more data structures. In this section we present the linked list gamelet to demonstrate how the process of linked list insertion is simulated.

The linked list gamelet presents a target node and an existing linked list whose nodes are sorted in alphabetical order, as can be seen in Figure 6.7. Students are required to insert the target node into the existing linked list so that the new linked list still maintains alphabetical order. In this example, the node "wow" should be inserted between "foo" and "zap".

The key-action mapping for the linked list gamelet is as follows. The 'A' key selects the head of the linked list as the current node. The 'B' key is unused. The 'C' key moves the current node one step forward. The 'D' key redirects the pointer of the current node to the target node, while the 'E' key does the opposite, redirecting the pointer of the target node to the current node.

Figure 6.4: Lifetime of a vote in a concept demonstration scenario. The threshold value is 1. The choice of each vote is displayed. The grey vote comes from a Demonstrator; yellow, blue, and green votes come from Students who belong to Lab A, Lab B, and Lab C respectively.

Figure 6.5: Lifetime of a vote in a class-wide participation scenario. The threshold value is 3.

Figure 6.6: Lifetime of a vote in a group competition scenario. The threshold value is 2.

We demonstrate how a Demonstrator completes a linked list insertion using a clicker remote in Figure 6.8. The process always starts from the head by pressing the 'A' key, as shown in (a), where the head of the linked list is highlighted in red. The current node then moves forward with three consecutive presses of the 'C' key until the current node is alphabetically larger than the target node ("zap" in this example), as shown in (b), (c), and (d). The Demonstrator then presses the 'E' key so that "wow" points to "zap", as can be seen in (e). However, the node "foo" does not yet point to "wow". So the Demonstrator goes back to the head again (shown in (f)), traverses the linked list until reaching "foo" (shown in (g) and (h)), and presses a final 'D' key so that "foo" points to "wow".

We have now explored the details of gamelets from a software engineering point of view. However, we did not yet know whether interacting with a gamelet through a clicker remote imposes additional cognitive load and, if so, how large that load is compared to other input devices. The next chapter describes an experiment measuring the cognitive load of interacting with a gamelet. Although we refer to gamelets in general, the BST gamelet is used in the experiment. The experiment helps us better understand and evaluate the design of a gamelet from an HCI perspective.

Figure 6.7: A linked list gamelet.

Figure 6.8: Procedure of linked list insertion.

Chapter 7

Cognitive Load in Gamelets

In Chapter 5, we presented Clic^in, a presentation framework that supports both static slides and interactive gamelets. In Chapter 6, we described how gamelets are designed and used to increase classroom interactivity by engaging students in game-like activities, and we showed the example of a gamelet that illustrates insertion into a binary search tree (BST). A critical aspect of the design of any gamelet is the mapping from clicker keys to actions in the gamelet.
As with any interactive system, users (students in the classroom) need to know the mapping in order to use the system. This can increase the cognitive load on users if the mapping is not carefully chosen.

Traditionally, clickers are mainly used to answer multiple-choice questions. While a multiple-choice question and a gamelet are similar in the way users perform interactions, namely choosing the key that represents either the correct answer (when completing a multiple-choice question) or the correct operation (when interacting with a gamelet), the reasoning processes are fundamentally different. There is only one format for multiple-choice questions, namely one question with multiple answers, no matter what the question is about, and the mapping from answers to keys is always the same because each answer is identified by a key, as shown in Figure 7.1. Students are already familiar with this style of answering because they have taken multiple-choice exams that use a very similar format. This is not the case for gamelets. Gamelets can be very different from each other, both in logic and in representation. Users have to first understand the state of a gamelet and then determine what action to take depending on that state. Moreover, the actions in a gamelet do not map to the keys on the remote in a semantically meaningful way. The mapping is often quite arbitrary and thus must be learned for each gamelet.

We suspected that the need to learn, or re-learn, the key mapping for each gamelet might lead to extra cognitive load. This chapter describes an experiment that investigated the cognitive load for two different ways of representing the key mapping for the BST gamelet. A follow-up study is also presented to help us understand how participants' performance changed with practice using both representations.

Figure 7.1: A slide showing a multiple-choice question.

7.1 Background

The cognitive load of determining the mapping between keys and their actions when playing a gamelet using a remote arises from the need to associate a desired outcome with a specific action and then determine the key to press to achieve that action. This is well explained by Norman's notions of the Gulf of Execution and the Gulf of Evaluation [34]. We examine the level of cognitive load imposed by the key mapping using the BST gamelet that was described in Chapter 6.

In the previous chapter, a simple solution was proposed to reduce the cognitive load: the key mapping is explicitly displayed as part of the instructions associated with a gamelet, as shown in Figure 7.2 (a). During the game, while concentrating on the gamelet most of the time, users can occasionally refer to the instructions to figure out which key to press for a certain action, in case they are not clear about the mapping. The cost of this split attention was not clear to us; it is one of the research questions that needed to be answered. Another question is whether the different key correspondences in different gamelets also add cognitive load by requiring users to remember a new mapping for each gamelet. To address the second question, we considered using a color-key mapping, which is consistent across all gamelets.

Because it is difficult to measure cognitive load directly, we compared performance using a remote against performance using a mouse, which does not require a key mapping and has no split attention.
We also compared the remote against verbal commands, which do not require a key mapping but do involve split attention. Using a remote with the traditional interface requires a key mapping and also results in split attention. Using a remote with the color-coded interface does not result in split attention, yet it requires mapping from a color to a key instead of showing the key explicitly on the screen. While there are many indicators of cognitive load, we used task completion time and error rate in this experiment.

7.1.1 Design of Interface

Traditional Interface

The traditional interface is exactly the same as the one presented in the previous chapter, as shown in Figure 7.2 (a).

Color-coded Interface

The advantage of using color to convey the mapping is two-fold. First, introducing color into a gamelet does not change its geometric layout. Second, color is already used to indicate different keys in Clicker++: red for the 'A' key, green for the 'B' key, blue for the 'C' key, yellow for the 'D' key, and purple for the 'E' key. Once appropriately established, the color-key mapping is stable in the long run. Although color might be more effective when the keys on a remote are also color-coded (by either painting the keys different colors or attaching colored stickers), we continued to use the remote shown in Figure 1.1 in this experiment. The color-key mapping used for the color-coded interface is the same as the one used in Clicker++.

The result of applying colors to a BST gamelet is shown in Figure 7.2 (b). All edges connecting a left child are green. Because green corresponds to the 'B' key, this tells a user to press the 'B' key in order to go to the left child. All edges connecting a right child are blue. When a user reaches a node that has fewer than two children, edges to the missing children are displayed, yellow for a left child and purple for a right child, indicating that the user needs to press the 'D' key or the 'E' key to insert the target node as a new left child or a new right child. Finally, the red edge connecting the target node and the root tells the user to press the 'A' key to select the root.

For some gamelets, redesigning the interface around color might be problematic because the color channel is already used to convey other information. As mentioned in the previous chapter, in a traditional BST gamelet (see Figure 7.2 (a)), color indicates the role of a node: a blue node is the target node, a green node is the current node, and the purple nodes are those that have been visited along the traversal path. Now that the color channel is used to convey which key to press, we need another way to represent the information that was previously denoted by color, especially which node is the current node. We solved this problem by enlarging the current node so that it becomes visually dominant, as with node M in Figure 7.2 (b). Finally, all nodes that cannot be reached from the current node become grey, which helps users focus on the part of the tree they are working on.

Figure 7.2: Graphical interface of a BST gamelet: (a) traditional interface; (b) color-coded interface; (c) interface when using a mouse.

Mouse Interface

The mouse interface for the BST gamelet was adapted from the traditional interface. It supports interaction with a mouse by clicking inside the nodes (see Figure 7.2 (c)).
To be more specific, clicking on the root means going to the root. Clicking on the left child of the current node means going to the left child, and clicking on the right child means going to the right child. To insert a node, because the spot is empty before insertion, an empty rectangle is displayed to indicate the area to click (see the bottom rows of the BST in Figure 7.2 (c)). Clicking on nodes that cannot be reached from the current node (e.g., nodes representing a grandchild of the current node) has no effect. Clicking on areas outside of the BST has no effect either.

Verbal Commands Interface

When interacting with a BST gamelet using verbal commands, the graphical interface is exactly the same as the traditional interface shown in Figure 7.2 (a). However, the key-action mapping is replaced by a command-action mapping. The user tells an assistant what the next step is using verbal commands. There are five commands, each of which maps to one action: "Root" (go to the root of the tree), "Left" (go to the left child of the current node), "Right" (go to the right child of the current node), "Insert Left" (insert the target node as a new left child of the current node), and "Insert Right" (insert the target node as a new right child of the current node). Upon hearing the user's command, the assistant presses the corresponding key on the remote as fast as possible: A for "Root", B for "Left", C for "Right", D for "Insert Left", and E for "Insert Right".

7.2 Participants

In order to replicate a classroom scenario in which students are often not already familiar with the material being presented, we only recruited participants who had never heard of a BST. A total of 26 participants were recruited. Data from two participants were not used because their raw data files were accidentally deleted during a file transfer. The remaining 24 participants were all UBC students, 10 males and 14 females. There were 4 master's students, 3 doctoral students, and 1 postdoctoral fellow. All the others were undergraduate students (4 had just graduated with a bachelor's degree). Participants came from diverse academic backgrounds: 4 from the Faculty of Applied Science, 9 from the Faculty of Arts, 5 from the Sauder School of Business, 4 from the Faculty of Science, 1 from the Faculty of Land and Food Systems, and 1 from the Faculty of Forestry. Fourteen participants had used an i>clicker before.

All participants received compensation of $10 for their participation, regardless of their performance.

7.3 Apparatus

The experiment was carried out at UBC in ICICS X521. Each participant sat at a desk, facing a large screen 6 meters away. Both a mouse and an i>clicker remote were provided. The screen was about 2.85 meters wide and 2.15 meters high, with a resolution of 1280 x 960 pixels. The software ran on a laptop (Gateway T6308C) running Windows 7 Ultimate. The Clic^in software, described in Chapter 5, was used in the experiment, modified to accept and process mouse input in addition to clicker input and to keep a record of all remote key clicks (timestamp and key clicked) and mouse clicks (timestamp and coordinates). The BST gamelet, described in Chapter 6, was modified to also generate the color-coded interface.

7.4 Task and Procedure

After signing a consent form, participants were instructed to carry out four blocks of trials, followed by a short survey after all blocks were completed.
Each block consisted of 5 BSTs, each with 7 existing nodes and 8 additional nodes waiting to be inserted. A trial was the insertion of one node into a BST, so each BST was used for eight consecutive trials before a new BST was encountered.

Participants used four different interaction techniques, one per block. Thus, the overall design of the experiment was 24 participants x 4 interaction techniques x 5 trees x 8 nodes, for a total of 3840 trials. The order of the four blocks was counterbalanced. Before each block started, there was a short training session in which participants practiced by inserting two nodes into a BST. The goal of the training session was to help participants better understand how the task should be performed. We avoided a longer training session because it might have resulted in significant learning effects. For all tasks, participants were told that accuracy had higher priority than speed.

For error handling, two cases were explained to all participants. First, whenever participants realized that they had traversed in the wrong direction (e.g., they should have gone left but went right instead), they were to correct the error. Second, if a node was inserted in an incorrect place, a red cross appeared over that node, and participants had to correct the error before inserting the next node. In both cases, participants corrected the error by going back to the root and restarting the traversal. If either type of mistake occurred in a trial, that trial was counted as a failed trial; otherwise it was a successful trial. Whether each trial was successful was recorded. Details of each task are described below.

7.4.1 Remote with Traditional Interface

This task measured the cognitive load of interacting with a traditional BST gamelet using a remote. Participants performed this task using an i>clicker remote, with the associated instructions displayed beside the gamelet.

7.4.2 Remote with Color-coded Interface

This task measured the cognitive load of interacting with the color-coded BST gamelet using a remote. Participants learned the color-key mapping before the experiment. To help them remember the mapping, a separate training session was carried out before this task, in which a colored rectangle was displayed on the big screen and participants were told to press the corresponding key based on the color of the rectangle. The training session ended when the error rate over the last 25 trials dropped below ten percent.

7.4.3 Mouse

The goal of this task was to determine the cognitive load of interacting with a BST gamelet when using a mouse, one of the most common input devices. The mouse interface was used in this task.

7.4.4 Verbal Commands

This task measured the cognitive load of a BST gamelet itself, without using any device. The verbal commands interface was used, and the experimenter acted as the assistant for the user. We were aware that the completion time measured in this task may not be accurate because it included the time it took for participants to speak the command and the time it took for the experimenter to respond. However, we assumed that the error rate would still be an accurate estimate of the cognitive load when interacting with a BST gamelet without using any physical device.

7.4.5 Post-experiment Survey

In the survey, participants were asked about their background, including gender, major, and year of study.
They were also asked about their experience with i>clickers, including whether they had used one before, the number of courses they took in Winter Term 2012, and how many of those courses required an i>clicker. Participants were also asked for their general opinion of gamelets, including whether they had seen, heard of, or experienced other uses of i>clickers in class besides answering multiple-choice questions, their preference regarding gamelets, and the reasons why they liked or disliked them. At the end of the survey they were encouraged to provide any other comments.

7.5 Hypotheses

The main goal of this experiment was to compare the cognitive load of interacting with a BST gamelet using different interaction techniques. More specifically, we were interested in comparing the cognitive load when using a remote with that when using a mouse. We also wanted to compare the cognitive load of the two graphical interfaces (traditional and color-coded) when using a remote.

We had two primary hypotheses. The first hypothesis dealt with error rate, which was computed as follows. For each interaction technique, each participant performed 8 x 5 = 40 insertions. The error rate was obtained by dividing the number of unsuccessful insertions by the total number of insertions (40).

We hypothesized that there would be a significant effect of interaction technique on error rate. More specifically, we expected that using a remote would produce significantly more errors than using a mouse or verbal commands, because it is possible for a participant to press the wrong key even when he or she understands which action to take. For example, a participant who wants to go to the left child, and understands that the 'B' key should be pressed, might instead press the adjacent 'C' key. Additionally, we expected that when using a remote, the color-coded interface would be less error-prone than the traditional interface.

The second hypothesis was about completion time, which was computed only for successful trials. The task completion time for a successful trial was measured from the moment the target node appeared at the top left of the screen to the moment the node was successfully inserted into the tree.

We hypothesized that there would be a significant effect of interaction technique on task completion time. More specifically, we expected that using a remote would be significantly slower than using a mouse. In addition, we expected that interacting with the color-coded interface when using a remote would be significantly faster than interacting with the traditional interface.

We had one additional hypothesis regarding the BST gamelet itself, to help us better understand it. We broke a successful trial down into four steps: selecting the root, the first branch, the second branch, and inserting the node. In each step, the current node gets one level deeper. We hypothesized that for all interaction techniques, there would be a significant effect of step on both interaction time and error rate. More specifically, as the current node gets deeper, it should be faster for participants to make a decision, and the decision should be less error-prone. This is because when the current node gets deeper, participants are comparing characters closer to each other in the alphabet, which should be simpler (i.e.,
we assumed that comparing I and J should be simpler than comparing I and F).

To conclude, the following hypotheses were tested in the experiment.

H1: Interaction technique affects error rate. More specifically, using a remote is more error-prone than using a mouse or verbal commands. When using a remote, the color-coded interface is less error-prone than the traditional interface.

H2: Interaction technique affects task completion time. More specifically, using a remote is slower than using a mouse. When using a remote, interacting with the color-coded interface is faster than interacting with the traditional interface.

H3: For all interaction techniques, the depth of the current node affects error rate. More specifically, when the current node gets deeper, the decision is less error-prone.

H4: For all interaction techniques, the depth of the current node affects interaction time. More specifically, when the current node gets deeper, it takes less time to perform a correct action.

7.6 Results and Discussion

We describe the results obtained from the experiment. We first look at the error rate and the per-step error rate. Then we present task completion time and per-step interaction time. After that, we describe how participants' performance improved over time as they gained experience interacting with a BST gamelet. Finally, we present feedback from the post-experiment survey.

7.6.1 Error Rate

The first dependent variable is error rate. The average error rate for each interaction technique was as follows: 2.0% for the mouse, 7.2% for verbal commands, 15.4% for the remote with the color-coded interface, and 19.4% for the remote with the traditional interface. A repeated measures ANOVA revealed a significant effect of interaction technique on error rate (F(2.39, 54.9) = 25.1, p < .005, ηp² = .52). Post-hoc analysis using Tukey's method showed that when using a remote, regardless of which interface was used, the error rate was significantly higher than when using verbal commands (q(4, 68) = .12, p < .001 for the traditional interface and q(4, 68) = .08, p < .005 for the color-coded interface). Similar results were obtained between the remote and the mouse: there was a significant difference (q(4, 68) = .16, p < .001 for the traditional interface and q(4, 68) = .12, p < .001 for the color-coded interface). When using a remote, the two interfaces were not significantly different in terms of error rate (q(4, 68) = .04, p > .05). However, participants made significantly fewer errors when using a mouse than when using verbal commands (q(4, 68) = .04, p < .005).

Discussion

Most of the results were anticipated, except that using a mouse led to significantly fewer errors than verbal commands; there should have been no difference if verbal commands themselves introduced few errors. The most likely explanation for the 5.2 percent difference is that participants sometimes made mistakes when speaking the command, e.g., intending to go left but saying "Right".

Conclusion

To summarize, H1 was partially confirmed: interaction technique affects error rate, and using a remote is more error-prone than using a mouse or verbal commands. However, the two types of interface do not differ significantly from each other in terms of error rate.

7.6.2 Error Rate in Each Step

We next examined the error rate in each step.
A two-way repeated measures ANOVA revealed significant effects of both interaction technique (F(2.39, 54.9) = 25.1, p < .001, ηp² = .52) and step (F(2.25, 51.8) = 24.6, p < .001, ηp² = .52) on error rate. The interaction was significant as well (F(4.47, 103) = 5.4, p < .001, ηp² = .19). Figure 7.3 shows the average error rate for each interaction technique in each step.

Figure 7.3: Error rate for each interaction technique and step combination.

Discussion

From the above results, we conclude that the depth of the current node does affect error rate. In Figure 7.3 we see that the first step, selecting the root, had the lowest error rate for all interaction techniques. This can be explained by the fact that selecting the root was the easiest step, involving little decision-making. For all interaction techniques except the mouse, the error rates at the second branch were lower than those at the first branch, which is consistent with our hypothesis. It was interesting to see how the error rates in the last step changed differently across interaction techniques. For the mouse and verbal commands, the error rates in the third step and the last step were quite close (q(4, 68) = .13, p > .05 for the mouse and q(4, 68) = .083, p > .05 for verbal commands). However, when remotes were used, a significant increase was observed (q(4, 68) = 1.12, p < .01 for the color-coded interface and q(4, 68) = 1.08, p < .01 for the traditional interface). That is to say, participants tended to make more mistakes in the last step when using a remote.

To understand why the last step was more error-prone when using a remote, the results in each step were further broken down into bins according to which key should have been pressed and which key was actually pressed. Tables 7.1 (a) to 7.1 (d) show this breakdown for both the traditional interface and the color-coded interface when using a remote.

From the table we can see that there are two main reasons why the last step had higher error rates for both interfaces. First, in the third step (second branch), few participants pressed the 'D' key or the 'E' key. This is understandable because the 'D' and 'E' keys are only used in the last step (inserting the node). In contrast, in the last step more participants pressed the 'B' key and the 'C' key, probably because they forgot to switch to the 'D' and 'E' keys when trying to insert a new node.

Second, the error rate caused by confusion between the 'D' key and the 'E' key in the fourth step (should have inserted as a new left child but actually inserted as a new right child, and vice versa) was 3.981% for the traditional interface and 2.217% for the color-coded interface, both of which were higher than the error rate caused by confusion between the 'B' key and the 'C' key in the third step (should have branched left but actually branched right, and vice versa), which was 1.981% for the traditional interface and 1.6% for the color-coded interface.

The second reason could be treated as a side effect of switching from the 'B' and 'C' keys to the 'D' and 'E' keys. This is supported by the fact that the error rate in the last step did not increase for the interaction technique that does not involve any device (verbal commands) or for the technique that requires a device but no such switch (the mouse).

Conclusion

To summarize, H3 was rejected: the depth of the current node does affect error rate,
7.6.3 Completion Time

Completion time was measured in milliseconds. As expected, a repeated-measures ANOVA showed that there was a significant effect of interaction approach on completion time (F(3, 69) = 5.74, p < .005, η²p = .20), with the mouse being the fastest (6446 ms), followed by verbal commands (7517 ms), then the remote with the color-coded interface (8230 ms), and finally the remote with the traditional interface (8751 ms). Post-hoc analysis using Tukey's method revealed that a mouse was significantly faster than a remote (q(4, 68) = 1783, p < .005 for the color-coded interface, and q(4, 68) = 2304, p < .005 for the traditional interface). Although interacting with the color-coded interface was faster than with the traditional interface, the difference was not significant (q(4, 68) = 521, p > .05).

Conclusion

From the above results we conclude that H2 was partially confirmed: interaction technique affects completion time, and using a remote is slower than using a mouse for interacting with a BST gamelet. However, the advantage of the color-coded interface over the traditional interface is not significant.

7.6.4 Interaction Time in Each Step

We then looked at the interaction time in each step. A two-way repeated-measures ANOVA revealed a significant effect of interaction technique (F(2.59, 59.6) = 5.74, p < .005, η²p = .20) and step (F(2.33, 53.5) = 18.5, p < .001, η²p = .45). There was a significant interaction effect as well (F(4.85, 112) = 29.3, p < .001, η²p = .56). Figure 7.4 shows the average interaction time for each interaction technique in each step.

Figure 7.4: Interaction time for each interaction technique and step combination.

Discussion

We first share some observations obtained from the graph. We found that participants spent the longest time in the first step only when using a mouse. This was because the cursor was located at the bottom of the screen after one trial was over, and it took time for participants to move the cursor to the top and to select the root when inserting the next target node. Unlike the other interaction techniques, when using verbal commands the time spent at different steps was not significantly different (F(4.85, 112) = 1.07, p > .05, η²p = .045). 
This was because verbal commandswas the only interaction technique that did not require participants to use any de-vice. Instead it was the experimenter who applied all commands using a remoteand the experimenter was equally fast to respond to all verbal commands. Finally,selecting root took the least time among all steps when using a remote, which wasexpected because selecting root is straightforward.We now look at how the interaction time changed as the current node gotdeeper. We excluded verbal commands in the following analysis because as men-tioned above, there was no significant effect of step on interaction time. For therest three interaction techniques, significant drops of interaction time were ob-served from the first branch to the second branch (q4,68 = 368, p< .001 for mouse,q4,68 = 477, p < .005 for remote with traditional interface, q4,68 = 1068, p < .001for remote with color-coded interface). This is consistent with our hypothesis: ittakes less time to perform an action when the depth of the current node increases.However if we look at the time it took to perform the last step, contrary resultswere observed. No significant decrease was found for all interaction techniques(q4,68 = 15.3, p > .05 for mouse, q4,68 = 99.4, p > .05 for remote with traditionalinterface, q4,68 = 49.2, p > .05 for remote with color-coded interface). Thus, wecannot accept our hypothesis.547.6. Results and DiscussionConclusionTo conclude, H4 was rejected: depth affects interaction time. However the interac-tion time does not necessarily decrease as the current node gets deeper. It dependson both where the current node is and what the interaction technique is.7.6.5 Performance ImprovementWe were also interested in how participants? performance improved as they weremore and more experienced in interacting with a BST gamelet. We depict the errorrate and interaction time for each BST in Figure 7.5 and Figure 7.6. We found thaterror rate did not change much over time (F4,92 = .31, p > .05, ?2p = .013), as canbe seen in Figure 7.5. However, there was a significant drop of completion time(F4,92 = 5.48, p < .001, ?2p = .192). Figure 7.6 shows that it took longest time forparticipants to complete the first BST for all interaction techniques. Later on theyspent less and less time for each BST.Figure 7.5: Error rate from the first tree to the last (fifth) tree when using differentinteraction techniques.557.6. Results and DiscussionFigure 7.6: Interaction time from the first tree to the last (fifth) tree when usingdifferent interaction techniques.In order to explore the training effects more thoroughly and understand thelowest possible error rate and the shortest interaction time when using a remote, afollow-up between-subjects experiment was carried out. This time newly recruitedparticipants were asked to perform the same task with a remote using either thetraditional interface or the color-coded interface. Each participant was required tocomplete a total of sixteen trees. Ten participants were recruited, and each interfacewas tested by five participants. Figure 7.7 and Figure 7.8 shows the averaged errorrate and interaction time for each tree.Figure 7.7: Error rate from the first tree to the last (sixteenth) tree when usingcolor-coded interface and traditional interface.567.6. 
Results and DiscussionFigure 7.8: Interaction time from the first tree to the last (sixteenth) tree when usingcolor-coded interface and traditional interface.DiscussionFigure 7.7 shows that for both interaction techniques, from the forth BST the errorrate started oscillating between 5% and 25%, with an average value of 16% for thetraditional interface and 13% for the color-coded interface. On the contrary, onecan found in figure 7.8 that it took longer time for the interaction time curves toconverge for both techniques. The best performance for both interaction techniquesappeared in the last tree, 5595 milliseconds for color-coded interface and 5251milliseconds for traditional interface. It is possible to get smaller values if weincreased the task size, yet a value of 5 seconds should be a good estimate of thebest performance for both interaction techniques. To summarize, it took longertime for participants to achieve their highest speed.7.6.6 Results from SurveyWe present survey results provided by our participants. When asked whether therewere other ways of using i>clicker in the class besides completing multiple-choicequestion, one participant mentioned that some SRS remotes had number keys sothat students could answer arithmetical questions. Another participant mentionedthat instructors used i>clicker to call the roll. When asked if gamelet was a goodway to learn BST in class, 7 participants displayed an extremely positive attitude,10 for mildly positive, 6 for neutral and 1 for mildly negative. Participants whomaintained an extremely positive attitude or positive attitude were asked to explaintheir reasons. The most popular reason, mentioned by 12 participants, was thatgamelet helped increasing classroom participation and sense of involvement. A577.7. Conclusiontotal of 11 participants mentioned that the interface was properly designed so thatit was easy to see and understand the result of interaction on the big screen. Addi-tionally, 8 participants valued Clic^in?s combination of learning and entertainment,and 7 participants demonstrated support because most students already owned ani>clicker remote. One participant who demonstrated a negative attitude explainedthat it would take too much time in the class to learn how to interact with a gamelet.The time could be better used to cover material that was more helpful to students.Participants also provided feedback regarding to the design of both the inter-face and the device. Three participants mentioned that it was easier for them tomemorize the mapping and to click the keys when holding the remote horizontally.Two participants expected a mobile application that could substitute the physicalremote. Two participants argued that it was important to include an ?undo? func-tionality so that one did not always need to go back to the root to restart the entiretrial in case he/se made a mistake. One participant mentioned that gamelet wasa good idea to support interaction besides completing multiple-choice question.One participant pointed out that for the color-coded interface, he/she preferred adifferent color schema that was similar to the one in the rainbow, which might beeasier to memorize. Although that participant also pointed out that different usersmight have different ideas in terms of what color schema worked out best. Oneparticipant believed that it would be useful to color-code the remote as well (byputting colored stickers) so that users could refer to the color-key mapping moreeasily. 
One participant mentioned that the color-key mapping was hard to remember, and another pointed out that explicitly showing the mapping information somewhere on the screen would be more convenient.

7.7 Conclusion

In this section we summarize the results obtained from this experiment. There was a significant performance gap between a remote and a mouse when interacting with a BST gamelet. Using a remote with the traditional interface leads to 17.4% more errors than using a mouse; the percentage drops to 13.4% for the color-coded interface. For completing one trial, a remote is about two seconds slower than a mouse. We also compared the two types of interface when using a remote, i.e. the traditional interface and the color-coded interface. Although better performance was observed for the color-coded interface in terms of both error rate and interaction time, the difference was not statistically significant in either case. For a BST gamelet, the error rate and interaction time in each step varied, depending on both the depth of the current node and the interaction technique. Additionally, as participants gained more training experience, they were able to improve their performance in terms of both error rate and interaction time.

Although it is true that in this experiment a mouse is superior to a remote in terms of both error rate and completion time, we would like to justify the use of the remote in a real classroom situation. Students bring remotes to class, but few students will bring a mouse, because nowadays most laptops include a touchpad. Moreover, we would like to emphasize that Clic^in was not designed for a single user; support for collaboration is one of the most important design features we had in mind. It is technically impossible for a large group of students to control the same cursor using many mice at the same time in a class. Additionally, when deploying Clic^in in a real class, we found that most of the time was spent waiting for the number of students choosing the correct answer to exceed the threshold, which made the two-second delay almost negligible. The same applies to error rate: although some students clicked the wrong keys, as long as the majority responded correctly the game could still proceed without any problem.
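To make the class-wide aggregation just described concrete, the following is a minimal, hypothetical sketch (not the actual Clic^in code) of how incoming votes could be tallied until the count of students choosing the correct action exceeds a configurable threshold. The class name, method names, and the idea of counting only each remote's latest vote are assumptions for illustration.

import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch: advance a gamelet step once enough students have voted for the correct action. */
public class ThresholdAggregator {

    private final Map<String, Character> lastVoteByClicker = new HashMap<>();
    private final char correctKey;
    private final int threshold;          // e.g. a simple majority of the remotes seen so far

    public ThresholdAggregator(char correctKey, int threshold) {
        this.correctKey = correctKey;
        this.threshold = threshold;
    }

    /** Called for every vote forwarded by the base-station driver. Returns true once the step may proceed. */
    public boolean onVote(String clickerId, char key) {
        lastVoteByClicker.put(clickerId, key);            // only a student's latest vote counts
        long correct = lastVoteByClicker.values().stream()
                                        .filter(k -> k == correctKey)
                                        .count();
        return correct >= threshold;
    }
}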
Chapter 8
Selection Tool: Slide Navigation and Content Highlighting using a Remote

Presentation slides are a popular way to share information in a classroom. They have several advantages over a traditional blackboard, such as off-line authoring, easy editing, and quick distribution. During a lecture, the presentation software is usually controlled by the instructor, not by students. Students' lack of direct control can be inconvenient, especially when a student wants to navigate the slides or to refer to content on a particular slide. In this chapter, we first provide a detailed description of the research question. We then present our solution, a tool for slide navigation and content highlighting that lets students perform these actions when enabled by the instructor. We demonstrate how the tool can be used from a student's perspective. We explore the cognitive load of the content highlighting tool in the next chapter.

8.1 Motivation

In this section we explain the motivation for the Selection Tool with two scenarios. The first scenario shows that it is sometimes useful to allow students to have direct control over slide navigation. The second scenario explains why visually highlighting content on the big screen can benefit both the instructor and the students.

Imagine the following scenario. An instructor is talking about the concept of a BST, and the presentation software is showing a slide of a BST on the big screen. One student is confused by the difference between a BST and a regular binary tree. However, the concept of a binary tree is not described on the current slide, but on one of the previous slides, so the student would like to go back to that slide and have a closer look. The student raises his/her hand and asks the instructor to go back one slide. Because more than one slide talks about binary trees, the student does not know exactly which one he/she wants, and therefore has to ask the instructor to go back one slide at a time. This process may take a long time, with the whole class sitting in the classroom waiting. It can be sped up if the student who asks the question can navigate the slides himself/herself, which is faster than telling the instructor what to do.

The next scenario explains why visually highlighting content on the big screen can be beneficial. In a machine learning class, the instructor is deriving an equation. At the end, the current slide is full of mathematical terms and symbols (see Figure 8.1). One student realizes that there is a mistake on the slide: one theta is missing a subscript. The student raises his/her hand and tells the instructor. However, the instructor, along with the whole class, has no idea which theta the student is referring to, because there are more than twenty theta symbols on the slide. The instructor therefore asks the student to be more specific about where that theta is. The student then spends some time working out which line and which term it is in, and tells the instructor that the theta is in the sixth line, the first theta in the fourth term inside the parentheses. The instructor then follows the student's directions and finds the theta. Meanwhile, any students who are taking notes also have to repeat this process to correct their notes. This can be very time-consuming. One solution is to let the student highlight that theta on the big screen. Because the student knows exactly where the theta is, this process should be more efficient. Additionally, providing visual feedback on the big screen allows everyone to see clearly where the theta is.

Figure 8.1: A slide that contains a mistake (modified from [19]). The first theta of the fourth term inside the parentheses in the sixth line is missing a subscript n.

Both the slide navigation task and the content highlighting task require an interaction platform through which a student can provide his/her input. Because the i>clicker system is already widely used in classes, we explore approaches that support slide navigation and content highlighting using a clicker remote.

8.2 Overview of the Selection Tool

We built a tool that enables a student to navigate slides and to highlight content using a remote. Before going into the details of how the tool works, we describe how to grant control only to the student who raises a hand or asks a question. As described in Chapter 2, the base station has only two modes, "accepting" and "not-accepting". Granting control to one student means putting the base station into "accepting" mode, thus granting control to all the other students at the same time. 
We assume that this works most of the time because students will follow social convention: they will let the peer who asked the question control the tool. However, this becomes problematic when other students accidentally hit the keys on their remotes. To solve this problem, we adopted a "first come, first only" strategy: when the instructor grants control to students (by pressing the "E" key on his/her remote, which puts the base station into "accepting" mode), the student who clicks first is granted control. Votes from other students can still be received by the base station, but they are discarded by the Selection Tool. The instructor reclaims control by pressing the "E" key again, which puts the base station into "not-accepting" mode. The competition for control still relies on social convention, but this is much more reliable than allowing everyone to control the tool during the entire navigation or highlighting process.

In summary, the instructor presses the "E" key to allow students to compete for control, and the student who clicks first is granted control. The student presses the "C" key and the "D" key to move forward and back one slide, and the "E" key to initiate the content highlighting tool (hereinafter referred to as the highlighting tool). At any point during slide navigation or content highlighting, the instructor can reclaim control by pressing the "E" key again. The next section is dedicated to the design of the highlighting tool.
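Before turning to the highlighting tool, here is a hedged sketch of the "first come, first only" hand-off just described. It is not the Selection Tool's actual source; the class and method names are invented for this example, and the slide and highlighting actions are left as placeholders.

/** Hypothetical sketch of the "first come, first only" control hand-off in the Selection Tool. */
public class ControlArbiter {

    private String controllingClickerId = null;   // null: no student currently holds control
    private boolean accepting = false;            // mirrors the base station's "accepting" mode

    private final String instructorId;

    public ControlArbiter(String instructorId) {
        this.instructorId = instructorId;
    }

    /** Called for every vote forwarded by the base-station driver. */
    public void onVote(String clickerId, char key) {
        if (clickerId.equals(instructorId) && key == 'E') {
            accepting = !accepting;                // the instructor's "E" toggles granting/reclaiming control
            controllingClickerId = null;
            return;
        }
        if (!accepting) {
            return;                                // base station is in "not-accepting" mode
        }
        if (controllingClickerId == null) {
            controllingClickerId = clickerId;      // first student to click wins control
        }
        if (!clickerId.equals(controllingClickerId)) {
            return;                                // votes from everyone else are discarded
        }
        switch (key) {
            case 'C': nextSlide();        break;   // forward one slide
            case 'D': previousSlide();    break;   // back one slide
            case 'E': startHighlighting(); break;  // hand the remaining keys to the highlighting tool
            default:  /* ignore */        break;
        }
    }

    private void nextSlide()         { /* advance the presentation */ }
    private void previousSlide()     { /* go back in the presentation */ }
    private void startHighlighting() { /* enter the highlighting tool's Shrink mode */ }
}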
This process continues for severaliterations until the size of the highlighted area is similar to the size of the target,although the highlighted area may be slightly off the target (see Figure8.2 (e)).To make them better overlap, a user presses the ?E? key, which brings the tool into?Translate? mode. In ?Translate? mode, all Labels change their positions: now firstfour keys represent the direction of translation (see Figure8.2 (f)). By hitting one ofthese keys, the highlighted area translates accordingly. Once the highlighted areaalmost covers the target, the user can press a final ?E? key to finish the highlightingprocess.We are not yet sure of the cognitive load of our highlighting tool in terms ofinteraction speed and error rate. Next chapter describes an experiment that investi-gates the cognitive load involved in this process.638.3. Design of the Highlighting ToolFigure 8.2: Content highlighting process: (a). when the process starts, the entirescreen is highlighted; (b). ?D? key is pressed to highlight the bottom right part; (c).?A? key is pressed to highlight the top left part; (d). ?D? key is pressed to highlightbottom right part; (e). ?C? key is pressed to highlight bottom left part; (f). ?E? keyis pressed, which brings the tool from Shrink mode to Translate mode; (g). ?A? keyis pressed and highlighted area moves up; (h). ?C? key is pressed and highlightedarea moves right.64Chapter 9Performance Assessment of theContent Highlighting ToolIn the last chapter we described the Selection Tool, an application that allows astudent to navigate slides and to highlight contents. However, we were not sureif the highlighting tool is fast and accurate enough to complete the highlightingtask. An experiment was carried out to evaluate the usability of the highlightingtool. The experiment compared our highlighting paradigm with a mouse, in termsof both interaction speed and error rate.We were aware that relying on a remote, our highlighting tool was slower than amouse. However, because mouse is the most popular and efficient input device forthe single user highlighting task, such a comparison may help us better understandthe discrepancy, and to help us discover any advantage of our highlighting toolover a mouse. Additionally, we would like to examine the error rate when usingthe highlighting tool, as well as how the performance changes when highlightingtargets with different sizes. Finally, because our highlighting paradigm is verydifferent from that using a mouse, we would like to examine some of the aspectsthat is unique to the highlighting tool, such as the effect of number of hands usedon performance, etc. This chapter describes the experiment and reports the results.9.1 ParticipantsA total of 25 participants were recruited for the study. Among them, 4 participants?results were not used, because the system parameters for their experiments wereincorrectly set. One participant misunderstood the instruction and his/her resultwas also discarded. The rest 20 participants were all UBC students, 15 males and 5females. There were 11 master?s students and 4 doctoral students, with the rest fivebeing undergraduate students. 11 of them came from the Faculty of Science, 6 fromthe Faculty of Applied Science and 3 from the Faculty of Arts. Nine participantshad used an i>clicker remote before.All participants were compensated with $10 for their participation, regardlessof their performances.659.2. Apparatus9.2 ApparatusThe experiment was carried out at UBC in ICICS X521. 
Chapter 9
Performance Assessment of the Content Highlighting Tool

In the last chapter we described the Selection Tool, an application that allows a student to navigate slides and to highlight content. However, we were not sure whether the highlighting tool is fast and accurate enough to complete the highlighting task. An experiment was carried out to evaluate the usability of the highlighting tool. The experiment compared our highlighting paradigm with a mouse, in terms of both interaction speed and error rate.

We were aware that, relying on a remote, our highlighting tool would be slower than a mouse. However, because the mouse is the most popular and efficient input device for the single-user highlighting task, such a comparison may help us better understand the discrepancy, and help us discover any advantages of our highlighting tool over a mouse. Additionally, we wanted to examine the error rate when using the highlighting tool, as well as how performance changes when highlighting targets of different sizes. Finally, because our highlighting paradigm is very different from that of a mouse, we wanted to examine some of the aspects that are unique to the highlighting tool, such as the effect of the number of hands used on performance. This chapter describes the experiment and reports the results.

9.1 Participants

A total of 25 participants were recruited for the study. Among them, 4 participants' results were not used because the system parameters for their experiments were incorrectly set. One participant misunderstood the instructions and his/her results were also discarded. The remaining 20 participants were all UBC students, 15 males and 5 females. There were 11 master's students and 4 doctoral students, with the remaining five being undergraduate students. Eleven of them came from the Faculty of Science, 6 from the Faculty of Applied Science and 3 from the Faculty of Arts. Nine participants had used an i>clicker remote before. All participants were compensated with $10 for their participation, regardless of their performance.

9.2 Apparatus

The experiment was carried out at UBC in ICICS X521. Each participant sat at a desk, facing a large screen 6 meters away. Both a mouse and a clicker remote were provided. The screen was about 2.85 meters wide and 2.15 meters high, with a resolution of 1280 x 960 pixels. The software ran on a laptop (Gateway T6308C) with Windows 7 Ultimate installed. A customized version of the highlighting tool described in the previous chapter was used. It was modified so that key clicks were recorded, including which key was pressed and the timestamp. The software also kept a record of information such as target size, target location, and when and where a mouse click took place.

9.3 Task and Procedure

After signing a consent form, participants were told to complete two blocks of trials, followed by a short survey after both blocks were completed. In both blocks participants were asked to highlight targets on the screen, but with different devices, either a mouse or a clicker remote. The order of the two blocks was counter-balanced. After the instructions were explained, participants were asked to practice until they fully understood how to use both devices.

Although in a real class the task would be highlighting a word on a slide, the scenario in this experiment was simplified to highlighting a rectangle on a blank slide. At any time there was only one rectangle on the slide, which appeared at a random location. There were three sizes of rectangles: small, medium, and large.

To highlight a target using a mouse, a participant moved the mouse and clicked inside the rectangle. We believe this is appropriate because the most common approach to highlighting a word on a slide is to move the mouse so that the cursor hovers over the target. Once the participant clicked, the target would disappear, regardless of whether the click was inside or outside of the rectangle, and a new target would appear at another random position on the slide. All participants were told that accuracy had higher priority than speed.

To highlight a target using the highlighting tool, participants were told to shrink the highlighted area until they felt its area was close to that of the target. Then participants switched to Translate mode and moved the highlighted area until it roughly covered the target. The criterion was the same: accuracy came first, followed by speed.

Similar to the survey in the previous experiment, at the end of this experiment participants were asked about their background, their experience with i>clicker, and whether they had seen/heard of/experienced other ways of using i>clicker besides completing multiple-choice questions. Additionally, they were asked whether they had seen/heard of/experienced other ways of referring to a word on a slide besides verbally mentioning its location. They were also encouraged to talk about the usability of the highlighting tool, including the effectiveness of the Separators and Labels, the number of hands used during the experiment, etc.

The experiment was a two-way within-subjects design, with one variable being device and the other being target size. If a trial is defined as the process of highlighting one target, each cell contains 30 trials. That is to say, each participant carried out 30 trials for each target size and device combination. In the experiment, targets of different sizes within the same device type were randomly shuffled and mixed together. 
The overall design of the experiment was: 20 participants x 2device types x 3 target sizes x 30 trials, a total of 3600 trials.9.4 HypothesesThere were three goals in mind before carrying out the experiment. First we wouldlike to compare the usability of the highlighting tool with that of a mouse, in termsof both task completion time and error rate. Second we were interested in howthe performance of the highlighting tool changed depending on the target size.Third, we looked at results obtained solely from the highlighting tool from differentperspectives, which we hoped could help us better understand some of the aspectsthat were unique to the highlighting tool.We had two hypotheses regarding the comparison between a mouse and thehighlighting tool. First, we hypothesized that there would not be a significant effectof device type on error rate. Namely, we assumed that the highlighting tool couldperform as well as a mouse in terms of error rate since participants were told thataccuracy had higher priority over speed.Error rate was computed by dividing the total number of unsuccessful trials bythe total number of trials. An unsuccessful trial is defined differently for differentdevices. For a mouse, a trial is unsuccessful if the point clicked is outside of therectangle. For the highlighting tool, the process is unsuccessful if it meets anyof the following three conditions: 1. at any time of Shrink, the highlighted areafails to cover any part of the target (see Figure 9.1 (a)); 2. the final highlightedarea after Translate fails to cover any part of the target; 3. the final highlightedarea covers part or all of the target, but the size of the highlighted area is too large(refer to Figure 9.1 (b) as an example). We added the third condition because inreal case an overly large highlighted area does not help the audience to understand679.4. Hypotheseswhich is the target word, because there may be many words being highlighted. Inthis experiment, only highlighted area after six or seven times of Shrink (seven isthe largest number allowed in the highlighting tool) were considered to be smallenough for all targets. Error rate for each device type and target size combinationwas calculated respectively.Figure 9.1: Examples of unsuccessful highlighting trials: (a). the highlighted areadoes not cover any part of the target; (b). the highlighted area covers the target, butit ends up being too large.Second, we hypothesized that it would take longer time for the highlighting toolto complete the highlighting task, because the highlighting tool requires multipleclicks, while for a mouse only a translation and a click is required.Task completion time was calculated only based on successful highlightingprocesses. For a mouse, it starts when the target appears on the screen, and endswhen the user performs a click. For the highlighting tool, it also starts when thetarget appears on the screen, and ends when the user clicks a second ?E? key, whichindicates the end of a trial.We also had two hypotheses related to the difference of performance whenhighlighting different size of targets using the highlighting tool. First, we expectedthat target size would not affect error rate. It is true that when using a mouse it ismore error-prone to select smaller targets [20]. 
However, we assumed that this would not be the case for the highlighting tool: highlighting a Small target requires only one more Shrink step than a Large or Medium target (a total of seven Shrink steps instead of the six that are usually sufficient for Large and Medium targets), which should not increase the error rate significantly.

Second, we expected that there would be a significant effect of target size on completion time. More specifically, the smaller the target, the slower the process would be. This hypothesis was based on the assumption that highlighting smaller targets requires one extra Shrink step.

Finally, we made four more hypotheses solely for the highlighting tool, which we hoped would help us better understand some details and features that are unique to this highlighting paradigm. First, we hypothesized that previous experience with remotes would not affect highlighting speed. We assumed that the highlighting task was relatively easy for novice users and that they would be able to achieve high speed without special training.

Second, we were interested in how participants held the remote when highlighting the target, especially how many hands were used. We expected that bi-manual interaction would be faster than single-handed interaction when using a remote.

Third, we define Coverage to be the percentage of the highlighted area that lies inside the target (see Figure 9.2). Since it is usually impossible for the highlighted area and the target to overlap perfectly, this metric helps us understand the best extent to which they can overlap. We hypothesized that the smaller the target, the lower the Coverage would be.

Figure 9.2: The black rectangle is the target, the white area is the highlighted area, and the red area is the part of the highlighted area that is inside the target.

Fourth, we wanted to look further at the Shrink mode. We hypothesized that there would be a significant difference in per-click time across the sequence of clicks in Shrink mode, i.e. that the time it took to make the 1st, 2nd, 3rd, ..., 6th click in Shrink mode would differ. We expected that the time would increase 
A repeated measure ANOVA showed that there was no significant effectof size on error rate (F1.97,37.3 = .14, p> .05, ?2p = .007). Contrary to our anticipa-tion, there was a significant effect of device (F1,19 = 14.5, p < .005, ?2p = .43) onerror rate. The interaction effect was not significant (F2.00,37.9 = .34, p> .05, ?2p =.018). Figure 9.3 shows the average error rate for each device and size combina-tion.ConclusionFrom these results, we rejected H1: error rate was higher when using a remote.However H2 was accepted: target size did not affect the error rate of highlighting.However, we would like to emphasize that while there was a significant effect ondevice, a discrepancy of about four percent is still acceptable.709.5. Results and DiscussionFigure 9.3: Error rate for each device and size combination.9.5.2 Highlighting TimeThe next dependent variable was time (in millisecond), the inverse of which was anindicator of speed, again with device and target size being independent variables.As anticipated, a repeated measure ANOVA revealed significant effect of device(F1,19 = 226, p < .001, ?2p = .92) and size (F1.80,34.3 = 14.5, p < .001, ?2p = .43)on highlighting time. The interaction was a significant factor as well (F1.8,34.2 =9.15, p< .005, ?2p = .33). Figure 9.4 depicts the average time used for highlightingone target. Simple main effect analysis showed that when using a mouse, therewas a significant effect of target size (F2,38 = 13.2, p < .001, ?2p = .41). Post-hoc analysis using Tukey?s method revealed that time spent on Small targets wassignificantly longer than that spent on Large (q3,57 = 125, p < .001) and Medium(q3,57 = 91.5, p < .005) ones, yet the difference between the time highlightingLarge and Medium targets was not significant (q3,57 = 33.4, p > .05). Similarresults were obtained when using a remote. Simple main effect analysis showedthat there was also a significant effect of size (F2,38 = 11.7, p < .001, ?2p = .38).Post-hoc analysis indicated that the difference was significant between large andsmall targets (q3,57 = 1146, p< .005), as well as between medium and small targets(q3,57 = 1135, p< .005). However it was not the case for large and medium targets(q3,57 = 11.1, p > .05).DiscussionAs mentioned previously the time spent highlighting a large or a medium targetwas not significantly different. This could be explained by the fact that to highlight719.5. Results and Discussiona large or a medium target, the highlighted area usually shrank to the same size;while for small targets participants would perform Shrink one more step, which ledto longer highlighting time.ConclusionIn conclusion the above results supported H3: highlighting with a remote wasslower than highlighting with a mouse, as well as H4: it took more time to highlightsmall targets when using a remote.Figure 9.4: Highlighting time for each device and size combination.9.5.3 Training Experience and Number of Hand UsedNine participants had used a clicker remote before. The average highlighting timefor both experienced and inexperienced participants is depicted in Figure 9.5. 
Al-though it seemed that there was a difference in interaction time, repeated measureANOVA showed otherwise: there was a significant effect only on size (F1.35,10.8 =10.1, p< .01, ?2p = .56), yet not on experience level (F1,8 = .60, p> .05, ?2p = .07).Number of hands used was not a significant factor either (F1,6 = .015, p> .05, ?2p =.003).ConclusionFrom the above results, we can conclude that H5 was supported: when using aremote to highlight a target, previous training experience did not make a difference729.5. Results and Discussionin terms of speed. However H6 was rejected: number of hand involved in theinteraction does not affect highlighting speed.Figure 9.5: Interaction time for each experience level and size combination.9.5.4 CoverageAverage Coverage values for different target sizes are depicted in Figure 9.6. Asexpected, Coverage decreased as the size of the target became smaller. A repeatedmeasure ANOVA revealed that there was a significant effect of size on Coverage(F1.33,25.3 = 40.2, p < .001, ?2p = .68). Post-hoc analysis using Tukey?s methodshowed that Coverage for each size was significantly different from one another(q3,57 = .054, p < .001 for large and medium, q3,57 = .20, p < .001 for large andsmall, q3,57 = .14, p < .001 for medium and small).ConclusionIn words, H7 was accepted: it is more difficult to perfectly highlight smaller targets.9.5.5 Per-click Time in Shrink ModeIn Shrink mode, we compared the time used for the first, second, . . . , sixth click,and generate the plot, Figure 9.7 is obtained. Repeated measure ANOVA showedthat there was a significant effect on index (F2.62,49.7 = 40.3, p < .001, ?2p = .68),yet not on size (F1.94,36.8 = 1.56, p > .05, ?2p = .076) and interaction (F3.17,60.2 =1.16, p > .05, ?2p = .057).739.5. Results and DiscussionFigure 9.6: Coverage for different target sizes.DiscussionFigure 9.7 showed that from the second click, the time it took to perform a clickincreased as the highlighted area getting smaller. This was probably because as theShrink process went on, more than one quadruple would contain part of the tar-get, in which case participants needed to determine which quadruple contained themost. However, the theory did not apply to the first click, which took much longertime than the second click. This was probably because when a new highlightingprocess started, the location of the target changed, and it took time for participantto figure it out where the new target was and how to perform the first Shrink.ConclusionThe results above somewhat supported H8 : Per-click time in Shrink mode willincrease as the Shrink process goes on.9.5.6 Results from SurveyWe now present results obtained from the survey. Participants were asked if theyknew any other way of using i>clicker besides multiple-choice question. One par-ticipant mentioned that he/she had seen i>clicker being used for roll call. Anotherparticipant mentioned using i>clicker remote to interact with a Gamelet, describedin Chapter 5. When participants were asked if they had seen/heard of/experiencedother ways of highlighting contents on a slide besides verbally mentioning its lo-cation, 2 participants mentioned using a laser pointer. Participants were then asked749.6. ConclusionFigure 9.7: Per-click time from the first to sixth click.to determine the usefulness of Separators and Labels when using the highlightingtool with five-point scales (1 = very obtrusive, 3 = neutral and 5 = very helpful),average scores of 4.35 and 4.25 were obtained for Separators and Labels respec-tively. 
It is worth noting that one participant said Labels were redundant becausehe/she was able to quickly memorize their locations in both modes after severaltrials of practicing.At the end of the survey participants provided comments about the design ofthe highlighting tool. Six participants mentioned that they were not used to thelayout of the Labels in the experiment, because the layout of buttons on the re-mote, which formed a straight line, did not correspond to the quadruple nature ofthe graphic interface. Four participants mentioned that during the Shrink mode,the position of Label C and D should be exchanged so that the four Labels couldfollow a traditional left to right, top to bottom fashion. Two participants mentionedthat an ?undo? functionality would be useful in the Shrink mode in case an erroroccurred. Two participants expected automatic Shrink technique so that users nolonger needed to go through the long Shrink process, although they did not provideany insight regarding how the highlighting tool was able to know which word theuser wanted to highlight.9.6 ConclusionIn this section we first provide a conclusive summary of results obtained from thisexperiment. Compared to a mouse, the highlighting tool was significantly slowerand more error-prone. However, we believed that in real case an error rate of759.6. Conclusionabout 5% was still acceptable when using the highlighting tool. The discrepancyof highlighting time between two device types was larger. Participants spent about16 seconds to highlight a target when using the highlighting tool, while it tookless than 2 seconds when using a mouse. When using the highlighting tool, targetsize affected the highlighting time, but not error rate. Experimental results alsoshowed that neither training experience nor number of hand affected highlightingtime. Additionally, we found that it was more difficult to highlight smaller targetsperfectly, and that it took more time for participants to make a decision of whichquadruple to choose when the highlighting area became smaller.We would like to further discuss the advantages and disadvantages of the high-lighting tool in a real world scenario. It is true that the highlighting tool has a majordisadvantage that it takes longer time than a mouse. However it is currently im-possible for a student sitting in a class to highlight a target on the big screen with amouse. The highlighting tool has another advantage over verbally mentioning thelocation of the target. In a big class, it is often very hard for students who sit at theback of the classroom to hear clearly what the question is asked by their classmatessitting at the front. The highlighting tool, however, is able to provide visual hintson the big screen to guide the audience to focus on the part that is in question onthe slide.76Chapter 10Conclusion and Future WorkThe existing i>clicker SRS adopted by many universities is an excellent platformto build applications that supports classroom interaction and collaboration, mainlybecause there is no extra hardware infrastructure cost. To make use of the existinghardware, a driver working with the i>clicker base station was developed, whichwas able to fetch all the votes collected by the base station. With other necessaryfunctionalities included, such as initializing the base station, start or stop voting,updating LCD, the driver provided a complete yet clean interface for developers tobuild various applications. 
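As an illustration of what such a driver interface might look like, the following Java sketch covers the operations listed above (initialization, starting and stopping voting, fetching votes, and updating the LCD). This is a hedged sketch, not the actual API developed in Chapter 2; all names and signatures here are invented for this example.

import java.util.List;

/** Hypothetical sketch of a clean driver interface to the i>clicker base station. */
public interface BaseStationDriver {

    /** A single vote: which remote sent it and which key (A-E) was pressed. */
    class Vote {
        public final String clickerId;
        public final char choice;
        public Vote(String clickerId, char choice) {
            this.clickerId = clickerId;
            this.choice = choice;
        }
    }

    void initialize(String frequencyCode, String instructorId);  // set base frequency and instructor remote

    void startVoting();                                           // put the base station in "accepting" mode

    void stopVoting();                                            // put the base station in "not-accepting" mode

    List<Vote> fetchNewVotes();                                   // poll for votes received since the last call

    void updateLcd(String line1, String line2);                   // write to the base station's two-line LCD
}

An application such as Clicker++ or Clic^in would then call startVoting(), repeatedly call fetchNewVotes() while a question or gamelet step is open, and call stopVoting() when it closes.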
WebClicker moves the traditional voting functionalityto the cloud, and adds support to various digital devices using different services.In this thesis, three applications were presented that support classroom interac-tion and collaboration in different ways. Clicker++ is a reproduced version of thei>clicker software, which introduces several new features, such as visualizing votesby lab group, student-oriented (instead of remote-oriented) identification, partici-pation bar and performance bar. Clic^in is a pedagogical application that provides away for students to practice newly-acquired knowledge right in the class, with bothwhole-class participation and group competition supported. Finally, we proposeda solution that enables a user to highlight contents on the slide using a remote.Two laboratory studies were carried out and their results were discussed. Thefirst study investigated the cognitive load of interacting with a gamelet using aremote. The result was compared with the ones using verbal commands, a mouseand a remote with color-coded interface respectively. Although it was found thata remote was significantly slower and more error-prone than a mouse, we arguedthat the discrepancy was acceptable in a real world scenario. It was also foundthat both interaction speed and accuracy, could be improved over time. The secondstudy evaluates the usability of content highlighting feature of the Selection Tool,compared with a mouse. Experimental results revealed the same results. Using aremote was significantly slower and more error-prone than using a mouse. Also,when using a remote, size of the target affected interaction time, but no for error-rate.There is considerable potential for future work that might be valuable.First, the applications built based on the base station driver were just examples.More can be designed and implemented based on specific user scenarios or usabil-77Chapter 10. Conclusion and Future Workity problem discovered in a real class so that (1) it saves time for both instructorsand students to complete a task which can be very tedious (roll call, for example),and (2) communication between instructors and students, instructors and teachingassistants, and among students can be more efficient and effective.Second, both quantitative and qualitative studies can be carried out in a class-room that utilizes WebClicker to compare the usability of different digital devicesin a classroom setting. Devices discussed in this thesis, e.g. laptops, tablets, smartphones and cell phones are inherently different in terms of size, weight and affor-dance, and it would be valuable to examine the effects of these differences on howstudents perform interaction in a classroom setting.Third, currently Clic^in is designed for a computer science course. It is rela-tively easy for instructors or teaching assistants having programming backgroundto write customized gamelets. However, this would be difficult for faculty membersfrom other departments who do not have programming experience. One promis-ing future direction might be investigating the probability of developing a gameletauthoring tool that can help instructors who do not have programming backgroundto easily generate gamelet that is customized to their field. A good example isE-Prime [4], a psychology software tool that supports a drag and drop graphicalinterface for designing psychological experiment. 
We believe this improvementwill remove the barrier between Clic^in and many potential users and will makethe application more accessible and user friendly.78Bibliography[1] Avrisp mkii. Online at atmel.ca/tools/AVRISPMKII.aspx, accessed inJul-2012.[2] Meridia audience response. Online at meridiaars.com, accessed in Sep-2013.[3] Poll everywhere, live audience participation. Online at polleverywhere.com, accessed in Sep-2013.[4] E-prime 2. Online at pstnet.com/eprime.cfm, accessed in Sep-2013.[5] i>clicker. Online at iclicker.com, accessed in Jun-2012.[6] irespond. Online at irespond.com, accessed in Sep-2013.[7] amazon web services, 2012. Retrieved from aws.amazon.com.[8] Atmel studio 6, 2012. Retrieved from atmel.ca/Microsite/atmel_studio6.[9] Hid api for linux, mac os x, and windows, 2012. Retrieved from signal11.us/oss/hidapi.[10] javahidapi - java api for working with human interface usb devices (hid),2012. Retrieved from code.google.com/p/javahidapi.[11] The kurogo mobile platform: Empowering your mobile future, 2012. Re-trieved from kurogo.org.[12] Twilio, 2012. Retrieved from twilio.com.[13] Processing, 2012. Retrieved from processing.org.[14] Usblyzer, 2012. Retrieved from usblyzer.com.[15] D. Bertsekas and R. Gallager. Data Networks. Prentice Hall, Upper SaddleRiver, New Jersey, 1992.79Bibliography[16] E. Blood. Effects of student response systems on participation and learningof students with emotional and behavioral disorders. Behavioral Disorders,35(3):214?228, 2010.[17] K. Crossgrove and K. L. Curran. Using clickers in nonmajors- and majors-level biology courses: student opinion, learning, and long-term retention ofcourse material. Life Sciences Education, 7(1):146?154, 2008.[18] R. Cummings and M. Hsu. The effects of student response systems on per-formance and satisfaction: an investigation in a tax accounting class. Journalof College Teaching and Learning, 4(12):21?26, 2007.[19] N. de Freitas. Bayesian learning, 2013. Retrieved from cs.ubc.ca/~nando/540-2013/lectures/l5.pdf.[20] P. M. Fitts. The information capacity of the human motor system in control-ling the amplitude of movement. Journal of Experimental Psychology, 47(6):381?391, 1954.[21] J. G. Fletcher. An arithmetic checksum for serial transmissions. IEEE Trans-actions on Communications, 30(1):247?252, 1982.[22] S. A. Gauci, A. M. Dantas, D. A. Williams, and R. E. Kemm. Promotingstudent-centered active learning in lectures with a personal response system.Advances in Physiology Education, 33:60?71, 2009.[23] D. Gourlay, Y. L. Sit, Y. Sunarto, and T. Wang. Security analysis of thei>clicker audience response system, 2010.[24] L. Greer and P. J. Heaney. Real-time analysis of student comprehension: anassessment of electronic student response technology in an introductory earthscience course. Journal of Geoscience Education, 52(4):345?351, 2004.[25] R. H. Hall, H. L. Collier, Marcie L. Thomas, and M. G. Hilgers. A studentresponse system for increasing engagement, motivation, and learning in highenrollment lectures. In Proceedings of the Americas Conference on Informa-tion Systems, pages 621?626, 2005. URL .[26] R. W. Hamming. Error detecting and error correcting codes. Bell Systemtechnical journal, 29(2):147?160, 1950.[27] E. Judson and a. D. Sawada. Learning from past and present: electronic re-sponse systems in college lecture halls. Journal of Computers in Mathematicsand Science Teaching, 21(2):167?181, 2002.80Bibliography[28] A. Kristine. Is it the clicker, or is it the question? 
untangling the effects ofstudent response system use. Teaching of Psychology, 38(3):189?193, 2011.[29] J. Lanir, K. S. Booth, and A. Tang. Multipresenter: a presentation system for(very) large display surfaces. In Proceedings of the 16th ACM internationalconference on Multimedia, pages 519?528, 2008. URL .[30] P. M. Len. Different reward structures to motivate student interaction withelectronic response systems in astronomy. Astronomy Education Review,5(2):5?15, 2007.[31] J. M. Mula and M. Kavanagh. Click go the students, click-click-click: theefficacy of a student response system for engaging students to improve feed-back and performance. e-Journal of Business Education and Scholarship ofTeaching, 3(1):1?17, 2009.[32] L. Nayak and J. P. Erinjeri. Audience response systems in medical studenteducation benefit learners and presenters. Academic Radiology, 15(3):383?389, 2008.[33] S. Newson. Clic^in: A large-lecture participation sys-tem, 2012. Retrieved from bitbucket.org/sgnewson/clic-in-a-large-lecture-participation-system/downloads/ClicIn_Final_Report_2012.pdf.[34] D. A. Norman. The Design of Everyday Things. Basic Books, New York,2002.[35] M. O?Donoghue and B. O?Steen. Clicking on or off? lecturers?rationale forusing student response systems. Providing choices for learners and learning- Proceedings Ascilite Singapore, pages 771?779, 2007.[36] W. R. Penuel, C. K. Boscardin, K. Masyn, and V. M. Crawford. Teaching withstudent response systems in elementary and secondary education settings: asurvey study. Educational Technology Research and Development, 55(4):315?346, 2007.[37] R. W. Preszler, A. Dawe, C. Shuster, and M. Shuster. Assessment of theeffects of student response systems on student learning and attitudes over abroad range of biology courses. Life Science Education, 6:29?41, 2007.[38] R. Rivest. The md5 message-digest algorithm, 1992. Online at tools.ietf.org/html/rfc1321, accessed in Sep-2013.81Bibliography[39] J. R. Stowell and J. M. Nelson. Benefits of electronic audience response sys-tems on student participation, learning, and emotion. Teaching of Psychology,34(4):253?258, 2007. doi: 10.1080/00986280701700391.[40] A. R. Trees and M. H. Jackson. The learning environment in clicker class-rooms: student processes of learning and involvement in large university-level courses using student response systems. 2007, 32(1):21?40, 2007.82Appendices83Appendix ACommunication ProtocolBetween i>clicker Base Station(Old) and PCPackets sent between the base station and the PC can be described as an orthogonalcombination of two attributes: function group and role. There are a total of fivefunction groups: initializing base station, starting voting, requesting vote, stoppingvoting and updating LCD. There are three roles: A PC Command (PCC) is sentby the PC to the base station; a Base Station Acknowledgement (BSA) confirmsto the base station that a PC command has been received; a Base Station Response(BSR) conveys useful information back to the base station about a PC command,e.g. the votes that the base station has collected. Table A.1 shows which packetsexist for each function and role combination.RoleFunction GroupPC Command(PCC)Base Station Ac-knowledgement(BSA)Base StationResponse (BSR)Initializing BaseStationY Y NStarting Voting Y Y NRequesting Vote Y Y YStopping Voting Y Y YUpdating LCD Y N NTable A.1: The Classification of packets for the old base station. 
Each packet hasa role (PCC, BSA, or BSR) as indicated in the columns, and a function group, asindicated in the rows.Each packet is described in detail below according its function group, on func-tion group per sub-section. The role of the packet is indicated in the table caption,with a number differentiating among packets belonging to the same role group (e.g.84A.1. Initializing Base StationPCC 1, PCC 2, etc.). All table cells, except the last row, constitute the data area,with hex numbers representing the data in the packet. Sometimes labels are usedfor the data (e.g. InstructorID), which are described in the last row of the table,which is a a brief comment is either explaining what a label means or providingextra information to help readers to understand the packet. PC Commands are la-beled in sequence, e.g. PCC 1, PCC 2, etc. Base Station Acknowledgement andBase Station Response are labeled according to the corresponding PC Command,e.g. BSA 5 and BSR 5 are the acknowledgement and response packets, respec-tively, for PCC 5.The protocol for the old base station was determined by reverse engineering,which involved using the base station with the vendor-provided software and ?sniff-ing? the protocol to determine the data that was being transmitted between the basestation and the PC on the USB port. The descriptions that follow are the result ofthat exercise. There is no guarantee that the list of commands is complete (thelist includes all of the commands that we observed) nor that the interpretation ofthe commands is accurate (it is consistent with the understanding we developedthrough the reverse engineering exercise, but it may be incomplete in details).Moreover, because we switched out development to the new base station whenit became available, we did not fully explore the protocol for the old base stationbehond what we needed for our driver.A.1 Initializing Base StationThe following packets are transmitted when the PC initializes the base station.Byte 0 Byte 1 Byte 2 Byte 30x01 0x10 BF1 BF2. . . (15 words (60 bytes), omitted, all being 0x00) . . .This packet is sent every time PC initializes the base station. Order of BF1 andBF2 cannot be changed.BF1: Base frequency 1, ranging from 0x21 to 0x24.BF2: Base frequency 2, ranging from 0x41 to 0x44.Table A.2: PCC 1 - Set the frequency of the base station85A.1. Initializing Base StationByte 0 Byte 1 Byte 2 Byte 30x01 0x17 0x06 InstructorID0InstructorID1 2 0x00 0x00. . . (14 words (56 bytes), omitted, all being 0x00) . . .This packet is sent every time PC initializes the base station, if instructor?sclicker ID is provided.InstructorID: Instructor?s clicker ID, first six characters only. Only one remotecan be set as an instructor?s clicker, which can be any remote (doesn?t have tobe the blue one).Table A.3: PCC 2 - Set instructor?s remote IDByte 0 Byte 1 Byte 2 Byte 30x01 0x16 0x00 0x00. . . (15 words (60 bytes), omitted, all being 0x00) . . .This packet is sent every time PC initializes the base station.This packet tells the base station to switch to ?not-accepting? mode.Table A.4: PCC 3Byte 0 Byte 1 Byte 2 Byte 30x01 0x16 0xAA 0x00. . . (15 words (60 bytes), omitted, all being 0x00) . . .This packet is sent as an acknowledgement of PCC 3, only if the base stationreceives PCC 2.Table A.5: BSA 386A.2. Starting VotingByte 0 Byte 1 Byte 2 Byte 30x01 0x17 0x03 0x00. . . (15 words (60 bytes), omitted, all being 0x00) . . .This packet is sent every time PC initializes the base station.Table A.6: PCC 4Byte 0 Byte 1 Byte 2 Byte 30x01 0x17 0xAA 0x00. . 
A.2 Starting Voting

The following packets are transmitted when the PC starts voting.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x05            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC starts voting. It resets the base station.

Table A.8: PCC 5

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0xAA            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent as an acknowledgement of PCC 5.

Table A.9: BSA 5

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x11            0x00            0x05
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC starts voting. It tells the base station to switch to "accepting" mode.

Table A.10: PCC 6

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x11            0xAA            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent as an acknowledgement of PCC 6.

Table A.11: BSA 6

A.3 Requesting Vote

The following packets are transmitted when the PC requests fresh votes.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x01            LV
  0x04            0x00            0x00            0x00
  ... (14 words (56 bytes), omitted, all being 0x00) ...

This packet asks the base station to send fresh votes, as long as the connection to the base station is open. It is sent every 0.1 second in the i>clicker software.
LV: Index of the last received vote. It tells the base station that the PC has received all votes up to LV, and that votes LV+1, LV+2, etc., if any, should now be sent to the PC. If LV is zero, the PC has not yet received any vote. LV is only one byte, so its value ranges from 0 to 255. However, we found that the base station can send at most 56 votes in one response set.

Table A.12: PCC 7 - Request new votes from the base station

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0xAA            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent as an acknowledgement of PCC 7.

Table A.13: BSA 7

  Byte 0          Byte 1          Byte 2          Byte 3
  0x02            0x18            LastFlag        VoteLeft
  LVL             LVC             VoteCurrent     Choice1
  ClickerID1                                      Choice2
  ClickerID2                                      Choice3
  ClickerID3                                      ...
  ... (11 words (44 bytes), omitted) ...          CheckSum

This packet is sent after BSA 7, as a follow-up response to PCC 7. It contains all the fresh votes.
LastFlag: A flag indicating whether this is the last response packet of the current response set. One BSR 7 packet can hold a maximum of 14 votes. If more than 14 new votes come in between two adjacent polls (PCC 7 packets), more than one BSR 7 packet is generated in order to send all the new votes. A LastFlag of 0x01 tells the PC that this packet is not the last packet of the current response set; a LastFlag of 0x00 indicates that it is the last one. For example, if 38 new votes come in between two adjacent polls, the base station will generate three BSR 7 packets, with LastFlag being 0x01, 0x01 and 0x00, holding 14, 14 and 10 votes respectively.
VoteLeft: Total number of votes left in this response set, namely the total number of new votes that have been received by the base station but not yet sent to the PC. In the above example, VoteLeft will be 38 for the first packet, 24 for the second and 10 for the last.
LVL: The largest vote index in the last packet. LVL+1 is the index of the first vote in the current packet.
LVC: The largest vote index in the current packet.
VoteCurrent: Total number of votes in the current packet; VoteCurrent = LVC - LVL.
Choice{1,2,...,14}: Choice of the {first, second, ..., fourteenth} vote in the current packet: 0x61 for A, 0x62 for B, 0x63 for C, 0x64 for D and 0x65 for E.
ClickerID{1,2,...,14}: Clicker ID of the {first, second, ..., fourteenth} vote in the current packet, first six characters only.
CheckSum: Checksum of the current packet.

Table A.14: BSR 7 - New votes from the base station
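The per-vote layout of BSR 7 translates into code fairly directly. The sketch below is illustrative only: it assumes the 64-byte packet has already been read from the USB device, it assumes each clicker ID occupies the three bytes following its choice byte (six hex characters, which is consistent with the 14-votes-per-packet arithmetic but is an inference rather than something stated by the vendor), and it does not verify the CheckSum byte, whose algorithm is not described here. Names are hypothetical.

import java.util.ArrayList;
import java.util.List;

public final class OldBsr7Parser {

    /** One decoded vote: the chosen answer (A-E) and the six-character clicker ID. */
    public static final class Vote {
        public final char choice;
        public final String clickerId;
        Vote(char choice, String clickerId) { this.choice = choice; this.clickerId = clickerId; }
    }

    public static List<Vote> parse(byte[] packet) {
        if (packet.length < 64 || packet[0] != 0x02 || packet[1] != 0x18) {
            throw new IllegalArgumentException("not a BSR 7 packet");
        }
        int voteCount = packet[6] & 0xFF;                  // VoteCurrent = LVC - LVL
        List<Vote> votes = new ArrayList<Vote>();
        int offset = 7;                                    // Choice1 immediately follows the header bytes
        for (int i = 0; i < voteCount && i < 14; i++) {    // at most 14 votes per packet
            char choice = (char) ('A' + (packet[offset] & 0xFF) - 0x61);   // 0x61..0x65 -> 'A'..'E'
            String id = String.format("%02X%02X%02X",      // assumed: ID characters are the hex of these bytes
                    packet[offset + 1] & 0xFF, packet[offset + 2] & 0xFF, packet[offset + 3] & 0xFF);
            votes.add(new Vote(choice, id));
            offset += 4;                                   // one choice byte + three clicker-ID bytes
        }
        return votes;
    }
}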
A.4 Stopping Voting

The following packets are transmitted when the PC stops voting.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x16            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC stops voting. It tells the base station to switch to "not-accepting" mode.

Table A.15: PCC 8

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x16            0xAA            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent as an acknowledgement of PCC 8.

Table A.16: BSA 8

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x04            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC stops voting. It requests the summary of the current voting session.

Table A.17: PCC 9

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0xAA            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent as an acknowledgement of PCC 9.

Table A.18: BSA 9

  Byte 0          Byte 1          Byte 2          Byte 3
  0x02            0x18            0x16            0x00
  ?               0x00            ?               0x00
  0x00            0x00            ?               0x00
  TotalVote       0x00            0x01            0x00
  0x16            0x00            TotalVote       0x00
  InstructorID                                    0x00
  0x01            0xFA            0x00            0x00
  ... (9 words (36 bytes), omitted, all being 0x00) ...

This packet is sent after BSA 9, as a follow-up response to PCC 9. It provides a summary of the current voting session.
TotalVote: Total number of votes. Every vote received by the base station counts, no matter whether the person who contributed it had voted earlier in the voting session, and no matter whether it came from an instructor or a student.
InstructorID: Instructor's clicker ID, first six characters only.

Table A.19: BSR 9 - Vote count summary from base station

A.5 Updating LCD

The following packets are transmitted when the content displayed on the LCD is updated.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x13            C0              C1
  C2              C3              C4              C5
  C6              C7              C8              C9
  C10             C11             C12             C13
  C14             C15             0x00            0x00
  ... (11 words (44 bytes), omitted, all being 0x00) ...

This packet is sent every time the first line of the base station LCD needs to be updated.
Cx: ASCII code of a character. The first line of the base station LCD can hold a maximum of 16 characters.

Table A.20: PCC 10 - Set first line of base station LCD

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x14            C0              C1
  C2              C3              C4              C5
  C6              C7              C8              C9
  C10             C11             C12             C13
  C14             C15             0x00            0x00
  ... (11 words (44 bytes), omitted, all being 0x00) ...

This packet is sent every time the second line of the base station LCD needs to be updated.
Cx: ASCII code of a character. The second line of the base station LCD can hold a maximum of 16 characters.

Table A.21: PCC 11 - Set second line of base station LCD
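The two LCD packets differ only in the second byte, so a single helper can cover both. The following is a minimal, hypothetical Java sketch based on Tables A.20 and A.21. The tables only state the 16-character maximum; padding a shorter string with spaces (rather than 0x00) is an assumption made here for illustration. The same layout is used by PCC 22 and PCC 23 for the new base station (Appendix B).

// Hypothetical helper: build PCC 10 (line == 1) or PCC 11 (line == 2).
public final class LcdPackets {

    public static byte[] setLcdLine(int line, String text) {
        if (line != 1 && line != 2) {
            throw new IllegalArgumentException("line must be 1 or 2");
        }
        byte[] packet = new byte[64];
        packet[0] = 0x01;
        packet[1] = (byte) (line == 1 ? 0x13 : 0x14);          // 0x13 = first line, 0x14 = second line
        for (int i = 0; i < 16; i++) {                          // each LCD line holds at most 16 characters
            char c = i < text.length() ? text.charAt(i) : ' ';  // space padding is an assumption
            packet[2 + i] = (byte) c;                           // Cx: ASCII code of the character
        }
        return packet;
    }
}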
Appendix B

Communication Protocol Between i>clicker Base Station (New) and PC

The protocol for the new base station is described in a manner similar to how the protocol for the old base station was described. Table B.1, a classification of packets based on function group and role group, is provided for ease of reference, although it is exactly the same as Table A.1.

  Function Group               PC Command (PCC)   Base Station Acknowledgement (BSA)   Base Station Response (BSR)
  Initializing Base Station    Y                  Y                                    N
  Starting Voting              Y                  Y                                    N
  Requesting Vote              Y                  Y                                    Y
  Stopping Voting              Y                  Y                                    Y
  Updating LCD                 Y                  N                                    N

Table B.1: The classification of packets for the new base station. Each packet has a role (PCC, BSA, or BSR), as indicated in the columns, and a function group, as indicated in the rows.

The protocol for the new base station is more complex. As was shown in Appendix A, the format of all the Base Station Acknowledgement (BSA) packets follows a pattern: the first two bytes are always the same as the first two bytes of the corresponding PC Command (PCC) packet, the third byte is always 0xAA, and the rest of the bytes are all 0x00. Thus one can tell what a BSA packet looks like given the description of its corresponding PCC packet. For the sake of conciseness, all BSA packets are therefore omitted from the protocol description for the new base station, except for the first example (BSA 1, the acknowledgement of PCC 1); instead, the comment area of the corresponding PCC packet indicates the BSA that is expected.

The protocol for the new base station was determined by reverse engineering, which involved using the base station with the vendor-provided software and "sniffing" the protocol to determine the data that was being transmitted between the base station and the PC on the USB port. The descriptions that follow are the result of that exercise. There is no guarantee that the list of commands is complete (the list includes all of the commands that we observed) nor that the interpretation of the commands is accurate (it is consistent with the understanding we developed through the reverse engineering exercise, but it may be incomplete in details).

B.1 Initializing Base Station

The following packets are transmitted when the PC initializes the base station.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x10            BF1             BF2
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station. The order of BF1 and BF2 cannot be changed.
BF1: Base frequency 1, ranging from 0x21 to 0x24.
BF2: Base frequency 2, ranging from 0x41 to 0x44.

Table B.2: PCC 1

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x10            0xAA            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This acknowledgement packet is sent to the PC every time a PCC 1 is received by the base station.

Table B.3: BSA 1

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x2A            BF1             BF2
  0x05            0x00            0x00            0x00
  ... (14 words (56 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station.
BF1: Base frequency 1, ranging from 0x21 to 0x24.
BF2: Base frequency 2, ranging from 0x41 to 0x44.

Table B.4: PCC 2

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x12            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station. An acknowledgement packet BSA 3 is returned.

Table B.5: PCC 3

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x16            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station. An acknowledgement packet BSA 4 is returned.

Table B.6: PCC 4
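Because every acknowledgement follows the pattern described above (the first two bytes echoed from the PCC, 0xAA in the third byte), a driver can check acknowledgements generically rather than table by table. The helper below is a minimal sketch with a hypothetical name; it checks only the three bytes that the pattern specifies and is not taken from the driver code described in Chapter 2.

// Hypothetical helper: does 'response' look like the BSA for the given PC Command?
public final class AckCheck {

    public static boolean isAckFor(byte[] command, byte[] response) {
        return response != null && response.length >= 3
                && response[0] == command[0]        // BSA echoes the first byte of the PCC
                && response[1] == command[1]        // ... and the second byte
                && (response[2] & 0xFF) == 0xAA;    // third byte is always 0xAA
    }
}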
  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x1E            InstructorID0   InstructorID1
  InstructorID2   0x00            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station, if the instructor's clicker ID is provided.
InstructorID: Instructor's clicker ID, first six characters only. Only one remote can be set as an instructor's clicker, which can be any remote (it does not have to be the blue one).
An acknowledgement packet BSA 5 is returned.

Table B.7: PCC 5 - Set instructor's remote ID

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x15            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station.

Table B.8: PCC 6

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x15            V1              V2
  BF1             BF2             0x01            0x02
  0x66            0x00            0x00            0x00
  ... (13 words (52 bytes), omitted, all being 0x00) ...

This packet is sent as a response to PCC 6. It contains base station information, such as the firmware version and the frequency in use.
V1: Version number 1, the major version of the base station; 02 for the old one and 04 for the new one.
V2: Version number 2, the minor version of the base station; 03 for the old one and 05 for the new one.
BF1: Base frequency 1 currently used by the base station, ranging from 0x21 to 0x24.
BF2: Base frequency 2 currently used by the base station, ranging from 0x41 to 0x44.

Table B.9: BSR 6 - Base station firmware and frequency

  Byte 0          Byte 1          Byte 2          Byte 3
  0x02            0x2C            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet does not appear to be a response to any particular command. Sometimes it appears at this point in the sequence; at other times it appears after PCC 2, BSA 6, or BSR 7.

Table B.10: BSR X

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x2D            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station. An acknowledgement packet BSA 7 is returned.

Table B.11: PCC 7

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x29            0xA1            0x8F
  0x96            0x8D            0x99            0x97
  0x8F            0x80            0x00            0x00
  ... (13 words (52 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station. An acknowledgement packet BSA 8 is returned.

Table B.12: PCC 8

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x04            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station. An acknowledgement packet BSA 9 is returned.

Table B.13: PCC 9

  Byte 0          Byte 1          Byte 2          Byte 3
  0x02            0x18            0x1A            TotalVote0
  TotalVote1      0x00            0x00            0x00
  0x00            TotalVote       TotalVote0
  TotalVote1      0x00            0x00            0x00
  0x00            0x00            0x00            0x01
  InstructorID                                    0x00
  0x01            0x00            V1              V2
  ... (9 words (36 bytes), omitted, all being 0x00) ...

This packet is sent as a response to PCC 9. It provides a summary of the previous voting session.
TotalVote: Total number of votes received for the previous question. Every vote received by the base station counts, no matter whether the person who contributed it had voted earlier in the voting session, and no matter whether it came from an instructor or a student. This field occupies two bytes.
InstructorID: Instructor's clicker ID, first six characters only.
V1: Version number 1, the major version of the base station; 02 for the old one and 04 for the new one.
V2: Version number 2, the minor version of the base station; 03 for the old one and 05 for the new one.

Table B.14: BSR 9 - Summary of the previous voting session

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x03            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station. An acknowledgement packet BSA 10 is returned.

Table B.15: PCC 10

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x16            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC initializes the base station. An acknowledgement packet BSA 11 is returned.

Table B.16: PCC 11
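Most of these initialization responses can simply be consumed, but BSR 6 (Table B.9) reports the base station's firmware version and the frequency pair currently in use. As an illustration of reading fields out of one of these responses, here is a minimal sketch that assumes the 64-byte packet has already been read from the device; the class and method names are hypothetical.

// Hypothetical helper: decode the BSR 6 packet of Table B.9.
public final class BaseStationInfo {
    public final int majorVersion;   // V1: 02 for the old base station, 04 for the new one
    public final int minorVersion;   // V2: 03 for the old base station, 05 for the new one
    public final byte bf1;           // base frequency 1 currently in use (0x21-0x24)
    public final byte bf2;           // base frequency 2 currently in use (0x41-0x44)

    private BaseStationInfo(int v1, int v2, byte bf1, byte bf2) {
        this.majorVersion = v1;
        this.minorVersion = v2;
        this.bf1 = bf1;
        this.bf2 = bf2;
    }

    public static BaseStationInfo fromBsr6(byte[] packet) {
        if (packet.length < 6 || packet[0] != 0x01 || packet[1] != 0x15) {
            throw new IllegalArgumentException("not a BSR 6 packet");
        }
        return new BaseStationInfo(packet[2] & 0xFF, packet[3] & 0xFF, packet[4], packet[5]);
    }
}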
B.2 Starting Voting

The following packets are transmitted when the PC starts voting.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x19            0x66            0x0A
  0x01            0x00            0x00            0x00
  ... (14 words (56 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC starts voting. An acknowledgement packet BSA 12 is returned.

Table B.17: PCC 12

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x03            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC starts voting. An acknowledgement packet BSA 13 is returned.

Table B.18: PCC 13

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x05            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC starts voting. An acknowledgement packet BSA 14 is returned.

Table B.19: PCC 14

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x19            0x66            0x0A
  0x01            0x00            0x00            0x00
  ... (14 words (56 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC starts voting. An acknowledgement packet BSA 15 is returned.

Table B.20: PCC 15

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x11            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC starts voting. An acknowledgement packet BSA 16 is returned.

Table B.21: PCC 16

B.3 Requesting Vote

The following packets are transmitted when the PC requests fresh votes.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x11            0xAA            0x00
  ... (7 words (28 bytes), omitted, all being 0x00) ...
  0x02            0x13            Choice          ClickerID
  ClickerID       VoteIndex       0x00
  ... (6 words (24 bytes), omitted, all being 0x00) ...

Once the base station receives fresh votes, it automatically sends them to the PC. This packet is sent when the vote it contains is the first vote in the current voting session.
Choice: Choice of the vote in the current packet: 0x81 for A, 0x82 for B, 0x83 for C, 0x84 for D and 0x85 for E.
ClickerID: Clicker ID of the vote in the current packet, first six characters only.
VoteIndex: Index of the vote in the current packet. When a new question starts, the index continues instead of starting from zero.

Table B.22: BSR Y - First new vote from the base station

  Byte 0          Byte 1          Byte 2          Byte 3
  0x02            0x13            Choice1         ClickerID1
  ClickerID1      VoteIndex1      0x00
  ... (6 words (24 bytes), omitted) ...
  0x02            0x13            Choice2         ClickerID2
  ClickerID2      VoteIndex2      0x00
  ... (6 words (24 bytes), omitted) ...

Once the base station receives fresh votes, it automatically sends them to the PC. One packet contains exactly two votes, the current one and the previous one (except for the first packet, which contains only one vote). The vote with the larger index value is the current vote.
Choice{1,2}: Choice of the {first, second} vote in the current packet. Refer to BSR Y for the choice format.
ClickerID{1,2}: Clicker ID of the {first, second} vote in the current packet, first six characters only.
VoteIndex{1,2}: Index of the {first, second} vote in the current packet. When a new question starts, the index continues instead of starting from zero.

Table B.23: BSR Z - Follow-up new vote from the base station
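Each vote in BSR Y and BSR Z occupies one 8-byte record. The sketch below decodes a single record; it is illustrative only. The record layout [0x02, 0x13, Choice, ID, ID, ID, VoteIndex, 0x00] follows Tables B.22 and B.23, but the three-byte width of the clicker ID (six hex characters) is our inference rather than something stated by the vendor. In a BSR Z the two records begin at offsets 0 and 32; in a BSR Y the single record begins at offset 32.

// Hypothetical helper: decode one vote record from a BSR Y or BSR Z packet.
public final class NewVoteRecord {
    public final char choice;        // 'A'..'E'
    public final String clickerId;   // first six characters of the clicker ID
    public final int voteIndex;      // continues across questions rather than restarting at zero

    private NewVoteRecord(char choice, String clickerId, int voteIndex) {
        this.choice = choice;
        this.clickerId = clickerId;
        this.voteIndex = voteIndex;
    }

    public static NewVoteRecord decode(byte[] packet, int offset) {
        if (packet[offset] != 0x02 || packet[offset + 1] != 0x13) {
            throw new IllegalArgumentException("no vote record at offset " + offset);
        }
        char choice = (char) ('A' + (packet[offset + 2] & 0xFF) - 0x81);   // 0x81..0x85 -> 'A'..'E'
        String id = String.format("%02X%02X%02X",                          // assumed hex rendering of the ID bytes
                packet[offset + 3] & 0xFF, packet[offset + 4] & 0xFF, packet[offset + 5] & 0xFF);
        int index = packet[offset + 6] & 0xFF;
        return new NewVoteRecord(choice, id, index);
    }
}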
B.4 Stopping Voting

The following packets are transmitted when the PC stops voting.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x12            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC stops voting.

Table B.24: PCC 17

  Byte 0          Byte 1          Byte 2          Byte 3
  0x02            0x12            0xAA            0x00
  ... (7 words (28 bytes), omitted, all being 0x00) ...
  0x02            0x13            Choice          ClickerID
  ClickerID       VoteIndex       0x00
  ... (6 words (24 bytes), omitted, all being 0x00) ...

This packet is sent as a response to PCC 17.
Choice: Choice of the last vote received.
ClickerID: Clicker ID of the vote in the current packet, first six characters only.
VoteIndex: Vote index of the last vote received.

Table B.25: BSR 17

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x16            0x00            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC stops voting. An acknowledgement packet BSA 18 is returned.

Table B.26: PCC 18

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x01            0x00
  0x04            0x00            0x00            0x00
  ... (14 words (56 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC stops voting. An acknowledgement packet BSA 19 is returned.

Table B.27: PCC 19

A second PCC 19 is sent at this point.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x03            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC stops voting. An acknowledgement packet BSA 20 is returned.

Table B.28: PCC 20

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x17            0x04            0x00
  ... (15 words (60 bytes), omitted, all being 0x00) ...

This packet is sent every time the PC stops voting. An acknowledgement packet BSA 21 is returned.

Table B.29: PCC 21

  Byte 0          Byte 1          Byte 2          Byte 3
  0x02            0x18            0x1A            TotalVote0
  TotalVote1      0x00            0x00            0x00
  0x00            TotalVote       TotalVote0
  TotalVote1      0x00            0x00            0x00
  0x00            0x00            0x00            0x00
  InstructorID                                    0x00
  0x01            0x00            V1              V2
  ... (9 words (36 bytes), omitted, all being 0x00) ...

This packet is sent as a response to PCC 21. It provides a summary of the current voting session.
TotalVote: Total number of votes received for the current question. Every vote received by the base station counts, no matter whether the person who contributed it had voted earlier in the voting session, and no matter whether it came from an instructor or a student. This field occupies two bytes.
InstructorID: Instructor's clicker ID, first six characters only.
V1: Version number 1, the major version of the base station; 02 for the old one and 04 for the new one.
V2: Version number 2, the minor version of the base station; 03 for the old one and 05 for the new one.

Table B.30: BSR 21 - Vote count summary from base station

B.5 Updating LCD

The following packets are transmitted when the content displayed on the LCD is updated.

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x13            C0              C1
  C2              C3              C4              C5
  C6              C7              C8              C9
  C10             C11             C12             C13
  C14             C15             0x00            0x00
  ... (11 words (44 bytes), omitted, all being 0x00) ...

This packet is sent every time the first line of the base station LCD needs to be updated.
Cx: ASCII code of a character. The first line of the base station LCD can hold a maximum of 16 characters.

Table B.31: PCC 22 - Set first line of base station LCD

  Byte 0          Byte 1          Byte 2          Byte 3
  0x01            0x14            C0              C1
  C2              C3              C4              C5
  C6              C7              C8              C9
  C10             C11             C12             C13
  C14             C15             0x00            0x00
  ... (11 words (44 bytes), omitted, all being 0x00) ...

This packet is sent every time the second line of the base station LCD needs to be updated.
Cx: ASCII code of a character. The second line of the base station LCD can hold a maximum of 16 characters.

Table B.32: PCC 23 - Set second line of base station LCD
Appendix C

Participant Survey: Comparison of the Cognitive Load of Different Gamelet Interaction Techniques

Each participant in the laboratory study described in Chapter 7 was asked to complete the following survey questionnaire, which was designed to elicit information about the participant's experience using the i>clicker system and the cognitive load of the interaction techniques tested in the study.

Participant Survey: Comparison of the Cognitive Load of Different Gamelet Interaction Techniques

Part I (Background Information):

1. Gender
   Male / Female

2. Please specify if you are:
   in High School / 1st Year Undergrad / 2nd Year Undergrad / 3rd Year Undergrad / 4th Year Undergrad / 5th or Higher Undergrad / Master / PhD / Other, please specify: ____________

3. If you are receiving post-secondary education, please specify your major area.
   Applied Science / Arts / Commerce and Business Administration / Dentistry / Education / Forestry / Land and Food Systems / Law / Medicine / Science / Other, please specify: ____________

Part II (Survey on i>clicker):

In this part, you will be asked some questions about how the i>clicker system was used in your past education experience.

4. Have you ever used i>clicker before?
   Yes / No

5. How many courses did you take for Winter Term 1, 2012? ____________
   Among these courses, how many use i>clicker? ____________

6. How many courses did you take for Winter Term 2, 2012? ____________
   Among these courses, how many use i>clicker? ____________

Part III (General Opinion towards Gamelet):

In this part, you will be asked some general questions about Gamelet.

7. Have you ever seen, heard of, or experienced other ways of using i>clickers in the class besides the classic multiple-choice question?
   Yes / No
   If Yes, please specify: ____________

8. Assume that you are a student in a class where the instructor is explaining binary search trees. To be specific, the instructor is talking about how to insert a new node into the current binary search tree. Please indicate your attitude toward the instructor using Gamelet in the class while explaining the idea.
   Strongly Negative / Mildly Negative / Neutral / Mildly Positive / Strongly Positive

9. If you chose "Mildly Positive" or "Strongly Positive" in Question 8, identify the two most important criteria that affected your decision.
   - Easy access to the device (most students already own an i>clicker remote and they bring them to school anyway; no need to buy an extra device).
   - Increase of classroom participation and sense of involvement (I feel I would participate more in this type of classroom activity).
   - Combination of learning and entertainment (this game-type of application is more fun than the classic way of teaching: students listen while the teacher talks).
   - Visibility (I can see and understand the result of my interaction on the big screen).
   - Other, please specify: ____________

10. If you chose "Mildly Negative" or "Strongly Negative" in Question 8, identify the two most important criteria that affected your decision.
   - I feel none of the designs of the Gamelet are good enough for me to practice binary search trees and to understand how inserting a new node works.
   - I feel the demonstration and practice of Gamelet take too much time, which could be better used to cover material that is more useful.
   - Other, please specify: ____________

Part IV (Follow-up Interview):

In this part, you will be interviewed for a few questions.

- For the best design of Gamelet, do you think there is anything that needs to be added, deleted, or changed so that the design gets even better? Any other comments regarding Gamelet in general are also welcome.
  ____________________________________________________________________________
  ____________________________________________________________________________

Thank you so much for your participation!

Appendix D

Participant Survey: Comparison of the Cognitive Load of Different Gamelet Interaction Techniques (Follow-up Study)

Each participant in the follow-up laboratory study described in Chapter 7 was asked to complete the following survey questionnaire, which was designed to elicit information about the participant's experience using the i>clicker system.

Participant Survey: Comparison of the Cognitive Load of Different Gamelet Interaction Techniques (Follow-up Study)

Part I (Background Information):

1. Gender
   Male / Female

2. Please specify if you are:
   in High School / 1st Year Undergrad / 2nd Year Undergrad / 3rd Year Undergrad / 4th Year Undergrad / 5th or Higher Undergrad / Master / PhD / Other, please specify: ____________

3. If you are receiving post-secondary education, please specify your major area.
   Applied Science / Arts / Commerce and Business Administration / Dentistry / Education / Forestry / Land and Food Systems / Law / Medicine / Science / Other, please specify: ____________

Part II (Survey on i>clicker):

In this part, you will be asked some questions about how the i>clicker system was used in your past education experience.

4. Have you ever used i>clicker before?
   Yes / No

5. How many courses did you take for Winter Term 1, 2012? ____________
   Among these courses, how many use i>clicker? ____________

6. How many courses did you take for Winter Term 2, 2012? ____________
   Among these courses, how many use i>clicker? ____________

Thank you so much for your participation!
Appendix E

Participant Survey: Comparing the Performance of Highlighting using i>clicker Remote and Mouse

Each participant in the laboratory study described in Chapter 9 was asked to complete the following survey questionnaire, which was designed to elicit information about the participant's experience using the i>clicker system and opinions about the interaction techniques for highlighting that were tested in the study.

Participant Survey: Comparing the Performance of Highlighting using i>clicker Remote and Mouse

Part I (Background Information):

1. Gender
   Male / Female

2. Please specify if you are:
   in High School / 1st Year Undergrad / 2nd Year Undergrad / 3rd Year Undergrad / 4th Year Undergrad / 5th or Higher Undergrad / Master / PhD / Other, please specify: ____________

3. If you are receiving post-secondary education, please specify your major area.
   Applied Science / Arts / Commerce and Business Administration / Dentistry / Education / Forestry / Land and Food Systems / Law / Medicine / Science / Other, please specify: ____________

Part II (Survey on i>clicker):

In this part, you will be asked some questions about how the i>clicker system was used in your past education experience.

4. Have you ever used i>clicker before?
   Yes / No

5. How many courses did you take for Winter Term 1, 2013? ____________
   Among these courses, how many used i>clicker? ____________

6. How many courses did you take for Winter Term 2, 2013? ____________
   Among these courses, how many used i>clicker? ____________

Part III (General Opinion towards Selection Tool):

In this part, you will be asked some general questions about Selection Tool.

7. Have you ever seen, heard of, or experienced other ways of using i>clickers in the class besides the classic multiple-choice question?
   Yes / No
   If Yes, please specify: ____________

8. Assume that in a class, a student has a question about the definition of a word displayed on the slide, and s/he wants to refer to this word when asking the question. Have you ever seen, heard of, or experienced other ways of selecting or highlighting content (e.g. words, symbols, cells in tables) on the slide in the class besides verbally mentioning where it is located on the slide?
   Yes / No
   If Yes, please specify: ____________

Part IV (Design of Selection Tool):

In this part, you will be asked some questions about your opinion of the visualization design of the Selection Tool (i.e. we will not consider the mouse for this part). There are two different visual aids, Separator and Label.

9. What do you think of the Separator when making the decision regarding which sub-area to pick?
   Very Obstructive / Obstructive / Neutral / Helpful / Very Helpful

10. What do you think of the Label when making the decision regarding which sub-area to pick?
   Very Obstructive / Obstructive / Neutral / Helpful / Very Helpful

Part V (Follow-up Interview):

In this part, you will be interviewed for a few questions.

- For the best design using the i>clicker remote, do you think there is anything that needs to be added, deleted, or changed so that the design gets even better?
  ____________________________________________________________________________
  ____________________________________________________________________________

Thank you so much for your participation!