UBC Theses and Dissertations
Implementation and evaluation of a classroom synchronous participation system Beshai, Peter 2014

Full Text

Implementation and Evaluation of a Classroom Synchronous Participation System

by

Peter Beshai

B.Math., The University of Waterloo, 2010

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in The Faculty of Graduate and Postdoctoral Studies (Computer Science)

The University of British Columbia (Vancouver)

August 2014

© Peter Beshai, 2014

Abstract

In large university classes it can be difficult to provide an engaging environment for effective student learning. Many instructors have turned to using student response systems (e.g., clickers) to mitigate this problem, with the i>clicker brand common in North America. However, usage of such systems is typically limited to administering multiple choice questions to the class, with real-time feedback displaying only an aggregate representation of the result distribution.

We developed an architecture for Classroom Synchronous Participation Systems (CSPS) to extend the use of such systems to support a wider variety of activities. We implemented a working system called Rhombus CSPS, which allows clickers to be treated as generic five-button controllers that can be used as inputs for interactive, multi-player applications.

Several game-theoretic exercises were implemented using the system and tested in a classroom setting with students enrolled in a third-year university cognitive systems course. The evaluation took place across two consecutive terms, with the researcher assisting with using the system in the first, and the instructor using it alone in the second. The results indicate both students and the instructor had a positive experience using the system. Students reported high levels of engagement and valued the activities' effect on their learning, although there were differences between the two terms. The instructor praised the system for enabling him to teach the curriculum of activities he desired to, where he was previously limited by lack of technological support.

Displaying individual feedback to users in a CSPS can be challenging when there is only a single shared display all students are viewing, especially if it is desirable to keep responses private. To tackle this problem, we developed a display technique for providing semi-private feedback to users based on exploiting limitations in visual perception. We ran two experiments to test the efficacy of our technique, with results indicating that our technique provides a high degree of accuracy for interpreting one's own feedback, while limiting the ability to simultaneously interpret another's feedback to near random chance.

Preface

The research presented in this thesis was carried out under the supervision of Dr. Kellogg S. Booth. I was the primary researcher in all work presented. Junhao Shi provided the initial code for the i>clicker driver that was used when creating the Clicker Server.

Ethics approval for the classroom evaluation and the experimental study with human participants was provided by the Behavioural Research Ethics Board at UBC under IDs H13-02138 and H14-01384 respectively.

The experimental work presented in Chapter 9 also appears in a manuscript intended for submission as a conference paper for which I am the primary author, with collaboration from Dr. Kellogg S. Booth.

The main body of the thesis will appear in a condensed form in a manuscript intended for submission as a journal paper for which I am the primary author, with collaboration from Dr. Kellogg S. Booth.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Student Response Systems
  1.2 The i>clicker SRS Device
  1.3 Thesis Overview and Contributions
2 Game-Theoretic Exercises
  2.1 Coin Matching
  2.2 Coordination
  2.3 Stag Hunt
  2.4 Prisoner's Dilemma
  2.5 Ultimatum
  2.6 Playing Prisoner's Dilemma with Clickers
  2.7 Grid Application
3 The Classroom Synchronous Participation System Architecture
  3.1 The Participant Input Component
  3.2 The Identity Manager Component
  3.3 The Session Manager Component
  3.4 The Instructor Controller Component
  3.5 The Application Views Component
  3.6 A Simple Example of the CSPS Architecture
  3.7 A More Complex Example of the CSPS Architecture
  3.8 Prior Work That Informed CSPS
4 Rhombus Classroom Synchronous Participation System
  4.1 Clicker Server
  4.2 ID Server
  4.3 Web Server
    4.3.1 Application Managers, Viewers, and Controllers
  4.4 Web Framework
    4.4.1 Controller
    4.4.2 Viewers
    4.4.3 State Applications
    4.4.4 User Representation
5 Sequence Aliaser
6 Term 1 Evaluation and Results
  6.1 Pilot Deployment
  6.2 Method
    6.2.1 Participants
    6.2.2 Environment and Apparatus
    6.2.3 Student Representation
    6.2.4 Procedure
  6.3 Results
    6.3.1 Student Questionnaire
    6.3.2 Student Short Answer Responses
    6.3.3 Instructor Interview
    6.3.4 Observations
  6.4 Summary
7 Term 2 Evaluation and Results
  7.1 Method
    7.1.1 Participants
    7.1.2 Environment and Apparatus
    7.1.3 Procedure
  7.2 Results
    7.2.1 Student Questionnaire
    7.2.2 Student Short Answer
    7.2.3 Instructor Interview
  7.3 Summary
8 Discussion
9 No-onset Presentation of Semi-Private Feedback on a Shared Display
  9.1 Introduction
  9.2 Related Work
    9.2.1 Applications
    9.2.2 Perception
  9.3 Experiment 1: No-Onset Display
    9.3.1 Participants
    9.3.2 Apparatus
    9.3.3 Task
    9.3.4 Procedure
    9.3.5 Design
    9.3.6 Measures
    9.3.7 Hypotheses
    9.3.8 Results
    9.3.9 Discussion
  9.4 Experiment 2: Abrupt Onset Display
    9.4.1 Participants
    9.4.2 Apparatus, Procedure, and Design
    9.4.3 Task
    9.4.4 Results
    9.4.5 Discussion
  9.5 General Discussion
  9.6 Conclusion and Future Work
  9.7 Acknowledgments
10 Conclusions and Future Work
Bibliography
A Experiment Resources
  A.1 Pre-experiment Questionnaire
  A.2 Post-block Questionnaire
  A.3 Post-experiment Questionnaire
  A.4 Consent Form
B Classroom Evaluation Resources
  B.1 Student Clicker Questionnaire
  B.2 Student Short Answer Questionnaire
  B.3 Consent Form
  B.4 Sequence Aliaser Mapping
C Rhombus Game Details
  C.1 Prisoner's Dilemma
  C.2 Iterated Prisoner's Dilemma
  C.3 Ultimatum Game
  C.4 Stag Hunt
  C.5 Coin Matching
  C.6 Coordination
D Term 1 Interview Transcript
E Term 2 Interview Transcript
F Rhombus Instructor Manual

List of Tables

Table 2.1  The Coin Matching game pay-off matrix
Table 2.2  The Coordination game pay-off matrix
Table 2.3  The Stag Hunt pay-off matrix
Table 2.4  The Prisoner's Dilemma pay-off matrix
Table 2.5  The colours and animations of avatars in the Grid application
Table 4.1  The commands accepted by the Clicker Server
Table 5.1  The colours and animations of avatars in the Sequence Aliaser
Table 9.1  The visual angles, and pixel and physical dimensions for the elements of the avatars used in the experiment
Table B.1  Sequence-alias mapping used in Sequence Aliaser

List of Figures

Figure 1.1  The i>clicker remote control and base station
Figure 2.1  The giver and receiver pairing in the Ultimatum game
Figure 2.2  The check-in screen for playing Prisoner's Dilemma with clickers
Figure 2.3  Prisoner's Dilemma being played
Figure 2.4  Prisoner's Dilemma results screen
Figure 2.5  The Grid application
Figure 3.1  Classroom Synchronous Participation System Architecture
Figure 4.1  Overall architecture of Rhombus Classroom Synchronous Participation System
Figure 4.2  The relationship between the Manager, Controller, and Viewers in the Web Server
Figure 4.3  The Controller interface in Rhombus
Figure 4.4  The status bar in the Controller interface
Figure 4.5  Configuration Panel for team-based Prisoner's Dilemma in Rhombus
Figure 4.6  Virtual web clickers in Rhombus debug mode
Figure 4.7  The Prisoner's Dilemma basic state machine
Figure 4.8  Avatar representation of users in Rhombus
Figure 5.1  An example slip of paper used to provide a sequence to a participant
Figure 5.2  Sequence Aliaser in use
Figure 6.1  Avatar representation of users in Rhombus
Figure 6.2  Combined questionnaire results in term 1
Figure 6.3  Charts comparing helpfulness with learning between quizzes and Rhombus
Figure 6.4  Charts comparing time value for completing quizzes and using Rhombus
Figure 7.1  Combined questionnaire results in term 2
Figure 7.2  Charts comparing helpfulness with learning between quizzes and Rhombus in term 2
Figure 7.3  Charts comparing time value for completing quizzes and using Rhombus in term 2
Figure 9.1  A sample user's avatar
Figure 9.2  The letters A to E as shown in the 7-segment display
Figure 9.3  A sample 5x5 grid of avatars. Here the target letter is A and the distractor, "bruce", reveals letter E
Figure 9.4  The sequence the target avatar moves through in each trial: fade in, reveal letter, hide letter, accuracy feedback, fade out
Figure 9.5  The locations where distractors can show up in. Dark gray signifies a distractor location in the "outer ring" and light gray signifies a distractor location in the "inner ring". The black square in the center is where the target avatar was located in all trials
Figure 9.6  The mean accuracy percentage for the target and distractor based on the duration of the target and distractor respectively. The error bars represent 95% confidence interval
Figure 9.7  The sequence the target avatar moves through in each trial in Experiment 2: fade in, reveal letter, hide letter, accuracy feedback, fade out
Figure 9.8  The mean accuracy percentage for the target and distractor based on the duration of the target and distractor respectively for Experiment 2. The error bars represent 95% confidence interval
Figure B.1  The Question app in Rhombus
Figure C.1  Rhombus check-in screen
Figure C.2  Prisoner's Dilemma play screen in round 1
Figure C.3  Prisoner's Dilemma results screen in round 1
Figure C.4  Prisoner's Dilemma play screen in round 2
Figure C.5  Prisoner's Dilemma results screen in round 2
Figure C.6  Prisoner's Dilemma play screen in round 5
Figure C.7  Prisoner's Dilemma total results screen
Figure C.8  Iterated Prisoner's Dilemma play screen in round 1
Figure C.9  Iterated Prisoner's Dilemma round results screen
Figure C.10  Iterated Prisoner's Dilemma play screen in round 2
Figure C.11  Iterated Prisoner's Dilemma phase 1 results
Figure C.12  Iterated Prisoner's Dilemma cumulative phase results
Figure C.13  Ultimatum Game giver play screen
Figure C.14  Ultimatum Game receiver play screen
Figure C.15  Ultimatum Game giver results screen
Figure C.16  Ultimatum Game receiver results screen
Figure C.17  Ultimatum Game combined results screen
Figure C.18  Ultimatum Game phase 1 results
Figure C.19  Ultimatum Game cumulative phase results
Figure C.20  Stag Hunt play screen in round 1
Figure C.21  Stag Hunt results screen in round 1
Figure C.22  Stag Hunt phase 1 results
Figure C.23  Stag Hunt cumulative phase results
Figure C.24  Coin Matching play screen in round 1
Figure C.25  Coin Matching round results screen
Figure C.26  Coin Matching play screen in round 2
Figure C.27  Coin Matching phase 1 results
Figure C.28  Coin Matching cumulative phase results
Figure C.29  Coordination game play screen in round 1
Figure C.30  Coordination game round results screen
Figure C.31  Coordination game phase 1 results
Figure C.32  Coordination game cumulative phase results

Acknowledgements

I thank my supervisor Dr. Kellogg S. Booth for his endless supply of interesting ideas, thoughtful discussion, and expert guidance.

I also thank Dr. Peter Danielson for his fearlessness in integrating Rhombus into his classroom.

Thanks to Dr. Ron Rensink for providing guidance on visual perception, and for inspiring the display technique used in the experiments.

Thanks to Dr. Steve Wolfman for being my second reader.

Thank you to all the members of the MUX lab, in particular to Matei Negulescu and Matt Brehmer for their sage writing and statistics advice, and to Ben Janzen and Francisco Escalona for assisting in running the no-onset experiment.

And lastly, I thank my mother for her amazing, unwavering support, my late father for inspiring me to achieve my best, and Vivian Chen for putting up with me through it all.

Chapter 1
Introduction

As classrooms and lecture halls have grown in size, instructors have sought out ways to keep lectures engaging by including in-class learning exercises in which students can participate rather than simply listen and take notes. This has given rise to a number of technological supports being introduced into the classroom, a recent one being student response systems (SRS). Our research uses the i>clicker [10] SRS, one that is common in North American universities. An i>clicker is a simple remote with five buttons that allows students to submit answers ("votes") to multiple choice questions during a lecture while the instructor receives real-time feedback about the distribution of responses.

Unlike some SRS devices, the i>clicker lacks a display; it has only a single LED that turns green or red to acknowledge a vote being received by the system. Consequently, students know only whether their response has been collected, but not whether it is "right" or "wrong". If the instructor chooses, a histogram showing the distribution of responses can be displayed, enabling students to see the aggregate behaviour, but not their individual performance, which they must infer by recalling the button they pressed, interpreting the histogram, and listening to the instructor's comments. A further restriction is that the vendor-provided software does not allow subsequent activity to be guided by previous responses. That must be done manually by the instructor. Despite these limitations, the use of i>clickers and similar SRS devices in classrooms has been widespread, and studies have shown they can have a positive impact on student attendance, motivation, engagement, and understanding [37, 40, 70].

Previous research by Shi [65] investigated using clickers to provide group-based feedback that allows the instructor and students to see the relative performance of each section of students in large-lecture courses rather than just the aggregate performance of the entire class. Shi also extended the interactivity of clicker exercises so students can guide interactive simulations of algorithms for common data structures, such as linked lists and trees, select areas of interest on a projected slide, and control the forward-backward progress of slide presentations, which facilitates referring to previous slides when they ask questions during lecture.
All of these capabilities took advantage of the fact that every student in the classroom had a controller with five buttons. Encouraged by Shi's earlier work, we developed an architecture for what we call Classroom Synchronous Participation Systems (CSPS), systems that allow a sophisticated level of interaction between students and the instructor, with real-time feedback provided to both students and the instructor during the activities.

We implemented Rhombus CSPS (or Rhombus for short), a prototype system that realizes the new architecture. Our goal is to facilitate the creation of interactive applications that use clicker input (and input from other devices that may be present in the classroom) to drive innovative instructional activities that go well beyond the simple stimulus-response paradigms of basic SRS technology. We hope to support improved student learning by providing infrastructure that allows a wide range of pedagogically relevant activities to be conducted in the classroom that extend the capabilities of i>clickers and similar SRS devices. To accomplish this, our infrastructure is designed to support fully interactive applications where the progress of the activity depends not just on the instructor's actions but also on those of individual students or groups of students working together. This approach enables the choice of what is displayed on a screen and what will happen next in an application to be based on the clicks gathered from students in class, both the aggregate collection of clicks and also the identities of the specific students who made the clicks and their in-class relationships with each other.

We see the opportunity to deploy highly interactive applications using SRS technology as a way to extend their usefulness beyond limited stimulus-response activities. Answering multiple choice questions with an SRS requires students to select a single answer, typically testing knowledge or polling student opinion; our system provides a platform that allows students to move beyond these simple use cases by giving them a way to operationalize their knowledge and apply it in a relevant context. This form of teaching requires a deeper level of engagement from the student, an important factor in student success [12, 39]. Our system allows instructors to extend the benefits of thoughtful, engaging in-lecture activities to large classes, where running complex activities is currently logistically challenging.

The initial applications developed using Rhombus were game-theoretic exercises: the Prisoner's Dilemma, the Ultimatum Game, and variants of these that can be used to demonstrate game-theoretic strategies. Wikipedia points to definitions of game theory as "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers" and notes that it has also been called "interactive decision theory" in the literature [8]. These games provided a convenient testbed for our ideas because the very nature of game theory is that the outcome of actions taken by one person depends in some way on the actions taken by others. The games that we used were aligned with the curriculum for a Cognitive Systems course at our university in which we deployed the system during the first (September-December) Winter 2013 academic term. Students in the course provided us with feedback about the system through questionnaires and the instructor was interviewed at the end of the term.
Based on the success of this initialtrial, the same instructor agreed to use the system again during the second(January-April) term, this time without in-class support from the developers.We again surveyed the students, just once at the end of the term, and weconducted a second interview with the instructor soon thereafter.In addition to the classroom deployment of Rhombus, we conducted two3experiments to evaluate a novel display technique for sharing semi-privatefeedback with users of a shared display that we developed for Rhombus.Our motivation, informed by results from our classroom evaluation and byour personal experience, for developing the new technique was to supportstudents using clickers in classrooms who lacked confidence that their clickshad been received by the system or that what was received by the systemwas what the student had intended be sent. The display technique makesuse of perceptual limitations of human vision to provide a strong capabilityfor a student to interpret feedback intended for herself, while limiting theability to interpret feedback intended for other students.1.1 Student Response SystemsStudent Response Systems (SRS) have been in use since the 1960s; early sys-tems were typically mounted at students’ seats, with wiring connecting themto an instructor’s panel at the front of the class [49]. These were relativelyexpensive and were specific to the classrooms in which they were installed.When SRSs began using wireless technology, such as infrared or radio fre-quency, to transmit student responses to the instructor, the infrastructurecosts of using SRSs greatly reduced, and consequently SRSs gained a boostin popularity at many institutions. Today, SRSs still use infrared and radiofrequency, however, some are moving towards integrating web technologiesand wireless networking (WiFi) to allow users to vote via other mobile de-vices, such as smart phones, tablets, or laptops [1, 2, 9, 10].Despite the many years of SRS usage and changes in the technology,there has been very little difference in the way they have been used withinclassrooms to support learning.Just as in the 1960s, today instructors still commonly use SRSs solelyto pose multiple choice questions to their class and have students vote onwhich answer they think is correct [49]. The one notable difference is thatinstructors can now immediately display a histogram or other visualizationthat shows the aggregate distribution of responses to a question by the stu-dents in class. This is a useful way of allowing students to compare them-4selves with the rest of the class, but is still essentially a stimulus-responseparadigm. When SRSs are used in such a way, research has shown that theyhave no significant correlation to subsequent student academic achievement[49].Nevertheless, even limiting SRS usage to only administering multiple-choice questions in class, SRS technology has been shown to have manybenefits for students, and are consistently perceived with positive attitudesby both students and teachers [24, 29, 36, 45, 49, 68]. They have beenshown to have positive impacts on student attendance [23, 24, 31, 38, 62],attentiveness [17, 23, 24, 27, 28, 32, 44, 46, 55, 67, 69], and engagement[17, 45, 62, 68]. 
Furthermore, students appreciate the anonymity providedby SRSs, because it reduces the intimidation that students sometimes feelthat is associated with participating in class [15, 29].It is therefore puzzling that multiple studies have found that when SRStechnology is used in a stimulus-response paradigm, it provides no significantbenefit in terms of the final grades students receive [49]. However, in Kayand LeSage’s extensive literature review, they suggest a strong argument canbe made for modern SRS use improving learning performance [51], citingnumerous studies that demonstrate that classes with SRS use outperformtraditional lectures [22, 31, 34, 50, 52, 61, 62, 64, 69].Beyond student performance, there are several studies that point to otherdistinct learning benefits gained from using SRSs. There is increased inter-action in the classroom when SRSs are used effectively, increasing valuablepeer-to-peer discussion [16, 19, 28, 47, 59] and enabling active learning prac-tices in large classrooms [32, 53, 69, 71]. The increased class discussion andtwo-way interaction between students and the instructor allows for contin-gent teaching practices to take place, where the instructor can modify thelesson on-the-fly to clear up material students are having trouble grasping[19, 24, 28, 32, 38, 46, 52, 71]. Furthermore, students report that they learnmore when using SRS technology [32, 38, 41, 59, 61, 62, 67, 69, 71, 76] be-cause it forces them to think more [28, 38] and to discover and correct theirmisconceptions [27].It seems likely, however, that simply using SRS systems will not auto-5matically produce the benefits listed above, as suggested by Kristine [54].Indeed, one would not expect immediate learning benefits simply from push-ing buttons on a remote control. What seems to bring the biggest benefitout of using SRSs is leveraging peer-to-peer communication and other activelearning principles to ensure that students are engaged with the material.The discussion and interaction that happens around the SRS usage is whereperhaps the primary opportunity to affect learning takes place.While by far the majority of SRS usage involves posing a multiple-choicequestion and receiving answers from students, some instructors have exploredusing SRS technology in different ways, or with slight twists to further in-crease student engagement.A good example of how clickers can be used to “gamify” classroom teach-ing is introducing competition into in-class activities. Bruff reported onMcCoy’s experience when she awarded bonus marks to the student who wasfirst to answer a quiz question [21]. While this is only a small step fromtypical clicker quiz usage, it opens up the possibility of using clickers com-petitively. Unfortunately, the standard i>clicker software does not supportdoing this in real-time; there is no immediate feedback to the winning stu-dent unless the instructor runs additional software to immediately processthe clicker data and display the results.There are some examples in the literature of clickers being used for class-room activities that were not quiz-based. Salemi made use of clickers toauction off a T-shirt, helping to give students practical experience with theeconomics of auctions [63]. Bostian and Holt had students estimate thenumber of marshmallows in a jar by entering their estimates using clickers[18], reasoning that it was much faster than using pen and paper. 
Theydeveloped a system called Veconlab Clickers that was used to display aggre-gate results during class, and allowed students to sign on afterwards fromtheir own computers to find out their individual outcomes. Both of theseexamples used more sophisticated remotes than i>clickers because studentshad to enter numeric information. In neither situation did students receiveindividual real-time feedback.There are of course some challenges involved in even basic usage of SRS6technology in today’s classrooms. At times, students forget to bring them toclass, resulting in those students being unable to participate during question-and-answer periods [24]. Some times, the devices simply do not work, so theclicks students are trying to send are not received by the system, resulting inunnecessary stress for students, especially if they are being evaluated basedon their clicks [31, 41, 67]. Instructors may find they are unable to coveras much material as they would like, because more class time is devoted todiscussion and impromptu explanations of misunderstood material comparedto traditional lecture formats. Furthermore, good questions are required toget the most out of SRS usage, which can take time to develop and for whichthere are few repositories available for instructors to share. From the studentpoint of view, the discussions of different perspectives or solutions may leadto confusion as to which one is correct. Some students may simply havedifficulty adjusting to the new style of learning in which they are responsiblefor active in-lecture participation.Our research does not deeply examine the many pedagogical questionsthat are obvious targets for on-going study. We instead focus on how toextend SRS usage beyond the simple stimulus-response paradigms that havebeen the dominant mode within classrooms. We do this in the context ofthe specific SRS provided by i>clicker technology.1.2 The i>clicker SRS DeviceThe i>clicker technology is a fairly basic SRS. It is comprised of a basestation connected to a computer that communicates with the student’s re-mote controls, and vendor-provided software that manages the process andrecords the results for subsequent analysis. The two hardware componentsare shown in Figure 1.1. The remote control has six buttons, and is primarilyused by students. Five of the buttons are used to provide responses (theseare labelled A through E), which are transmitted over radio frequency, andthe sixth is used to turn the device on and off and to initiate synchroniza-tion with the hardware base station. There are three lights located abovethe buttons that are the only method of feedback the remote control has.7Figure 1.1: The i>clicker remote control and base station. The basestation connects to the instructor’s computer over USB, andreceives the clicks from the remote controls, used by students,over radio frequency transmission.The top light indicates whether the remote is on or off (labelled “POWER”),by emitting a blue light when the remote is on, and not emitting light whenturned off. The middle light (labelled “LOW BATTERY”) is only lit whenthe batteries are low, blinking red if that is the case. The bottom light in-dicates whether the button response sent was successfully received by thebase station (labeled “VOTE STATUS”). If one of the response buttons (Athrough E) is pressed, the vote status light will turn green for approximatelyhalf a second to indicate a successful transmission to the base station, andotherwise will turn red and flash four times to indicate a failure. 
When nobuttons have been pressed, the vote status light is not lit.The base station attaches to the instructor’s computer via a USB cable,and is configured to use one of 16 pre-defined pairs of frequencies throughwhich it broadcasts to remote controls and listens for their responses. Thepair of frequencies that are used are determined by configuration parameters8set via software on the instructor’s computer. Pairs of frequencies are codedto combinations of two letters between A and D (e.g., AA, BC, DB). Inorder for the clickers to communicate with the base station, they need to settheir transmission frequency by pressing and holding the ON/OFF buttonon the remote followed by pressing the two buttons that code the frequency.The base station itself can be in one of two states: accepting votes, or notaccepting votes. When accepting votes, all responses sent along its configuredfrequency by clickers will be received. When not accepting votes, only aspecially configured instructor clicker will have its responses sent through thesystem; all other clicks are rejected, resulting in red vote status lights flashingon the student clickers. For more details on how the i>clicker hardwarefunctions, refer to Shi’s master’s thesis [65].1.3 Thesis Overview and ContributionsSeveral game-theoretic exercises, or games, were implemented in our systemto aid students in understanding how the games work and provide experiencein developing strategies for them firsthand. A general description of each ofthese games, as well as an example of how one of them is played using oursystem, is given in Chapter 2.The chapters that follow cover the five contributions of the research re-ported in this thesis.• An architecture for Classroom Synchronous Participation Systems isdescribed in Chapter 3. This architecture is designed to support com-plex interaction between students and instructors, giving real-timefeedback to both parties while being used in a classroom environment.It has been designed with institution-level support in mind, while alsobeing able to function on a single user’s computer.• The implementation of a fully-functional prototype CSPS, Rhombus,that is based on the architecture, is presented along with various lessonslearned along the way in Chapter 4. Rhombus includes several com-ponents (Clicker Server, ID Server, Web Server, and Web Framework)9combined in ways that allow versatility for input device, anonymity,and synchronization across multiple displays. The web technologies areused in a novel way by leveraging the power of the combined HTTP andWebSocket server to allow multiple browser windows to synchronouslycommunicate with each other.• An efficient and simple method for concurrently registering multipleusers in an SRS is described and demonstrated by the Sequence Aliaserin Chapter 5. Users are provided a sequence of buttons to press on theirclicker along with an associated alias. Upon pressing the buttons insequence, their alias appears on screen and can then be controlled bytheir clicker to assure them that they have correctly linked their clickerto the system. This system was successfully tested with 40 students aspart of the first term classroom evaluation of Rhombus.• The results of using Rhombus in a university classroom environmentacross two academic terms are presented. Chapter 6 describes the eval-uation method used and the results from the first term field trial andChapter 7 does the same for the second evaluation. 
Chapter 8 provides an interpretation and discussion that compares the results across the two terms. Student feedback and instructor feedback were largely positive, encouraging further use of the system in broader contexts. Notably, the effectiveness of the system appears to depend on the willingness of students to engage in discussion around the results shown while using the system, something that is impacted by the precedent set by course instructors.

• Lastly, a novel method of displaying semi-private feedback to users of a shared display was designed and tested in two experiments. The technique and the experiments are discussed in Chapter 9. Results from the two experiments indicate that the display technique allows users to interpret their own feedback with high accuracy, while simultaneously having difficulty interpreting another user's feedback.

Chapter 10 provides a summary of the thesis, concluding remarks, and some ideas for future work that would extend the scope of the research reported in the thesis.

Chapter 2
Game-Theoretic Exercises

Game theory is the mathematical analysis of decision making [57]. We illustrate the complexity of classroom activity supported by Rhombus by explaining six game-theoretic exercises that were developed to assist students in understanding decision strategies for those games. These games were chosen because they are part of the curriculum for a third-year course in the Cognitive Systems program at our university. In all of the games that follow, the goal is to attain the highest score. The main pedagogical benefits of the games come from having students actually experience playing the games firsthand, as opposed to simply reading about them. Playing them firsthand gives students a direct experience of the divergence of actual behaviour from theoretical outcomes and provokes thoughtful discussion. For details on how the games work in Rhombus, see Appendix C.

Each of the games discussed, besides the N-person Prisoner's Dilemma variant, requires that students be secretly paired with one another and that they receive feedback from their actions after each round of play to help inform the next. Meeting these two requirements is infeasible with the vendor-provided i>clicker software, as there is no method for providing individual feedback and no concept of pairing student responses.

2.1 Coin Matching

The Coin Matching game (also known as Matching Pennies) [4] is a simple exercise to familiarize students with playing games without requiring much background knowledge. In this game, players are randomly partnered together for each round of play in which they must choose between playing heads or tails. In each pair of players, one player is the "matcher" and receives points when both players make the same choice, and the other is the "mismatcher", receiving points when the choices differ, as indicated by the pay-off matrix in Table 2.1.

            Heads   Tails
    Heads   1, 0    0, 1
    Tails   0, 1    1, 0

Table 2.1: The Coin Matching game pay-off matrix. In each cell, the score for a matcher is given by the first value, and the score for a mismatcher by the second value.

This game is typically formulated as a zero-sum game, in which when one partner gains a point the other player loses one, but in our formulation, informed by the instructor we worked with, no points are ever taken away. Still, this does not affect the overall strategy of the game: every combination of choices leaves one partner who could increase their score by changing their choice. Hence, there is no pure strategy Nash equilibrium; there is, however, a mixed Nash equilibrium achieved by playing heads or tails with equal probability, giving each player an expected pay-off of 0.5.
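To make the round scoring concrete, here is a minimal sketch in TypeScript (not code from Rhombus; the type and function names are hypothetical) of the non-zero-sum scoring just described, together with the 0.5 expected pay-off for a 50/50 mixed strategy:

```typescript
// Minimal sketch (not Rhombus code): scoring one Coin Matching round under the
// non-zero-sum formulation described above, where no points are taken away.
type Coin = "heads" | "tails";

interface CoinMatchScore {
  matcher: number;    // scores 1 when the two choices agree
  mismatcher: number; // scores 1 when the two choices differ
}

function scoreCoinMatching(matcherChoice: Coin, mismatcherChoice: Coin): CoinMatchScore {
  const matched = matcherChoice === mismatcherChoice;
  return { matcher: matched ? 1 : 0, mismatcher: matched ? 0 : 1 };
}

// The matcher's expected pay-off against a partner who mixes heads/tails 50/50,
// which works out to the 0.5 value quoted above.
const expectedMatcherPayoff =
  0.5 * scoreCoinMatching("heads", "heads").matcher +
  0.5 * scoreCoinMatching("heads", "tails").matcher; // 0.5
```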
In this game, the players have no incentive to cooperate with each other, since only one can benefit in any given round, and there is no obvious benefit to knowing what a player's previous choice was, since the equilibrium strategy involves randomly selecting a choice during each round of play.

Due to the random nature of play this game evokes, it may be found to be less engaging to students than games with more complex strategy if played for an extended period of time. Alternative games may provide a deeper level of interest to the students by allowing them to construct more complex strategies to play with. Explanations of these games follow. Still, the Coin Matching game remains a good staple introductory exercise to playing games with clickers.

2.2 Coordination

The Coordination game [3] is very similar to the Coin Matching game, but instead of using heads and tails, the labels A and B are used, and instead of splitting the partners up as matchers and mismatchers, both partners receive points if their choices are different (i.e., one partner chooses A and the other chooses B), and neither partner receives points if their choices are the same. The pay-off matrix for this game is given by Table 2.2. In our formulation, players are partnered up secretly with the same person for multiple rounds of play. A common example of a situation the coordination game models is the agreement to drive on the right side of the road to prevent collisions.

         A       B
    A    0, 0    1, 1
    B    1, 1    0, 0

Table 2.2: The Coordination game pay-off matrix. Both partners only receive points if they coordinate their choices to be different.

In this game, there are two pure strategy Nash equilibria: in a partnership consisting of players X and Y, player X chooses A and player Y chooses B, or player X chooses B and player Y chooses A. In both of these cases, if either player changes their choice, they will reduce their score, and as such, they are Nash equilibria. This promotes cooperation amongst the players of the game, since they will both benefit by agreeing to play in a coordinated way. This is easy to manage if players can communicate with one another, but is more challenging if they cannot.

2.3 Stag Hunt

Stag Hunt [6] is a different type of coordination game, with varying pay-offs depending on the choices both partners make (see Table 2.3). Players are paired secretly and each must choose to either hunt a stag or hunt a hare. If both hunt a stag, they each earn 3 points; if both hunt a hare, they each earn 1 point; if their choices differ, the stag hunter receives 0 points and the hare hunter receives 2 points.

            Stag    Hare
    Stag    3, 3    0, 2
    Hare    2, 0    1, 1

Table 2.3: The Stag Hunt pay-off matrix.

This game has two pure strategy Nash equilibria: both players hunt a stag, or both players hunt a hare. In these situations, neither player can improve their pay-off by changing their choice. However, the two equilibria have different pay-offs: hunting a stag offers a higher pay-off, but has a higher risk, since it requires both players to decide to hunt a stag to get any points at all, while hunting a hare is safer, guaranteeing points regardless of what your partner does, but nets a lower pay-off.

2.4 Prisoner's Dilemma

In the Prisoner's Dilemma [5], players are secretly paired and each must choose to cooperate or defect.
If both cooperate, they each earn 3 points; if both defect, they each earn 1 point; if one cooperates and the other defects, the cooperator earns 0 points and the defector earns 5 points (see Table 2.4).

                 Cooperate    Defect
    Cooperate    3, 3         0, 5
    Defect       5, 0         1, 1

Table 2.4: The Prisoner's Dilemma pay-off matrix.

This game has a single Nash equilibrium that is reached when both players defect. In this case, they each receive a low pay-off of 1 point, but cannot improve their scores by switching to cooperating unless they both do it together, in which case they improve to 3 points each. However, in the state when both players have cooperated, there is incentive to switch to defecting, which boosts a player's score to 5 points, while reducing their partner's score to 0. This leaves the only equilibrium as when both players defect, despite it providing the lowest total score.

Three variants of Prisoner's Dilemma are of interest. The most basic plays a single round with an anonymous partner; each subsequent round is played with a different partner. Different results are obtained when participants are partnered with the same person for multiple consecutive rounds, known as the Iterated Prisoner's Dilemma. The third variant is an N-Person game, where individuals are not partnered, but instead the group plays as a whole, with all cooperators receiving the same score and all defectors receiving the same score. Total social payoff is highest if all participants cooperate, but individual payoff is highest by defecting if everyone else cooperates.

N-Person Prisoner's Dilemma is more complex than the individual variants, and thus is not ideal as an introduction to Prisoner's Dilemma. However, it does not require providing individual feedback to participants, and so can be played on systems lacking this capability. For instance, one can make use of the histogram display from the vendor-provided i>clicker software to display the proportion of cooperators and defectors, and manually calculate the scores from them.

2.5 Ultimatum

In the Ultimatum game [7], players are secretly paired, with one acting as the "giver" and the other as the "receiver". There is a sum of points that must be divided amongst the two players, with the giver deciding what fraction to offer the receiver. If the receiver accepts the offer, both players receive the designated fractions of the sum as points; if the receiver rejects the offer, both players receive no points.

This game functions in two stages: first the players acting as givers decide what amount they will offer to their partner, then receivers are presented with the offer their partner made them and must decide whether or not to accept it. In order to scale this game so that all students in the class can experience both roles, we used a directed cyclical partnering algorithm. In this case, each student in the class had a forward partner toward whom they would act in the giver role, and a backward partner toward whom they would act as a receiver, as indicated in Figure 2.1. This format allows all students to act as givers at the same time, and then all to act as receivers at the same time as well, without enforcing a symmetrical partnership.

Figure 2.1: The giver and receiver pairing in the Ultimatum game. Here player P1 acts as a giver to P2, who acts as a giver to P3, who acts as a giver to P1. This asymmetric pairing allows reactions made to offers to act independently of offers given since they correspond to different partners.

This game is the most complex that we worked with.
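The directed cyclical partnering can be pictured as a simple ring over the checked-in students. The following TypeScript fragment is a sketch only, with hypothetical names, and is not the Rhombus implementation:

```typescript
// Illustrative sketch of the directed cyclical partnering (not the Rhombus
// implementation): each participant gives to the next id in the ring and
// receives from the previous one, as in Figure 2.1.
interface UltimatumPartners {
  givesTo: string;      // forward partner: this player acts as giver toward them
  receivesFrom: string; // backward partner: this player acts as receiver toward them
}

function ringPairing(ids: string[]): Map<string, UltimatumPartners> {
  // ids could be shuffled first so the ring ordering stays secret from players.
  const pairing = new Map<string, UltimatumPartners>();
  ids.forEach((id, i) => {
    pairing.set(id, {
      givesTo: ids[(i + 1) % ids.length],
      receivesFrom: ids[(i - 1 + ids.length) % ids.length],
    });
  });
  return pairing;
}
```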
The five-buttonlimitation of i>clickers means that our implementation provides at most fivechoices for a giver to offer a receiver.2.6 Playing Prisoner’s Dilemma with ClickersIn this section, we will cover how to play the simplest form of the Prisoner’sDilemma with clickers. This version of the game demonstrates the mecha-nisms used in the other games discussed without needing as much complexitybecause it does not involve multiple stages or teams.To begin with, a check-in screen is presented to the class. As studentspress buttons on their clickers, an anonymized representation of themselvesknown as an avatar shows up on screen with a large checkmark superim-posed, as shown in Figure 2.2. Their appearance on screen confirms thattheir clicker is working with the system and they are ready to play. Theavatars themselves are inserted in lexicographic order as they appear onscreen, causing a bit of shuffling around as users check in. The position theyend up in before moving to the play state is the same position their avatarwill be in for the rest of the game. Prior to checking in, users must be in-formed which alias is registered to their clicker. How this is done is left up17Figure 2.2: The check-in screen used when playing Prisoner’s Dilemmawith clickers. When users press a button on their clicker, ananonymized representation of themselves appears on screen witha green overlay and big check mark fading in immediately there-after. Users are inserted in lexicographic order as they appear.to the instructor, with one method described in Chapter 5.Once the instructor is sure the students have all checked-in, he initiatesthe play state of the game. In doing so, the system secretly partners eachstudent that was checked in with another student in the system. In the eventof having an odd number of students checked in, a bot will be added to the18Figure 2.3: Prisoner’s Dilemma being played. Students press C tocooperate and D to defect. The scores that will be assignedto each user depending on the outcome of their match-up areindicated in the Pay-off Matrix. Those who have already pressedbuttons on their clicker to play are dimmed and show the word“Played” on their avatar.19Figure 2.4: The Prisoner’s Dilemma results screen. The action takenby a given student represented by the hue of their avatar, withblue representing those who cooperated and orange those whodefected. Student score is shown numerically and encoded inthe lightness of the avatar. The two letters on the avatar rep-resent the student’s action followed by their partner’s action.A histogram shows the average scores of cooperators (1.3) anddefectors (2.3), as well as the overall average (2.0).20system to ensure everyone has a partner. The play state is displayed onscreen by having each student’s avatar (and the bot’s if applicable) shownwith instructions placed beneath them, as shown in Figure 2.3. Students havethe option of pressing C on their clicker to cooperate or pressing D to defect,all other buttons are ignored. Upon pressing C or D, the correspondingavatar on screen will darken and display the word “Played”. Subsequentpresses of either button will register the new action and provide feedback tostudents that the click was received having the word “Played” on their avatarflash once.When all players have been marked as having played or the instructor hasdecided enough time has passed, the instructor can progress to the resultsstate. If a student has yet to play and the instructor moves on, a defaultaction of cooperating is assigned. 
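The play-state bookkeeping just described (secret pairing, a bot partner when the class size is odd, and a default of cooperation for anyone who has not pressed C or D) might be sketched as follows; the names are hypothetical and this is not code from Rhombus:

```typescript
// Rough sketch of the play-state bookkeeping described above (hypothetical
// names, not Rhombus internals).
type PDChoice = "C" | "D";

function pairPlayers(checkedIn: string[]): [string, string][] {
  const shuffled = [...checkedIn].sort(() => Math.random() - 0.5); // crude shuffle for illustration
  if (shuffled.length % 2 === 1) shuffled.push("bot");             // everyone gets a partner
  const pairs: [string, string][] = [];
  for (let i = 0; i < shuffled.length; i += 2) {
    pairs.push([shuffled[i], shuffled[i + 1]]);
  }
  return pairs;
}

function resolveChoices(checkedIn: string[], played: Map<string, PDChoice>): Map<string, PDChoice> {
  const resolved = new Map<string, PDChoice>();
  for (const id of checkedIn) {
    resolved.set(id, played.get(id) ?? "C"); // unplayed students default to cooperating
  }
  return resolved;
}
```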
In the results state, shown in Figure 2.4,students see how they fared in the game, as well as how the class behavedoverall.2.7 Grid ApplicationPrior to initial trials of playing games with clickers, we recommend usingthe Grid application to familiarize students with how the system works. InGrid, participants are represented as avatars on screen and placed in a grid,ordered lexicographically by alias. These avatars appear on screen upon thesystem receiving the first button press from a clicker, causing the grid toexpand as more users participate. Pressing buttons on their clickers causesthe user’s avatar to change colour, display the letter of the button pressed,and to animate as described in Table 2.5 and shown in Figure 2.5. Theintent is that users can get used to seeing their avatar and gain confidencethat indeed their clicker is connected to it since the feedback time betweenpressing a button on their clicker and seeing their avatar respond is shortenough (less than 160ms) to give a sense of causation between the two actions[56].The architecture we created to inform the development of systems thatcan support the applications described here follows in the next chapter.21Button Colour AnimationA Green Pulse. The avatar cycles between getting larger andsmaller.B Blue Bounce. The avatar moves up and down in a bouncingmotion.C Purple Shake. The avatar moves left and right in a shakingmotion.D Yellow Swing. The avatar swings left and right as if there wasa pivotE Red-orange Wobble. The avatar moves left to right following aslightly circular pathTable 2.5: The colours and animations of avatars in the Grid applica-tion. The animations were taken from Eden’s Animate.css [30]and were selected for their distinguishability.22Figure 2.5: The Grid application. In this application, user avatarschange colour and animate in response to the buttons pressed ontheir respective clickers. The letter of which button was pressedis also displayed. The application is typically used to re-acquaintusers with the mechanics of the system.23Chapter 3The Classroom SynchronousParticipation SystemArchitectureWe have developed Classroom Synchronous Participation System (CSPS), anarchitecture for supporting interactive activities in classrooms. This genericarchitecture can be implemented in various ways. One such implementation,Rhombus, is discussed in Chapter 4. The architecture, summarized in Figure3.1, has five components: participant input, an identity manager, a sessionmanager, the instructor’s controller, and views of the application.3.1 The Participant Input ComponentThe participant input component of the CSPS architecture deals with thelow-level details of all incoming activity from students in the class. Thiscan be anything from clicks entered using SRS clickers, to SMS text mes-sages, tweets, or even interactions happening remotely on a student’s laptopthrough a web-based interface or a custom application. The data from thevarious sources is streamed in a uniform manner to the Session Manager,optionally after being interpreted and transformed by the Identity Managerto conform to the security and privacy policies that are in place.24Identity ManagerApplication ViewsParticipant InputInstructor ControllerApplicationSession ManagerApplicationFigure 3.1: The Classroom Synchronous Participation System archi-tecture. Participant input (e.g., clicker votes) flows to theSession Manager, optionally being transformed by the IdentityManager to change the ID used for the input (e.g., clicker IDsbecome names). 
Instructor controls flow into the Session Man-ager where they are directed towards relevant application views,along with participant input. Optionally, the Session Managercan direct data (e.g., student clicks or server state) towards theInstructor Controller. Application logic is loosely defined, allow-ing for it to be implemented in either the Instructor Controlleror the Session Manager depending on need and scale.The CSPS architecture isolates all of the device-dependent and commu-nications details such as device drivers and networking or wireless protocolsfrom other components so those components can be implemented withoutregard to the specific input devices that are being used. This means thatpedagogical issues can be dealt with without regard to the specifics of theSRS hardware being used or the interaction techniques that students utilize.While this separation of concerns could potentially cause usability problems,the underlying assumption is that a particular SRS (such as the i>clickerSRS described in Chapter 1) will largely determine the low-level interac-tion techniques leaving only the high-level logic of the in-class activity to bedefined by applications that are implemented within the architecture.The primary task of the Participant Input component is to encapsulateall of the low-level interactions with the devices that students use and present25those interactions as a stream of virtualized “clicks” in a uniform way to thedownstream components in the CSPS architecture. Virtual clicks may retainattributes that identify the particular hardware or software that originatedthem, but usually this will be ignored by later components in the systemother than perhaps being collected for statistical purposes to assess usagepatterns. In principle there would be no semantic differences between virtualclicks that come from an i>clicker, an SMS text message, or a web-basedinterface running on a student’s laptop computer. We describe these inputsas clicks simply due to the posturing of using i>clickers as the primary input,however the architecture supports any form of input being sent across thesystem, allowing implementations to enforce their own local restrictions asnecessary.3.2 The Identity Manager ComponentThe Identity Manager is a system that intercepts virtual clicks sent by theParticipant Input component enroute to the Session Manager componentand transforms the identity of the participant into something more usableby the Session Manager. For example, the Identity Manager might interceptclicks from an i>clicker base station that contain clicker IDs and transformthe clicker IDs into student names or student ID numbers, so that whenthey are displayed in the application they are more readable or so theycan be used to interface with marking software that expects student IDnumbers, not clicker IDs. The Identity Manager should be able to acceptmultiple streams of input from different instances of the Participant Inputcomponent and combine them into a single output stream that is sent to theSession Manager, enabling it to serve as an aggregator even in cases whereidentity mapping is not needed. In simple systems where there is only asingle Participant Input component, and thus no need for aggregation, andwhen there is no need for identity mapping, the Identity Manager can beexcluded. 
In these cases, the Participant Input component connects directlyto the Session Manager.This component of the CSPS architecture is a mechanism for a university26registrar to ensure that students receive consistent identifiers across multiplecourses throughout their academic careers and to isolate the administrativeaspects of this from all of the other components in the architecture. Havinga single location where this mapping is done reduces the need for studentsto reconfigure their identities for each use of a CSPS while avoiding theneed for a university to adopt a single approach to either the low-level inputmechanisms or the high-level pedagogical approaches, which are the concernof other components in the architecture.A second benefit of having the Identity Manager centrally located is thatuniversity information technology staff can be assured of the security andprivacy capabilities of the system by inspecting only one trusted compo-nent. This could be especially important if in-class activities are not justpedagogically motivated, but are also part of research activities that requireapproval by behavioural research ethics boards. A suitably designed Iden-tity Manager can allow researchers to see only anonymized identifiers forstudents who participate in research studies, without having any access atall to the actual identities of students. This would relieve researchers of theneed to repeatedly convince a university ethics board that their software canbe trusted.3.3 The Session Manager ComponentThe Session Manager component oversees the coordination of participant in-put and instructor controls for specialized applications that support in-classactivities. It receives the input from participants, possibly transformed viathe Identity Manager, and passes it along to the relevant application con-trollers and viewers. A Session Manager could be designed to support onlya single session, but the architecture is intended to scale up to simultane-ously support multiple concurrent sessions across an entire university in anenterprise system.In the enterprise setting, participant input and instructor controls wouldcome bundled with information about which session they belong to andwould be routed accordingly to the applications and viewers associated with27the session. There are multiple levels of granularity that could define a ses-sion, which is left up to individual implementations to decide. For example,a session could be an in-class lecture in a specific classroom at a specifictime for a specific course, or it could be a distributed lecture across multiplelocations, in which case only a course and a time would be specified. In allcases, the session manager coordinates the student and instructor input sothat the applications that implement an in-class activity only receive datarelevant to their session.3.4 The Instructor Controller ComponentThe Instructor Controller component is a special type of input to the sys-tem, different from the typical participant input. It is where the instructorcontrols the applications active for a given session. Usually this will be donethrough a web-based interface or a dedicated application in order to pro-vide rich functionality beyond the rather limited input capability providedby SRS technology. Using the Instructor Controller, an instructor shouldbe able to select which application(s) will be loaded, set the configurationparameters for the application, and control the application and the variousApplication Views during the in-class activity. 
The Instructor Controlleraccomplishes this by sending commands through the Session Manager thatare then routed to the appropriate applications and viewers. This automat-ically updates the Application Views appropriately without the InstructorController needing to deal with the low-level coordination of the variousapplications and viewers.Each in-class activity is implemented by a mini-application that is specificto the activity, with the idea being that the mini-apps run inside of the overallactive system (i.e., they do not require switching to new software to activate).The architecture does not fully define where the high-level logical flow of anapplication is located. It is possible to store application logic and state inthe Instructor Controller component. In this case, the participant inputs willflow through from the Session Manager to the Instructor Controller, so thecontroller can determine what to do with them and then update the views28accordingly by communicating back via the Session Manager. This methodreduces the complexity of the Session Manager by off-loading some work intothe Instructor Controller. This choice is recommended when there is a singleSession Manager and multiple Instructor Controllers (e.g., at the enterpriseSession Manager level).The other option is to store application logic and state in the SessionManager and have the Instructor Controller only provide input to the ap-plication. This choice is recommended when the system has most of itscomponents bundled, for example on the instructor’s laptop.We advise against having the application logic placed directly in theviews because maintaining multiple instances of application state raises thepossibility of inconsistent behaviour being displayed when randomness isinvolved. This can lead to duplicate views not exhibiting identical behaviour.3.5 The Application Views ComponentThe Application Views component comprises the basic display modes of thesystem. Application views receive updates on what to display for an applica-tion via the Session Manager. They should contain as little application logicas possible to ensure consistency across multiple instances of the same ordifferent viewers (as explained at the end of the previous section). Typicallythere may be only a single view available that is projected onto the shareddisplay in a classroom, but in other use cases it may be that each studentor each group of students has access to a custom display tailored to theirparticular role or perhaps their invididual needs.One example of an application view is a display of how voting is progress-ing for a basic SRS multiple-choice question. The display on the classroomprojector would indicate the percentage of students who have voted andthe time remaining to cast votes. Optionally, the display might also showa histogram of how many votes have been cast for each answer and (oncevoting is done) the correct answer(s) for the question. This is the extent ofthe functionality provided by the vendor-provided software for the i>clickerSRS.29A more sophisticated application view might be a web-based interfacethat provides feedback tailored to a specific student about his/her perfor-mance, such as the the student’s cumulative “score” during an interactivequestion-and-answer session, the sequence of answers that the student pro-vides to each question, and an option to see an explanation of why the correctanswer is the correct answer. 
Each instance of this application view would be available only to the specific student for which it was created, with access control determined by the Session Manager, perhaps using information provided by the Identity Manager.

3.6 A Simple Example of the CSPS Architecture

An implementation that provides the basic functionality of the i>clicker SRS would have a simple Participant Input component with a software driver for the USB-connected base station. Clicks collected there would be passed on to the Identity Manager, where the hexadecimal clicker IDs are converted to student ID numbers using a simple .csv file that has one column of clicker IDs and a second column with the corresponding student IDs. The Session Manager would turn voting on and off and would save the votes in a second .csv file for subsequent processing by other software that might score the votes and then upload the results to the university's enterprise learning management system. The Instructor Controller would be limited to starting and stopping votes, enabling and disabling the available application views, and controlling the forward and backward progress of the slide presentation. There might be three Application Views: a status display indicating whether voting is on or off and the time remaining to vote, a histogram of the votes cast so far, and information about the correctness of each of the possible answers. The application logic could allow the instructor to dynamically specify whether the histogram is displayed during voting, and to make a mode selection for whether the correctness display is automatically revealed as soon as voting ends.

In this case the application logic and state would probably be within the Session Manager, which would be the main component of the system. Each Application View would be isolated to a custom output-only GUI widget implemented using a standard view-controller paradigm, and the Instructor Controller would be a GUI widget resembling a standard dialogue box where options could be selected using on/off buttons, menu selection, or text entry. The instructor would interact with the Controller using a mouse and keyboard on her laptop. Auxiliary control via the instructor's i>clicker would be achieved by the Session Manager recognizing the unique hardware ID of the instructor's clicker (set using the GUI dialogue box) and forwarding only those clicks to the Instructor Controller, which would then simulate the corresponding GUI actions to turn voting on or off, show the histogram, or advance to the next or previous slide by requesting that the Session Manager send the appropriate right-arrow or left-arrow keyboard event to the external software (such as PowerPoint) that is providing the slide presentation.

Shi [65] provides an overview of how software similar to this example was implemented in a platform-independent manner for the i>clicker hardware to achieve functionality equivalent to the vendor-provided platform-dependent software. His implementation does not fully conform to the CSPS architecture, but could be readily adapted to it by refactoring some of the code.
It does already isolate the details of the i>clicker hardware into a low-level driver module, but the other components are more integrated with eachother and thus lack the full degree of modularity that the CSPS architectureenvisions.3.7 A More Complex Example of the CSPSArchitectureIn this example, we consider a more complex implementation of the CSPSarchitecture where there are multiple student inputs and an enterprise-levelSession Manager component. The example involves a system that can loadvarious applications via the Instructor Controller, but focuses on a singleapplication that gives students practice inserting into a binary search tree(BST), as inspired by Shi [65]. In this application, students are split intogroups based on lab section and have to work together to control the insertion31process. Each lab section has their own BST to work on, visualized onscreen. Given a node to insert into the tree, their inputs allow them tochoose whether to go to the root of the tree, go to the left child, go to theright child, insert the node as the new left child, or to insert the node asthe new right child. Once enough students in the lab section select the samechoice, passing a predefined threshold, that choice is made.This implementation would use two Participant Input components: onefor clickers and one for smartphone input. The clicker component would,similar to the previous section, use a software driver to receive input fromthe clicker base station. Additionally, it would encode with the clicks acourse identifier, which would be passed to the Identity Manager componentwith the click data. The smartphone component would allow students to usea mobile app to interact with the system. In the app, they would registeran authenticated account and select which courses they would be using thesystem in. When using the app, students would select the course they arecurrently in and then press buttons on screen representing choices to bemade in the active application (the equivalent of clicks on a clicker). Whena button is pressed on screen, the app would send the choice, the user ID,and the course ID to the Identity Manager component.The Identity Manager component would convert the clicker and app userIDs to student IDs via database lookup in a table consisting of clicker ID,app user ID, and student ID columns. Using the student ID and the courseidentifier, the data would be augmented with which lab sections of the coursethe student was enrolled in via lookup in a different table in the databaseconsisting of student ID, course ID, and lab section columns. The datapassed on to the Session Manager component would include all three parts(the student ID, the course ID, and the lab sections) for the given student.The Session Manager component would maintain a list of active sessionsin memory that are identified by course ID. Each session would have a con-nection to a single Instructor Controller component. When data arrives fromthe Identity Manager component, the course IDs are examined and are usedto route the data to the Instructor Controller component in the session withthe corresponding course ID.32The Instructor Controller component would require the instructor tospecify the course ID upon initializing. Once specified, the instructor wouldbe able to select an application to load in the system. 
Beyond loading theapplication, the Instructor Controller component would allow the instructorto activate different controls depending on which application was loaded, acommon one being opening Application Views that are associated with thesession. In the BST example, the instructor would be able to enable or dis-able student input, navigate between different BST scenarios, and to resetthe active scenario to its initial state. The participant input would flow intothe Instructor Controller component from the Session Manager component,and would be used to update the state of the active application. In theBST example, it would threshold the inputs, updating the state of each labsection’s BST when enough students had entered the same choice.The Application Views would depend on the active application in thesession. In the BST example, there would be two types: one displayinga single shared scoreboard showing the time elapsed and number of errorsmade by each lab section, and one that indicated a current lab section’sBST state. There would be one of each of the latter type for each section inthe class. These views would receive the data they used to draw from theInstructor Controller via the Session Manager.In this implementation, the Instructor Controller contains the applicationlogic, and the Session Manager is simply used to route data to the variouscomponents. The Instructor Controller would be a standard GUI interfacewith various widgets, such as buttons and text-input fields, for loading andconfiguring applications. It would primarily be interacted with by keyboardand mouse, but similar to the previous example, would also be able to becontrolled by the instructor’s clicker. The Session Manager component wouldbe a server hosted somewhere on the university’s network that was accessiblefor the Instructor Controller and Application Views to connect to. TheApplication Views would be a basic GUI interface that allowed connecting toa session and when connected, provided output-only displays of applicationstate. By indirectly connecting the Application Views with the InstructorController via the Session Manager, students in the class would also be able33to load the active application’s views on their own devices.3.8 Prior Work That Informed CSPSShi [65] created a standalone driver for the i>clicker base station that en-abled flexible use of clickers as input devices. He constructed a system calledWebClicker that allowed users to input clicker votes from various devices,aggregate them via the web, and then deliver them to client applications overa single network socket. This modularization of input informed our separa-tion of the Participant Input component from the application logic areas ofthe CSPS architecture. In our approach, the WebClicker functionality is re-placed by two components: the Participant Input and the Identity Manager.The Participant Input component maps to the various input modalities par-ticipants use to interact with the system (clickers, web, smartphones, etc.),while the Identity Manager component handles the aggregation of the input.However, the Identity Manager goes a step further, providing one locationwhere users assume a single identity, despite accessing the system via differ-ent devices.One of Shi’s initial extensions to i>clicker usage was to provide novelvisualizations of the responses students provided in class [65]. 
Instead of using the default histograms, which showed bars representing the number of responses split by button, Shi created a histogram that displayed stacked bars for each lab section in the class. Each of these bars was split in two, with one half representing the proportion of users who answered correctly and the other half representing those who answered incorrectly. The purpose of this modification was to alleviate the issue of students switching to whichever answer has the majority of votes, while still being able to provide visual feedback about their responses. This work by Shi, along with his other novel views for binary search tree insertion, linked list algorithms, and selecting portions of the screen (all described in [65]), inspired us to allow flexible, varying views in the CSPS architecture. In Shi's work, the views were tied directly to the instructor's laptop where the software was being run. Our architecture loosens this coupling by indirectly linking application logic with the views, which allows us to support local views on an instructor's laptop as well as remote access.

Newson's Clic^in system [58], extended later by Shi [65], used clicker input to drive a variety of activities, but focused primarily on a presentation architecture that integrated clicker activities with lecture slides to mitigate the awkwardness of switching between the two.

Another aspect of Shi's work that informed our architecture was the introduction of roles associated with clicker IDs. Shi used a simple external file to map clicker IDs to roles (e.g., Student, Instructor, or Demonstrator) and to their lab section number if they were students [65]. This enabled support for having multiple users gain control of the applications by assigning each of their clickers the Instructor role, which may be useful for administering complex applications should there be additional course personnel (e.g., teaching assistants) available to assist. In our architecture, this can be achieved at a number of locations, but most naturally at the Identity Manager and Session Manager components. At these junctures, auxiliary information of any sort can be attached to clicks or other user input prior to being interpreted by the applications.

An illustration of how the CSPS architecture described here can be employed is provided in the next chapter, which describes our design and implementation of the Rhombus CSPS.

Chapter 4
Rhombus Classroom Synchronous Participation System

Rhombus Classroom Synchronous Participation System, or Rhombus for short, is an implementation of the architecture that was presented in Chapter 3. A diagram of the system is shown in Figure 4.1. All of the activity-specific logic and visualization computations take place in a Web Framework layer that allows students and instructors to use platform-independent web browsers to view and control the applications. These browser-based components implement the roles of the Instructor Controller and Application Views in the CSPS architecture. An instructor typically places one web browser window with the main application view on the shared classroom projected display for the students to see, and another browser window on the instructor's laptop is used to control the activity. The Web Framework is hosted on the Web Server, which is the Session Manager component in Rhombus. It facilitates communication between the web browser windows and the participant inputs.
The inputs are streamed to the Web Server over a network socket by the Clicker Server, which connects to the i>clicker base station hardware in the classroom and receives clicks from student clickers. The ID Server is an optional intermediate server that sits between the Web Server and the Clicker Server in order to translate clicker IDs into aliases if it is important to anonymize the identities of students or to make the IDs more salient.

Figure 4.1: Overall architecture of Rhombus Classroom Synchronous Participation System. The Clicker Server receives i>clicker clicks from the base station and transmits them over a network socket to the ID Server. There, the clicker IDs are replaced with aliases, and the data is transmitted to the Web Server. The applications themselves are viewed in web browsers, which receive data from the Web Server via the Web Framework over WebSocket connections.

4.1 Clicker Server

The Clicker Server is a multi-client Java server whose primary purpose is to broadcast i>clicker clicks to all connected clients. It implements the Participant Input component of the CSPS architecture: it transmits clicks received from the base station over a socket connection and forwards commands to the base station that are received as messages over the socket. This effectively isolates the i>clicker hardware from the rest of the system and opens the possibility of using other types of devices instead of, or in addition to, i>clickers. All messages sent over the socket to and from the Clicker Server are in JavaScript Object Notation (JSON) format. This format was chosen due to its relative compactness (compared to Extensible Markup Language or XML), its natural integration with web technologies (JSON format translates directly to JavaScript objects in browsers), and its robustness and flexibility (JSON supports many different types of data, including collections).

The Clicker Server makes use of a custom clicker driver, originally developed by Shi, who reverse-engineered the vendor-provided i>clicker hardware protocol [65]. Shi's driver has been extended to support live plug-and-play for i>clicker base stations, auto-detection of the i>clicker base station version, and automatic system detection (supporting Mac OS X, Windows, and Linux) for loading the correct native libraries.

    Command          Description
    choices          Send a collection of i>clicker ID, choice, and timestamp tuples (outgoing only)
    enable choices   Open voting on the base station
    disable choices  Close voting on the base station
    status           Send the instructor i>clicker ID, whether voting is open or closed, the current time, and the number of servers connected over the socket
    ping             Send an empty response
    choose           Input to the server a choice tuple that is output as if it were from an i>clicker
    instructor       Set the ID(s) of the instructor clicker(s)

Table 4.1: The commands accepted by the Clicker Server. These can be sent over a network socket or directly via standard input.

The Clicker Server supports broadcasting clicks to multiple clients, commands to toggle the accepting of clicks from clickers, and a command to simulate clicks.
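To make the message format concrete, the following sketch shows what this JSON traffic might look like. The command names come from Table 4.1, but the particular field names and the second clicker ID shown here are illustrative assumptions on our part rather than the exact wire format.

    // Illustrative only: field names ("type", "id", "choice", "time") are assumed.
    // An outgoing "choices" message carrying i>clicker ID, choice, timestamp tuples:
    { "type": "choices",
      "choices": [ { "id": "371BA68F", "choice": "C", "time": 1409269000000 },
                   { "id": "41B2C3D4", "choice": "D", "time": 1409269000150 } ] }

    // Incoming commands, e.g., opening voting or simulating a click from a web clicker:
    { "type": "enable choices" }
    { "type": "choose", "id": "Web01", "choice": "A" }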
Being able to simulate clicks via a message received overthe socket allows the user to create test harnesses to simulate real clickersinteracting with the system (e.g., a web page with a virtual representationof a clicker on it, or a script that generates click commands over the socket)without those listening for clicks knowing anything is different about them.The commands the server accepts are shown in Table 4.1. These commandscan be sent to the Clicker Server as messages over the socket, or directlyinput to the server via standard input.38Unlike the vendor-provided software, the Clicker Server supports havingmultiple instructor i>clickers being used simultaneously (e.g., an instructorand one or more teaching assistants or other instructors can have administra-tive control over the classroom activities). The base station hardware allowsfor only a single privileged clicker ID, typically the instructor’s clicker, thatwill always have its clicks communicated to connected software regardless ofwhether the voting window is open or closed. This design prevents errantstudent clicks from having any effect when votes are not being recorded, andallows the instructor’s clicker to function as an administration device thatcontrols slide navigation and toggles the voting window. When multiple in-structors are active on the Clicker Server, voting is always enabled on thebase station to work around the hardware’s limitation.Instead of relying on the base station hardware to be the gatekeeper ofclicks, the Clicker Server enables or disables processing student votes basedon the software’s knowledge of the current state of the system. This fil-tering allows the multiple instructors’ choices to be sent over the socket ascommands to the higher layers in the software (e.g., to the Web Server),emulating what the hardware does for a single instructor. This means thatstudent i>clickers work slightly differently when multiple instructors areenabled: clicks made when voting is disabled will show a green light eventhough the choices will not be accepted. This is an unfortunate side effect,but we considered this a good trade off because of the usefulness of allow-ing multiple instructors for large classes. When only a single instructor isconfigured, the i>clickers vote status light behaves the same as it does withthe vendor provided software. This inconsistent experience from the studentperspective is undesirable, and future versions of the software may defaultto always using the behaviour that supports multiple instructors.The Clicker Server accepts filters for affecting the inputs and outputs ofthe server that are added via Java’s service plugin system, with the multipleinstructor feature being supported by one of them. This architecture decisionallows the expansion of new features to the server without modifying the codeitself. The filters are implemented as Java classes that conform to a simpleinterface consisting of three methods: initialize, input, and output. The39initialize method is called when the Clicker Server is initializing itself, theinput method is called every time the Clicker Server receives input (eitherover the socket or from standard input), and the output method is calledevery time the Clicker Server sends output, allowing the filter to modify thedata the Clicker Server sends and receives. 
An example of another filterwould add additional roles to certain clicker IDs besides the instructor role,which may be useful for designating lesser administrative remotes such asthose of teaching assistants.Beyond filters, the server is configured by a properties text file config.properties, in which the port the server listens on, the ID of the instructor’si>clicker, and the channel the i>clicker base station listens on are specified.Multiple instructors can be configured in this file by specifying multipleinstructor clicker IDs separated by commas for the instructorId property.Future versions of the software may switch to a more verbose configurationrequiring explicit toggling of the multiple instructors option to ensure usersare aware of the changes to the student experience that take place whenmultiple instructors are enabled.All clicks received by the server and all data transmitted over the networkare logged. The main log, server.log, preserves the exact JSON messagesthat are sent over the network, including all student clicks. The clicks arealso separately logged in an abbreviated format in their own file, clicks.log,because they are typically the most interesting part of the log and are oftenused to record student performance. Both log files are automatically archivedand compressed after each day.4.2 ID ServerThe ID Server is an optional intermediate server between the Clicker Serverand the Web Server, filling in the role of Participant Identifier in the CSPSarchitecture. When present, all clicks received from the Clicker Server are in-tercepted, having their i>clicker IDs (eight-hexadecimal digit numbers suchas 371BA68F) changed to aliases (typically more readable identifiers such asnames or student IDs) through a mapping determined by a database pro-40vided by the instructor or the institution. If no mapping from a given clickerID to an alias is found in the database, the clicker ID remains untouched inthe data. After the aliases have been swapped in, the clicks are then sentto the Web Server. All others messages from the Clicker Server to the WebServer, as well as all messages from the Web Server to the Clicker Serverpass through without any modification.The reason for having the ID Server separate from the Clicker Server isto provide a clean separation of functionality to ensure security and privacyfor users. Theoretically, the ID Server can be hosted by a trusted third partythat handles the translation from raw participant IDs (e.g., clicker IDs) tothe appropriate alias (e.g., student ID, pre-assigned anonymous name, orthe “celebrity” aliases that we describe later). This feature makes it possibleto use Rhombus for behavioural research applications if the ID server isapproved for use by a research ethics board.Similar to the Clicker Server, the ID Server is also configured in a config.properties file. Administrators can configure the port the ID server listenson, and the host and port of the Clicker Server that the ID Server connectsto. Logging is done in the same fashion described for the Clicker Server.4.3 Web ServerThe Web Server is a combined HTTP (or HTTPS) server and WebSocket[35] server that connects to the Clicker Server for input. It is a Node.js [11]implementation of a Session Manager from the CSPS architecture. Wheninitialized, the Web Server continuously attempts to connect to the ClickerServer specified in the configuration until a connection is made, allowing thetwo servers to start and stop independently. 
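As a minimal illustration of this behaviour (this is not the actual Rhombus source; the function name, host, and port shown are placeholders of our own), a Node.js connection loop that keeps retrying until the Clicker Server or ID Server is reachable could look like the following:

    // Sketch only: repeatedly attempt to reach the Participant Server so the
    // two processes can be started and stopped in either order.
    var net = require("net");

    function connectToParticipantServer(host, port, onConnected) {
      var socket = net.connect(port, host, function () {
        onConnected(socket);            // connected; JSON messages can now flow
      });
      socket.on("error", function () {
        socket.destroy();               // give up on this attempt...
        setTimeout(function () {        // ...and try again a few seconds later
          connectToParticipantServer(host, port, onConnected);
        }, 5000);
      });
    }

    connectToParticipantServer("localhost", 4444, function (socket) {
      socket.on("data", function (chunk) { /* handle incoming JSON messages */ });
    });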
The main views and controlsof the system are in windows in a web browser, hence the need for it tobe an HTTP server. The system needs to support multiple views all beingmanaged by a single controller, but the views are shown in web browser win-dows, which typically operate in complete isolation. This presented somechallenges, because we needed them to communicate with each other. Theeventual solution was to have the Web Server mediate communication be-41tween the main controller window and the multiple viewing windows throughWebSockets. For instance, if the controller needed to tell the views to showthe Play state of a game, it would communicate with web browsers showingthe views by sending a message to the Web Server over a WebSocket, whichwould then broadcast the message to the connected viewers over their ownWebSocket connections to the Web Server. Upon receiving the message fromthe Web Server, the viewers would update accordingly.Our initial design used URLs to map to different applications, so if youwent to http://localhost:8000/apps/pd, the Prisoner’s Dilemma wouldbe loaded and clicks would be interpreted with respect to whatever state theapplication was in in that browser window. While this design was simple,it did not support the usage of multiple browsers to display the same appli-cation. You could open up multiple browsers to the same URL, and eachwould receive clicks from the Web Server, but they would run their applica-tions independently. This meant that if any randomization occurred in theapplication at all, the two windows would not be in sync. We saw this as amajor flaw, since we commonly wanted to have multiple views for the sameapplication, and so we re-designed our solution.The new design required a clear distinction between a Controller, thebrowser that handles all application logic, and a Viewer, a browser that sim-ply receives data to display on screen using a pre-defined view. With thisdistinction, we could have a single Controller communicate the same datato multiple Viewers in order to have multiple windows all showing the sameapplication state. It also lets us show specific views of the application statein different windows, all corresponding to the same active application. Sinceweb browser windows cannot directly communicate with one another, infras-tructure at the Web Server was added to support this, as well as to supplya connection from the Clicker Server to the Controller. The component thatmanages all of this communication is called the Manager (Figure 4.2).42Controller ViewersManagerWeb Browser Web BrowserWeb ServerWebSocket WebSocketFigure 4.2: The relationship between the Manager, Controller, andViewers. In the Web Server, the Controller communicates withthe related Viewers through the Manager over WebSocket con-nections. The primary flow of information is from the Controllerto the Viewers, but the Viewers can occasionally send updatesto the Controller if they have inputs on their screen (atypicalfor clicker applications). The Manager also sends informationfrom the Clicker Server to the Controller over the WebSocketconnection.4.3.1 Application Managers, Viewers, and ControllersManagers mediate all communication between Viewers and a Controller,using WebSocket connections to exchange messages between them. Whena Viewer or Controller initializes in a browser, it sends a “register” messageover the WebSocket connection to the Web Server, specifying the ManagerID to use, the type of browser it is (“viewer” or “controller”), and its name ifit’s a Viewer. 
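For example, the registration messages might look roughly like the following. Only the three pieces of information just listed are documented; the property names themselves are our guesses for illustration.

    // Sketch of a Viewer registering with Manager "m1"; property names are assumed.
    { "type": "register", "manager": "m1", "kind": "viewer", "name": "main" }

    // A Controller would register without a name:
    { "type": "register", "manager": "m1", "kind": "controller" }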
With this information, the WebServer looks up the Manager byID and adds the new WebSocket connection to it if it exists. If no Manageris found by the specified ID, a new Manager is created with the appropriateassociation to the WebSocket connection. This is akin to the idea of creatinga session, discussed briefly in the CSPS architecture description in Chapter3.When a Manager is first created, it establishes a connection to a pre-specified Participant Server, which is either the ID Server or the ClickerServer, but the Web Server does not know the difference. It simply connects43to a network socket and communicates following the interface specified inTable 4.1. Because all the ID Server does is change clicker IDs into aliases,it can be connected to by the Web Server in place of a Clicker Server.If the connection fails, it is retried every 5 seconds, allowing the serversto initialize independently (i.e., you can start the Web Server before theClicker Server or vice versa). Once a connection is made, pings are sentto the Participant Server every 5 seconds to ensure the connection is stillalive. If a ping fails, a disconnection must have occurred, so the Controlleris notified and attempts to reconnect every 5 seconds until the connection isrestored.Viewers use the socket message type “app-message” to send messages tothe Controller. This can be useful when there are inputs to individual viewsrequired to send updates to the main application state. There are no re-strictions on what JSON data can be sent with this message type to allowas much flexibility as possible in communication between Viewers and Con-trollers. Viewers primarily are used to simply receive data and update whatthey are displaying on screen, so it is common to only use their WebSocketconnection to the Manager to receive updates from the related Controller.Controllers can also send messages to Viewers with the “app-message”type. If the message specifies a “viewer” property, then only the Viewer thathas that name will be sent the message, otherwise the Manager broadcaststhe message to all Viewers associated with the Controller. Additionally, theController has more functionality it can use to communicate with the con-nected Participant Server. It can send “enable-choices” or “disable-choices”messages to enable and disable voting at the server level, can ask for aserver status update with “status”, and can submit choices (i.e., the equiva-lent of clicks on an i>clicker) via the “submit-choice” message. It can alsorequest a list of all connected Viewers with “viewer-list”, and will receive up-dates when new Viewers connect and disconnect with “viewer-connect” and“viewer-disconnect” messages.444.4 Web FrameworkThe Web Framework provides infrastructure to build Rhombus applicationsthat operate in web browsers. It provides many visual assets and base classesfrom which custom applications can be built, and also provides the generalrouting capabilities required to get applications loaded and in communicationwith the Web Server (and consequently, the Clicker Server).Most applications require a Controller and a Viewer, which are easilyregistered with a Manager on the Web Server by loading carefully format-ted URLs. For example, to load a Controller in Rhombus, you go to theURL http://host/managerId/controller (e.g., http://localhost:8000/m1/controller), and to load a Viewer, you go to http://host/managerId/viewer/viewerName (e.g., http://localhost:8000/m1/viewer/main). 
Inthese examples, the Manager is identified by “m1” and the Viewer is usingthe typical name “main”. The Web Framework interprets the URL to sendthe appropriate WebSocket “register” message to the Web Server to indicatea new Viewer or Controller needs to be registered. If a Controller is alreadyregistered for the Manager specified in the URL, it is replaced.4.4.1 ControllerThe main Controller interface, shown in Figure 4.3, offers the instructorcontrol over Rhombus by providing the following:• the ability to load an application from a list• review a list of current participants• navigate and review the state machine of the active application• configure the active application• open new and refresh connected Viewers• control global configuration• simulate clicks via virtual clickers in the web browser45Figure 4.3: The Controller interface in Rhombus. This is the mainview the instructor interacts with to load and configure appli-cations. It provides information about the current state of thesystem and offers controls for navigating the active application’sstate machine. Global system state can be modified, and in de-bug mode, the instructor can simulate clicker input via virtualweb clickers.46Status BarThere is a status bar at the very top of the Controller with three areas:zoom controls, the Controller ID, and Participant Server status (see Figure4.4). The zoom controls allow the instructor to adjust the zoom level ofthe Controller window independently of the other browser windows openin Rhombus. Because Rhombus runs in a web browser, the screens canalso be zoomed in with the built-in browser zoom, but all windows openin Rhombus will be affected, which is often not the desired behaviour, sinceViewer windows are commonly zoomed to different levels than the instructorcontroller. The Controller ID is primarily for debug purposes, and displaysthe manager ID (e.g., m1.controller), which is also retrievable from the URL.The Participant Server status area indicates whether or not the Web Serveris connected to the configured Participant Server, and whether or not theParticipant Server is accepting choices from participants.Figure 4.4: The status bar in the Controller interface.App SelectorIn the App Selector, all available Rhombus applications are shown as largebuttons that can be clicked to have the corresponding application loadedinto the system. Doing so will activate the initial state of the applicationand cause all connected Viewers to display the first screen of the selected ap-plication. Rhombus currently has two utility applications: Grid for warmingup (see Section 2.7) and Question for asking multiple choice questions. Italso has a suite of Game Theory applications, such as Prisoner’s Dilemma,Stag Hunt, and the Ultimatum Game. It is important to note that the ac-tual state of the application is stored in the browser window the Controlleris running in. Navigating away from the page, refreshing the page, or closingthe window will cause the current state to be erased. To prevent users fromaccidentally doing this, a warning dialog pops up asking them to confirmtheir desire to leave the page. The applications that exist are automatically47detected by the system by having directories in the standard location (e.g.,web/app/apps/[appName]/App.js). 
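A minimal sketch of how such detection could be exposed by the Web Server follows. The use of Express and the exact directory handling are our assumptions, not details taken from the Rhombus source; only the directory convention and the endpoint path are documented.

    // Hypothetical handler that lists apps by scanning web/app/apps for App.js files.
    var express = require("express");
    var fs = require("fs");
    var path = require("path");
    var app = express();

    app.get("/api/apps", function (req, res) {
      var appsDir = path.join(__dirname, "web", "app", "apps");
      fs.readdir(appsDir, function (err, entries) {
        if (err) { return res.json([]); }            // no apps directory found
        var apps = entries.filter(function (name) {  // keep dirs containing App.js
          return fs.existsSync(path.join(appsDir, name, "App.js"));
        });
        res.json(apps);                              // e.g., ["grid", "pd", "question"]
      });
    });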
The Controller runs a web service callto the Web Server at /api/apps to get the list.Participants List and LatecomersThe Participants area of the Controller displays the aliases of all currentlyactive participants, along with their count, and if applicable, a list of queuedparticipants who have been recognized by the system, but are not yet activein the application. The reason for this separation is that sometimes it doesnot make sense to allow latecomers to join in applications midway through.For example, if all teams have been balanced and partnered already, addingin a new user and re-balancing the teams can have undesirable side-effects,depending on the application. To prevent this from happening, users thatclick in when not already active within the application may end up in aqueue, waiting for the next opportunity to be added in by the system.Each state of an application has the ability to automatically add in late-comers, but it is optional. If the current state does not support addinglatecomers, they get queued until a state of the application is reached wherelatecomers are added in on load. Should the instructor wish to forcibly addin the latecomers, as may be necessary if the state only adds those queuedupon loading and others join in later at a safe time, a button below the listof queued participants can be clicked to add them to the currently activeparticipants.Due to this behaviour, we have discovered that when secret partneringis required in an application, that it be done at the latest possible time. Inour current versions of the Prisoner’s Dilemma, we partner prior to the playphase, disabling our ability to add participants while the play phase is active.We could have just as easily partnered after the play phase had completed,but prior to scoring, to allow latecomers to seamlessly join the experience.48State MachineMost Rhombus applications are created using a state machine architecture(see 4.4.3). In the State Machine area of the Controller interface, each stateof the active application is shown as a pill-shape with arrows pointing inthe direction that they flow in. Currently, Rhombus only supports a singlenext and previous relationship, although it is possible to develop states thatdetermine their next state on the fly. The active state is shown in green.Should a state contain multiple states nested within it (e.g., a phase statecontains multiple round states), they are represented that way geometrically.Applications can move from state to state automatically or manually, bypressing the Next State or Previous State buttons below the state machine,or by pressing the button C for next state and D for previous state onthe instructor’s remote. This mapping was decided upon because it matcheswhat the default i>clicker software uses for advancing and backing up slides.When the next state is activated, if necessary, the state machine area willautomatically scroll horizontally to make it visible on screen.ConfigurationMany applications support configuration, which can be done in the Con-troller once the application has been loaded (Figure 4.5). The Configurationpanel is initially collapsed, as it can be quite long, but can be toggled byclicking anywhere on the header. Each configuration panel contains a mes-sage field to allow the instructor to write a reminder message as to whythis configuration was set, which will show up in the log files of the ap-plication. 
Otherwise, configuration fields can be manually specified by theapplication or automatically generated from a configuration object used bythe application already. Updating the configuration does not change previ-ous application state. All views that are currently active will update as soonas the “Update Configuration” button is clicked.49Figure 4.5: The configuration panel for team-based Prisoner’sDilemma in Rhombus. The instructor can configure the namesof the teams, the scoring matrix and the number of rounds foreach of the three phases of the game. This panel was constructedautomatically based on the properties of a configuration object,but can be modified to support custom styling and inputs (e.g.,placing the matrix inputs in a matrix format).Viewers ListThe Viewers section shows all of the Viewers that are actively connected tothe same Manager that the Controller is using. Typically, each Viewer listedrepresents an open web browser that is displaying a view of the application.They are listed by name with the plans to allow differently named viewsto receive different information from the Controller, or at least to displaydifferent things. Currently this is not implemented, so messages from theController are broadcast to all connected Viewers. A refresh icon is shownnext to each Viewer, which can be clicked to send a message to the Viewersthat causes them to redraw. Two convenience buttons are provided to createnew browser windows that open to a URL that maps to the main Viewerfor this Controller, or to an instructions Viewer. The instructions Viewer is50a specially named Viewer that was added to quickly create a window thatonly displays the application’s instructions and nothing else.Global Configuration and ToolsThe “Controls” area is a very important part of the Controller interface.Most importantly, it allows the instructor to enable choices on the ClickerServer by clicking the “Not Accepting Choices” button. The current state ofthe Clicker Server is shown in the status bar (whether it is Connected or not,and whether choices are being accepted or not), and is partially duplicatedon this button. A red circle indicates a negative state, and a green circlea positive state. After attempting to enable choices on the server, if it issuccessful, the button will have a green circle and read “Accepting Choices”,and the status bar will have updated correspondingly. This is necessary forany clicks to come into the system besides those from the instructor’s remote.Another useful tool is the “Instructor Controller Enabled” button. Whenenabled, the instructor’s controller is treated as a controller to the inter-face: A toggles accepting choices, C goes to the next state of an application,and D goes to the previous state. Sometimes, however, it is useful to usethe instructor’s clicker as a normal clicker, especially when demoing an ap-plication, since the instructor’s clicks will come through even when clicksfrom students cannot. This means that an instructor can leave acceptingchoices disabled, disable the “instructor controller” via the button, and thendemonstrate using an application.The only utility currently provided by Rhombus is a countdown timerthat allows the instructor to display a timer on all open Viewers in the upper-right corner. 
This can be useful when an instructor wants to let studentsknow how much time is left for them to enter their responses before movingon to the next state, causing those who have not yet responded to assumethe default choice.51Figure 4.6: When activating the instructor controller in Rhombus de-bug mode, virtual web clickers become available at the bottomof the page. The IDs of these clickers are of the form WebXXwhere XX is a number. Clicking the buttons on the clickerssends the clicks to the attached Clicker Server, which then sendsthem back up the chain to the Web Server just as if a real clickerhad submitted a vote.Debug ModeWhen testing out applications in Rhombus, it is typically inefficient to contin-uously test with actual clicker devices. To make the debugging process moreexpedient, a debug mode for the controller exists and can be accessed bygoing to a URL of the form: http://host/managerId/controller/debug(e.g., http://localhost:8000/m1/controller/debug). At the bottom ofthis page, a section entitled “Web Clickers” is visible, with convenience but-tons for adding web clickers and causing them all to vote one way or another(Figure 4.6). There are also keyboard shortcuts for accessing these buttonsas well as the next and previous state buttons, which expedite the processfurther. The Web Clickers themselves work by submitting choices all theway to the Clicker Server via the Session Manager, which then transmitsthem as if they are normal i>clicker inputs, ensuring an accurate debuggingexperience.524.4.2 ViewersAs described previously, a Viewer can be loaded by going to the URL http://host/managerId/viewer/viewerName (e.g., http://localhost:8000/m1/viewer/main), which is typically done by clicking the “Open New MainViewer” button in the Controller. Upon doing so, the browser will displaythe view for the current state of the active application. There is no set limitto the number of windows that you can open to create new Viewers. Typicalusage is to have the Controller open in a browser on the instructor’s screen,while a single Viewer window is opened and placed on an external display(e.g., a projector). If multiple external displays are available, they can eachhave their own Viewer window, possibly showing different parts of the appli-cation. Closing Viewers at any time does not have any impact on the stateof the application.If there is no active application when a Viewer is loaded, the browserwindow will display the name of the Viewer and an “awaiting view” message.Once an application has been selected, the Viewer will be automaticallynotified via the WebSocket connection and the new view will be loaded.The reason why we display the name of the Viewer when no application isselected is to facilitate placement of the different windows. When Rhombusis upgraded to support sending specific messages to specific views, it will behelpful to configure them accurately prior to loading the application. Theonly current situation where this is necessary is the distinction between the“main” viewer and the occasionally used “instructions” viewer.Similar to Controllers, Viewers also all display a status bar at the topwith a zoom control and the ID of the window (e.g., m1.viewer.main). How-ever, the Viewer’s status bar does not show the Connected status to theParticipant Server, it only shows whether or not the system is currentlyaccepting choices from participants.4.4.3 State ApplicationsThe primary structure used in Rhombus to construct applications is a statemachine. 
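Before describing the pieces in detail, the following sketch gives a rough sense of the shape of such an application; it corresponds to the basic Prisoner's Dilemma application described next. The State and ViewState classes are explained below, but the construction API shown here is illustrative only and not the actual Rhombus code.

    // Illustrative only: a linear chain of states for a simple clicker game.
    // Logic-only states advance automatically; ViewStates wait for the instructor.
    var states = [
      new ViewState("attendance"),  // participants check in with their clickers
      new State("botcheck"),        // add a bot if an odd number checked in
      new State("partner"),         // randomly pair up participants
      new ViewState("play"),        // wait for C (cooperate) or D (defect) clicks
      new State("score"),           // score each pair's choices
      new State("stats"),           // compute overall statistics
      new ViewState("results")      // display and log the outcome
    ];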
Each application consists of a series of states connected to one another in a linear way. States are intended to handle all the logic of the application, and are only known to the Controller. It is common for states to have an associated view, which Viewers use to show relevant application views, but not all states have them. If a state solely exists to process data, it automatically moves on to the next state when it is complete, until a state with a view, a ViewState, is reached. ViewStates wait until certain conditions are met before moving to the next state in the application; typically the instructor clicks the Next State button (or presses C on the instructor clicker), but the condition could also be determined programmatically.

For example, in the simplest form of Prisoner's Dilemma, the state machine consists of the following states (Figure 4.7):

1. Attendance (ViewState): Participants use their clickers to check in to play the game
2. Botcheck: If there is an odd number of users, a bot is added to the list of participants to ensure everyone has a partner
3. Partner: All participants are matched with a randomly selected partner
4. Play (ViewState): Participants can now play the Prisoner's Dilemma, clicking C to cooperate and D to defect
5. Score: The results from the play phase are calculated for each participant
6. Stats: Overall statistics for the game are computed
7. Results (ViewState): The results from the game are displayed on screen, and are logged to the server

Figure 4.7: The state machine of the basic version of Prisoner's Dilemma. This version of the game only has three primary states: attendance, play, and results. During these states, the state machine pauses and allows participants to interact with the system. In between them, logic states (botcheck, partner, score, and stats) are run to maintain the proper internal state of the application.

Note that there are many ways to divide the states up, but we have found that having simple states that do a single operation allows for the greatest reusability.

Rhombus provides a number of basic states that can be used to build complex applications:

• State: The basic State object that provides the bare-bones structure needed. This object is typically extended for use with states that only do processing and do not have a view (e.g., botcheck or partner).
• ViewState: An extension of State that adds the behaviour of rendering a view on connected Viewers and does not automatically move to the next state in the machine (e.g., play or results).
• MultiState: A state that essentially has its own internal state machine nested inside of it, which is useful when you need to reuse blocks of states (e.g., a round state that contains play, score, and results states).
• RepeatState: A modification of the MultiState with conveniences implemented to allow repeating a single state a specified number of times. This is intended to be used with repeated occurrences of the same state (e.g., trials in an experiment, or a phase state that contains a number of round states nested within it).

While attempting to support moving back to previously seen states, we ran into issues with the participants being modified by future states and thus not properly repeating the previous states. To fix this, we created a StateMessage object, which is now used as the standard vessel of input and output between states. Each state holds on to a copy of the input it previously received, ensuring that when it is loaded again by the
Each state holds on to the input itpreviously received as a copy, ensuring that when it is loaded again by the55Figure 4.8: The avatar representation of user with alias “jobs” in var-ious states during Prisoner’s Dilemma. From left to right, theavatar is successfully checked-in during attendance, has yet toplay during the play state, has played during the play state, andhas its score shown during the results state.instructor moving back to a previous state, it can reload the input it hadreceived initially and be in the same state as it was in the first time.4.4.4 User RepresentationIn Rhombus, one of the most common ways to depict users on screen isto use an avatar. Avatars, shown in Figure 4.8, are widgets that consistof a photograph as a backdrop, the user’s alias, and any relevant feedbackthe user may need for the given application (e.g., whether or not they haveplayed or their score). In many of the apps developed for Rhombus, avatarsof all active participants are displayed in a grid on screen. Users find theirplace on the screen by locating their alias and photograph and can directtheir attention to this smaller region of the screen to interpret their ownindividual feedback.The following chapter provides a description of an application we created,Sequence Aliaser, that allows a large number of users of Rhombus to registertheir clickers with aliases in parallel.56Chapter 5Sequence AliaserThe ID Server comes packaged with an application called Sequence Aliaser,which allows users to have their i>clicker -alias mapping created interactivelyby entering a predefined sequence on the clicker itself. The goal was toprovide a way to have clickers that are not known to the system prior tousage quickly assigned an identity to make using the system more enjoyable.It is easier to find a name on screen than it is to find a hexadecimal clickerID, and with the Sequence Aliaser, we can associate clickers with nameswithout gathering all the clicker IDs in advance.To use the Sequence Aliaser, each user must be given a pre-defined se-quence that has been mapped to an identity. The user then enters thesequence of buttons on their clicker, which effectively associates the clickerwith the identity in the system.We decided on using celebrities as the identities that are provided bySequence Aliaser, since there are so many of them, and they are a verydiverse group. In our implementation, we had a pool of 64 celebrities, withhalf of them being female, half male. A large number of the celebrities camefrom movies, music, or television, but there were some other popular figures,such as Steve Jobs, Bill Gates, and Michael Jordan. We did our best toavoid controversial figures and political figures, since we would be randomlyassigning them to participants and did not want them to feel uncomfortable.After selecting 64 celebrities and finding iconic photographs of them to use57in the system as avatars, we tested the collection for recognizability withseven graduate students of varying background at our university. Alongwith photographs, each celebrity was given a brief nickname or alias thateither matched their actual name, nickname, or character they were famousfor. 
The goal in assigning names was to keep them somewhat different from the actual celebrity's identity and to add humour (e.g., Queen Elizabeth II had the alias "liz", and Leonard Nimoy was "spock").

The sequences used for the 64 celebrities in our implementation consisted of four characters drawn from the letters A through D, mapping to four of the five i>clicker buttons. For example, the sequence BDAC mapped to "arnie" (Arnold Schwarzenegger), and CACC mapped to "hova" (Jay-Z). A typical slip of paper indicating a sequence and the celebrity identity associated with it, as handed to users, is shown in Figure 5.1. The button E was reserved for resetting input, allowing users to start over in case an error was made, either by entering the sequence incorrectly or by accidentally entering another user's sequence. The sequences themselves were carefully designed to have a Hamming distance of at least 2 from every other valid sequence (i.e., any two sequences differ in at least two positions) to reduce the chance of the latter issue occurring. Sequences of length 4 with Hamming distance 2 were chosen to limit the number of button presses required to have an identity assigned, but it may be better to increase the sequence length and require a higher Hamming distance, because with a Hamming distance of 2, two valid sequences may differ only by a swap of two characters (e.g., BADC is "cera" and BDAC is "arnie").

As buttons are pressed, unidentified gray boxes appear on the screen representing each clicker that is being recognized, with white circles inside the boxes representing the number of characters entered (Figure 5.2). When a sequence is successfully entered, the gray box becomes an avatar, and the clicker buttons, instead of being used to enter a sequence, can now be used to verify that the alias was registered to the user's clicker by making the avatar change colour and animate in various ways, as described in Table 5.1. These changes are similar to those used in the Grid application (Section 2.7).

Figure 5.1: An example slip of paper used with Sequence Aliaser that indicates the celebrity identity and the sequence required to associate a user's clicker with it (e.g., pseudonym "leo" with sequence A A A D, along with instructions to wait 1 second between button presses and to press E to cancel or start over).

Button  Colour  Animation
A       Green   Pulse. The avatar cycles between getting larger and smaller.
B       Blue    Bounce. The avatar moves up and down in a bouncing motion.
C       Purple  Shake. The avatar moves left and right in a shaking motion.
D       Yellow  Swing. The avatar swings left and right as if on a pivot.

Table 5.1: The colours and animations of avatars in the Sequence Aliaser. As with the Grid application, the animations were provided by Eden's Animate.css [30] and were selected for their distinguishability.

After all clickers have successfully entered their sequences and the screen now displays celebrity avatars, the administrator can register the mappings into the database by clicking the Next State button on the control panel, or by pressing C on their i>clicker.
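To make the sequence-design constraint concrete, the following illustrative Python sketch checks that a set of sequences meets the pairwise Hamming distance requirement and shows one possible greedy way such a pool could be generated. It is not the tooling used to produce the actual 64 sequences (the thesis does not describe how that set was constructed); only the four example sequences named above are taken from the text.

from itertools import product


def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))


def is_valid_set(sequences, min_distance=2):
    """True if every pair of sequences differs in at least min_distance positions."""
    seqs = list(sequences)
    return all(hamming(seqs[i], seqs[j]) >= min_distance
               for i in range(len(seqs))
               for j in range(i + 1, len(seqs)))


def greedy_pool(count=64, length=4, alphabet="ABCD", min_distance=2):
    """One possible construction: greedily keep candidates that respect the constraint."""
    chosen = []
    for candidate in ("".join(p) for p in product(alphabet, repeat=length)):
        if all(hamming(candidate, s) >= min_distance for s in chosen):
            chosen.append(candidate)
            if len(chosen) == count:
                break
    return chosen


# The four sequences mentioned above ("leo", "cera", "arnie", "hova").
examples = ["AAAD", "BADC", "BDAC", "CACC"]
print(is_valid_set(examples))    # True: every pair differs in at least 2 positions
print(hamming("BADC", "BDAC"))   # 2: a transposition of two characters still meets distance 2
print(len(greedy_pool()))        # size of a greedily generated pool

The last two checks illustrate the trade-off discussed above: a minimum distance of 2 guarantees that a single wrong button press can never produce another valid sequence, but a transposition of two characters can still turn one valid sequence into another.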
The Sequence Aliaser and the other applications were tested in a live university classroom environment over two consecutive terms. The method and results of the first evaluation are covered in the following chapter.

Figure 5.2: Sequence Aliaser in use. Some users have entered between 1 and 3 letters of their sequence, indicated by the white circles on unidentified gray boxes, while others have completed their sequence and now see a celebrity avatar that they can animate with clicker button presses. Contrary to other uses in the system, where avatars are sorted lexicographically by alias, the position of the avatars on screen is determined by the order in which their first button press was received. This behaviour prevents locations from shuffling while aliases change.

Chapter 6
Term 1 Evaluation and Results

In this chapter, we describe a field trial that was conducted to test the Rhombus CSPS in a university classroom environment. The evaluation used both quantitative and qualitative methods to gain insights into operational aspects of using the system and to understand the experiences of the instructor and the students who used it. The evaluation was conducted during the iterative development cycle for Rhombus. The developers were present during classroom sessions and provided technical support to the instructor and assistance to students beyond what would be expected for normal usage. This was done because we wanted to obtain concrete in-lecture experience with the system as part of the design process. A second field trial in which there was less intervention by the research team is reported in the next chapter, and the results of the two studies are discussed in Chapter 8.

After explaining the method that was followed for the first field trial, we present the results of the study and some of the implications that we used to improve the system.

6.1 Pilot Deployment

Prior to its use in a classroom, the system was deployed during a keynote presentation at the CompArch 2013 conference held in June in Vancouver, British Columbia, where 58 attendees played Prisoner's Dilemma, some having no prior experience using clickers. Everyone managed to play the game successfully. Informal observations and follow-up comments from participants led us to believe they understood the results of the exercises and the underlying game-theoretic principles that were being demonstrated. The audience seemed engaged and the system worked without any issues, demonstrating its robustness in handling over 50 clickers at once. Based on this, we conducted a field trial in an actual classroom in the Fall.

6.2 Method

To test the efficacy of the interactions Rhombus provides, we evaluated the system in a classroom environment. The system was used as an integral part of lectures in a single-term offering of a third-year Cognitive Systems course at our university. The class makes regular use of clickers for answering quizzes throughout lecture and has historically used the histograms provided by the i>clicker software to play the N-Person Prisoner's Dilemma. The following games were played with Rhombus: Coin Matching, Stag Hunt, Prisoner's Dilemma (single round per partner), Iterated Prisoner's Dilemma (5 rounds per partner), and the Ultimatum Game. These five games were played one per lecture over a series of non-sequential lectures.
The system was used an additional two times to familiarize the students with it and with the game they would play, prior to the lectures where the actual full games were played. These two warm-ups took place before the Coin Matching game and before the Stag Hunt game, and are described in the results as the session 1 warm-up and session 2 warm-up, respectively. Each session took between 15 and 20 minutes and typically involved 15 rounds of play.

The class in which the system was evaluated had two instructors who alternated teaching every two lectures. Our system was used solely by one of the instructors, henceforth referred to as "the instructor". We refer to the other instructor in the course as "the co-instructor". Prior to running the course evaluation, the game-theoretic exercises described in Chapter 2 were developed in consultation with the instructor to ensure they met his needs for the course.

6.2.1 Participants

The study had prior approval from the Behavioural Research Ethics Board. Students were required to opt in to the study or their data was not included. There were 40 third-year students enrolled in the class, with 34 consenting to have their data used in the study. No compensation was given for participating. While all students in the class were expected to take part in the games as they were part of the class curriculum, students faced no consequence for not consenting to have their data used for research purposes.

6.2.2 Environment and Apparatus

The room had capacity for approximately 50 students and was arranged with several rows of tables split into two columns with 4 students per column. The classroom projector was directed towards the front-centre of the class and was used for the main display of the games. An additional projector was brought by the researcher and projected on a makeshift screen directly adjacent to the main screen to display the instructions of the games. Both projectors used a 1024x768 resolution. Rhombus was run on the researcher's laptop, a MacBook Air computer (2 GHz Intel Core i7, 8 GB 1333 MHz DDR3 RAM, 11" screen) running Mac OS X 10.8.5 for the first 4 sessions and Mac OS X 10.9 for the final session. The main controller interface of the games was displayed on the 11" laptop screen and controlled via both trackpad and i>clicker. The i>clicker base station model TMX14 was used.

6.2.3 Student Representation

Showing 40 students individual feedback on a projected display greatly constrains what can be placed on the screen. Each game represented individual students with the standard avatar representation discussed in Section 4.4.4: squares with their alias at the top, current score or action in the middle, and a photo of their celebrity as the background (see Figure 6.1).

Figure 6.1: The avatar representation of the user with alias "jobs" in various states during Prisoner's Dilemma. From left to right, the avatar is successfully checked in during attendance, has yet to play during the play state, has played during the play state, and has its score shown during the results state. (N.B. This is a duplicate of Figure 4.8.)

6.2.4 Procedure

When playing the games, the researcher would enter the classroom and set up the apparatus. The instructor brought his own i>clicker base station to the class with him for quizzes; it was unplugged for the duration of the game and replaced by the researcher's own base station, which ran on the BB frequency, the same channel used for the class.

We decided to assign each student a celebrity as their alias in Rhombus, largely due to their sheer quantity and recognizability.
The celebrities used were vetted for recognition by seven graduate students. In the first warm-up session, students were handed slips of paper with their assigned celebrity alias and a sequence of buttons to press for use in the Sequence Aliaser (see Chapter 5). After the students had all entered their sequences, the aliases were saved in the system for future use.

In all other sessions, after setting everything up, the researcher initiated the Grid application as a warm-up activity (see Chapter 4 for details). The purpose of doing this was to give the students a chance to acclimatize to the system and recall how everything works.

Once everyone was confident that their clicker was working, the game of the day would begin. An attendance screen would show up and students would see their squares with large check marks to confirm that they were ready to play the game. The first play state would follow and each student would be given a chance to make an action. For instance, in the Prisoner's Dilemma, the students could choose between pressing C to cooperate or D to defect. When all students had played, the instructor would move to the results state and the students would be able to see their scores in the middle of their avatars.

Most games played in this field trial had 3 phases of 5 rounds each, after which the total accumulated scores would be displayed, with top scores highlighted. At the end of each game a log was produced that contained all the actions and scores in CSV format. To preserve the identities of the student participants, the logs were transformed, replacing aliases with clicker IDs, before giving them to the instructor.

After each session, a digital questionnaire was administered via the system. Each questionnaire contained the same seven questions relating to the experience they had just had playing a game (see Appendix B for details). The questionnaires on the first and last sessions were augmented with questions about their thoughts on the system overall. All questions used a five-point semantic difference scale (e.g., from Strongly Agree to Strongly Disagree). Beginning on the third session, a 20-second countdown timer was used during the questionnaire period. This was done in response to a student comment that they wanted an option to not answer a question. There are only 5 buttons on the i>clicker, and each question had 5 possible answers, so there were no buttons available to assign to those who wished to abstain. In previous sessions, we had waited until the number of responses matched the maximum number we had seen playing games that day, which forced all students to answer so we could move on. With the countdown timer, we could show everyone on screen when we would move on to the next question, so they would not feel unnecessary pressure to answer if they did not want to.

After the final session, a written questionnaire was distributed to students with several open-ended short answer questions, and the instructor was interviewed to learn his original intentions for using the system, how it supported him, and where it could be improved. This interview was audio recorded and transcribed.

6.3 Results

We report the results of the evaluation in this section, beginning with those from the student questionnaires administered via the system after each session, followed by the open-ended student questionnaires, the interview with the professor, and observations made by the researcher.
Students who did not consent to have their data used were pruned from the results before beginning analysis.

6.3.1 Student Questionnaire

We ran several statistical tests to detect whether there were any differences in responses as students gained more experience with the system, and similarly, to see whether the various games were received in different ways by the students. The following results use the effect size measurement r, where Cohen suggests a value of .1 is small, .3 is medium, and .5 is large [26]. For all post-hoc comparisons, the Bonferroni method was used to adjust p-values. All responses are to five-point semantic difference scale questions, typically ranging from strongly disagree (1) to strongly agree (5). Results for many of these questions are summarized in Figure 6.2.

Figure 6.2: The percentage of responses to various questions in the digital questionnaire, accumulated across each session. The questions are as follows: Q1 – It was easy to find myself on the screen; Q2 – I understood the controls of the game; Q3 – I understood the results of the game; Q4 – I liked playing the game; Q5 – I felt engaged during the game; Q6 – I would like to use this system in other classes; Q7 – It was satisfying to use this system to play the game.
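As a concrete illustration of this analysis procedure (an omnibus Friedman test per question, followed by post-hoc pairwise Wilcoxon Signed-Rank tests with Bonferroni-adjusted p-values), the following Python sketch shows how such a comparison could be run with SciPy. It is not the analysis script used for this thesis; the data layout (one row per participant with complete data, one column per session) and the variable names are illustrative assumptions.

from itertools import combinations

import numpy as np
from scipy import stats

# Illustrative layout: one row per participant with complete data,
# one column per session, each cell a 1-5 rating for a single question.
ratings = np.array([
    [4, 4, 5, 5, 5],
    [3, 4, 4, 5, 5],
    [4, 5, 5, 5, 4],
    # ... remaining participants ...
])

n_sessions = ratings.shape[1]

# Omnibus Friedman test across the sessions.
chi2, p = stats.friedmanchisquare(*(ratings[:, s] for s in range(n_sessions)))
print(f"Friedman chi2({n_sessions - 1}) = {chi2:.3f}, p = {p:.3f}")

# Post-hoc pairwise Wilcoxon Signed-Rank tests, Bonferroni-corrected,
# run only when the omnibus test is significant.
pairs = list(combinations(range(n_sessions), 2))
if p < 0.05:
    for a, b in pairs:
        w, p_raw = stats.wilcoxon(ratings[:, a], ratings[:, b])
        p_adj = min(1.0, p_raw * len(pairs))  # Bonferroni adjustment
        print(f"sessions {a + 1} vs {b + 1}: W = {w}, adjusted p = {p_adj:.3f}")

Effect sizes of the form r = Z / sqrt(N), as reported below, would be computed from the normal approximation of the Wilcoxon statistic; that step is omitted here to keep the sketch short.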
Q1 – It was easy to find myself on the screen. A Friedman test revealed a significant effect of session on ease of finding oneself on screen (χ2(4) = 16.816, p < .01). Post-hoc pairwise Wilcoxon Signed-Rank tests showed that there was a significant difference between sessions 1 and 4 (p < .05, r = .51), and between sessions 1 and 5 (p < .05, r = .55), but not between the other sessions. Of the 34 possible participants considered for this analysis, only 16 were included due to missing data from various sessions. Using all participant data for a given session, the medians were 4, 4, 5, 5, 5 for sessions 1 through 5 respectively, with 1 being strongly disagree and 5 strongly agree.

Q2 – I understood the controls of the game. A Friedman test revealed a significant effect of session on understanding the controls of the game (χ2(4) = 10.405, p < .05). Post-hoc pairwise Wilcoxon Signed-Rank tests showed no significant difference between pairs of sessions. Of the 34 possible participants considered for this analysis, only 17 were included due to missing data from various sessions. Using all participant data for a given session, the medians were 4, 5, 5, 5, 5 for sessions 1 through 5 respectively, with 1 being strongly disagree and 5 strongly agree.

Q3 – I understood the results of the game. A Friedman test revealed no significant effect of session on understanding the results of the game (χ2(4) = 8.28, p = 0.082). Of the 34 possible participants considered for this analysis, only 15 were included due to missing data from various sessions. Using all participant data for a given session, the medians were 4, 4, 5, 5, 5 for sessions 1 through 5 respectively, with 1 being strongly disagree and 5 strongly agree.

Q4 – I liked playing the game. A Friedman test revealed no significant effect of session on liking playing the game (χ2(4) = 3.624, p = 0.459). Of the 34 possible participants considered for this analysis, only 19 were included due to missing data from various sessions. Using all participant data for a given session, the medians were 4, 5, 4, 5, 4 for sessions 1 through 5 respectively, with 1 being strongly disagree and 5 strongly agree.

Q5 – I felt engaged during the game. A Friedman test revealed no significant effect of session on feeling engaged during the game (χ2(4) = 3.117, p = 0.538). Of the 34 possible participants considered for this analysis, only 14 were included due to missing data from various sessions. Using all participant data for a given session, the medians were 4.5, 4, 4, 5, 5 for sessions 1 through 5 respectively, with 1 being strongly disagree and 5 strongly agree.

Q6 – I would like to use this system in other classes. This question was only asked on the first and last (fifth) sessions. A Wilcoxon Signed-Rank test revealed no significant effect of session (W = 64, Z = −0.0971, p = 1). Of the 34 possible participants considered for this analysis, only 28 were included due to missing data from various sessions. Using all participant data for a given session, the medians were 4.5 and 5 for sessions 1 and 5 respectively, with 1 being strongly disagree and 5 strongly agree.

Q7 – It was satisfying to use this system to play the game. This question was only asked on the first and last (fifth) sessions. A Wilcoxon Signed-Rank test revealed no significant effect of session (W = 44, Z = 0.90, p = 0.388). Of the 34 possible participants considered for this analysis, only 24 were included due to missing data from various sessions. Using all participant data for a given session, the medians were 4.5 and 4 for sessions 1 and 5 respectively, with 1 being strongly disagree and 5 strongly agree.

Q8 – How would you rate the system compared to typical iClicker usage? This question was only asked on the first and last (fifth) sessions. A Wilcoxon Signed-Rank test revealed no significant effect of session (W = 20, Z = −1.0142, p = 0.35). Of the 34 possible participants considered for this analysis, only 26 were included due to missing data from various sessions. Using all participant data for a given session, the median was 4 for both sessions 1 and 5, with 1 being much worse and 5 much better.

Q9 – How helpful are the in-class multiple choice questions with regards to learning? This question was only asked on the last session, and had a scale ranging from very harmful (1) to very helpful (5). The median answer was 4 (helpful), with 3.6% answering very harmful, 3.6% harmful, 10.7% neutral, 60.7% helpful, and 21.4% very helpful. The results are summarized in Figure 6.3.

Q10 – How helpful was playing the games with this system with regards to learning? This question was only asked on the last session, and had a scale ranging from very harmful (1) to very helpful (5). The median answer was 4 (helpful), with 0% answering very harmful, 0% harmful, 16.0% neutral, 64.0% helpful, and 20.0% very helpful. The results are summarized in Figure 6.3.

Figure 6.3: The percentage of responses in the last session to questions Q9 and Q10, which asked how helpful the in-class multiple choice questions (Q9) and playing the games (Q10) were with regards to learning.

Q11 – It is worth taking class time to do multiple choice questions. This question was only asked on the last session, and had a scale ranging from strongly disagree (1) to strongly agree (5). The median answer was 4 (agree), with 3.7% answering strongly disagree, 7.4% disagree, 7.4% neutral, 66.7% agree, and 14.8% strongly agree. The results are summarized in Figure 6.4.

Q12 – It was worth taking class time to play games with this system. This question was only asked on the last session, and had a scale ranging from strongly disagree (1) to strongly agree (5).
The median answer was 5 (strongly agree), with 3.7% answering strongly disagree, 0% disagree, 18.5% neutral, 18.5% agree, and 59.3% strongly agree. The results are summarized in Figure 6.4.

Figure 6.4: The percentage of responses in the last session to questions Q11 and Q12, which asked students if it was worth taking class time for in-class multiple choice questions (Q11) and for playing games with Rhombus (Q12).

6.3.2 Student Short Answer Responses

This section reports on the results from analyzing the short answer surveys filled in by participants at the end of the final session.

R1 – Which input device(s) would you prefer to use with an interactive classroom response system? There were 31 responses to this question, which allowed participants to circle multiple choices from the set: clicker, mobile phone, tablet, laptop, other. The responses were as follows: clicker 83.9% (26), mobile phone 22.6% (7), tablet 9.7% (3), laptop 25.8% (8), other 0%.

Participants were asked to explain their choice, revealing that clickers were selected for the following reasons: simple and easy to use (10), they are already in use (7), affordability (6), everyone uses the same device (5), like using clickers (4). Reasons against using clickers included: forgetting to bring them (2), expensive (1), unreliable (1), would prefer not having to buy another device (1). There were not enough answers for the other categories to draw clear trends, but participants did mention that a benefit of using a mobile phone was that they always had it on them (3), and a benefit of using a laptop was that they already own the device (2).

R2 – Describe any issues or problems you had with RPS. There were 25 responses to this question. The most common response was about the i>clicker devices not working, mentioned by 5 participants, which was no fault of Rhombus. These responses ranged from "sometimes clicker would turn off" to "low battery light doesn't seem to work". Others mentioned problems finding themselves on screen (3), for example, "Sometimes it was a bit difficult to find myself on screen - during check in because there weren't always faces with the pseudonyms, also the tiles kept shifting so it was hard to track." Another problem was with interpreting the results screen of the various games (3), for example, "Difficult to understand results/rules of the game; relies on instructor heavily". Two participants mentioned bugs in the system that were corrected on the fly or for the next session.

R3 – Describe what you liked most about RPS. There were 31 responses to this question. The most common response was praising the design of the interface (12), with participants saying, for example, "A neat way to represent responses," "The results are easy to read," and "The pictures, colours, real-time feedback...
everything about it is pretty nice.” Individual feedbackwas mentioned by 8 participants, for example, “It’s nice to know that you110% sure you have inputted an answer”, and “You could keep track of yourvotes, you could see yourself respond, and you could track your “points” forthe session.” Individual engagement was mentioned by 7 participants, forexample, “It is interactive and forces students to pay attention.” We distin-guish individual engagement from classroom engagement (5), where studentsmentioned the social engagement of the whole class, for example, “I liked howwe interacted together as a class, like an in class activity.” Other categoriesincluded ease of use (4), having fun (4), faces of avatars (2), and comparingto others (2). One student mentioned the applicability of the games playedto the course material, saying “Mostly the application to game theory - if itwere just [multiple choice] questions, it wouldn’t be useful.”R4 – Describe any suggestions you have for new features or improve-ments to the system. There were 21 responses to this question, with littleconvergence into categories. The most common answers were that there wasnothing to improve (3), the order of names on screen could be “less arbi-trary” and be made “easier to find” (2), the results display could be sorteddifferently or made more “accurate and clear” (2), and identification of users71could be modified to support user customization via web or otherwise (2).R5 – Do you have any suggestions for other applications of the system?There were 11 responses to this question, with the most common answerbeing to use the system for polls (4), the same way the typical i>clickersoftware is used. There were no other converging categories, but responsescovered the following topics: cooperative applications, reaction time games,competitions, music rhythm practice, forming discussion groups, use in labsto “test possibilities with other groups”, and attendance.6.3.3 Instructor InterviewA semi-structured interview with the COGS 300 instructor provided insightsinto the value of playing games in class, the shortcomings of his previousefforts, and benefits of using our system.P1 – Pedagogical Value. The instructor described the primary pedagog-ical outcome of playing the games was to overcome the limitations of onlyteaching theory to students, stating, “I think teaching it just as theory youget a very small number who really get it.” He followed up by mentioninghaving direct experience as a subject in the games made it easier to under-stand psychological experiments that were discussed in class that involvedthe games. Beyond these reasons, he suggested a large portion of the learn-ing comes from experiencing a social environment in which not all agents actthe same way, and then reasoning about what is happening in reality andhow it differs from the theory.I would think the problem with university students in this kindof a class is, why would you come to class and not be lookingat something else on your screen? Why would you be engaged?One way you’re engaged is you’ve invested in this see-no-lightprocess and invested in and suddenly it’s like the expectationsyou invested in that didn’t work out in ways you don’t understandand now you’ve got to decide what resources to use. How do youexplain that? Are they all wrong? Are you wrong? Was thetheory misapplied? And I think that’s where lots of the learning72happens. The more that you can do more rounds, you can domore variations, there’s more chance for that to happen. 
I meanideally if we change this in this way, would this still happen?P2 – Multiple Games. His reasoning for using the system for multiplegames was that he felt it necessary to gradually bring more advanced con-cepts to students over time, saying,I doubt you could go in in one day, one lecture, and do “here’sall the cool things about game theory.” So it’s a little bit likegoing in and saying “here’s all the cool things about computers,or programming, or anything.” You have to have enough of atrack record so you can see why this thing is important.He went on to describe the succession of games as a “mini curriculumbased on games getting harder in one way, or bringing out different featuresof the social environment.”P3 – Games before clickers. Prior to using clickers for playing the gamesin class, the instructor had tried out doing a paper version of the Prisoner’sDilemma, where they wrote out a program on paper and exchanged withanother person who then ran their program against another to determinetheir scores. The information was collected in aggregate on the blackboard,photographed, and entered into a spreadsheet. This process had to be com-pleted between lecture sessions. Furthermore, there was “no comparativeanalysis at the individual level”. The instructor simply collected the papersand nobody ever saw the results of others. He noted that there were furtherproblems with the paper-based method, saying, “the papers go missing, thepapers aren’t always legible and the instructions are a little bit too compli-cated and so the failure rate was quite a bit higher.” He also mentioned thatit was confusing for students dealing with all the paper, and that “a lot oftime was spent on administrative stuff.” He said that “the bookkeeping got inthe way” of the actual goal of playing the game.P4 – Initial clicker attempt. The instructor had tried using the defaulti>clicker software in the past to play the N-person variant of the Prisoner’s73Dilemma, using the histogram to show the distribution of results. This levelof feedback was suitable for this game, as all cooperators received the samescore, and similarly for all distractors. He said, “the clickers let us both doit and talk about results right away.”. He described a multi-round processwhere students would vote with clickers in round 1, review the histogram ofresults, then vote again in for round 2, and so on. He noted that the builtin software was “quite deficient” for supporting gaming purposes, but it dideffectively allow them to play the N-person game.P5 – Problems with initial clicker system. The instructor lamented thatthey had only done the N-person variant of the Prisoner’s Dilemma becauseit was what they could do with the system they had in place. He describedit as “playing some weird game we could jigger into the classroom format,”instead of what the instructor desired to teach.So the problem there, it meant that, knowing that that worked,we tended to use that game and actually started doing readingsaround that game. But it’s a really hard game to analyze, sowhat you’re seeing now is an artifact of a difficulty in classroomprocedure. In a stadium we can do this, so I guess we’re goingto do a lot of this card flipping. Why are we doing that? Wellbecause that’s the thing we can do with a stadium full of people,well this is the N-person game we can do easily in the class, butthat’s not a good reason to choose the game. 
We should be able, Imean ideally, the instructor should have a palette of games theycan choose from and say “No, I’m really interested in this, I wantto use this game that has this shape.”He mentioned that one major limitation preventing using the defaulti>clicker software to run the games was that they needed to pair studentsanonymously in a group in order for the games to work. He described theindividual feedback mechanism as “crucial” for playing the games, which hefurther explained saying, “You can’t say you have a well-informed agent if[they are thinking] ‘I chose some things over a few times, I don’t know whathappened’.”74P6 – Clicker benefits and issues. The instructor speculated that onebenefit of using the clickers as the input device for the games was that “theclickers have a seriousness about them because they’re used for quizzes”.He admitted, however, that there are issues with using clickers as well,stating “you know you have to give people feedback, but it’s difficult to knowhow to give a room full of people feedback working with extremely limiteddevice with no individual level feedback on it.”P7 – Benefits of new system. The instructor described one benefit ofusing the system was that since playing the games was “easier and easier”,it freed him to think more liberally about what the “ideal curriculum” forthe course would be. He was already able to adjust the selection of gamessince what was easy and hard to do had changed as a result of using thesystem. He described having a freedom to experiment with different gamessince they were “cheap” to run.He valued highly the ability to anonymously pair individuals, saying, “Ithink by not being forced to do N-person games, with actually being able tonow, crucially, pair people up individually, then you can play the games youwant to play.” He went on to describe the benefits was not only in the abilityto pair people individually, but to be able to quickly and easily play gamesa repeated number of times, as they were intended to be done. He was nowable to play “standard games”, “the games that actually drove game theory.”Since we’re linking up with the literature, we’re not doing thisweird thing of reading papers of what you don’t want to read be-cause they’re linked to the game that we’re stuck playing and nowwe’re trying to explain complicated papers. We’re reading clas-sics.He described the increase in the amount of games they can complete ina session as “an order of magnitude more” than he previously could. Hedescribed completing the games in a third of a session, when previously withpaper it took the entire session. He said, “the administrative stuff was a lotless and the actual pedagogy was more.”75The anonymization ability of the system was seen as an important benefitto the instructor; he described it as being “clear to everyone,” and that it“puts the games in another space with separate pseudonyms.” This gave himthe confidence that they were “meeting the conditions of the literature.”He also found the comma-separated values (CSV) format the data wasprovided in was conducive to extra analysis after the games were over, sayingthe data was “already excel friendly and ready”, and “you have a finer level ofanalysis available because all the data is already in a CSV, standard form.”He noted that playing the games with the students “made a definitedifference” with student engagement, and the real-time individual feedbackwas a “huge improvement”. 
He elaborated, “you’re basically making availablehuge amount of information for them to do what they like with.”P8 – On registering clickers. The instructor noted that he was “very skep-tical” that using celebrity aliases would work with the students for playingthe games, but described the experience as having “exceeded expectations”and that it was a “really good solution to the problem.” He went on to statethat the way we registered clickers (using Sequence Aliaser) had a fun en-gagement factor. He said, “it wasn’t a chore” and described it as akin to “littlehalf-time exercises” where people have fun instead of becoming disengagedand bored with an otherwise administrative task.6.3.4 ObservationsThis section covers observations made by the researcher throughout eachsession.Registration with Sequence Aliaser. The class understood what to do,with the majority getting it right immediately. Around five or so partici-pants took longer than the rest, with 41 participating in total. One personaccidentally entered someone else’s sequence, but the problem was correctedafter the other person declared somebody had taken her pseudonym. I askedthe class to press E if you did not see your avatar on screen, which freed upthe accidentally taken avatar and the problem was resolved.Towards the end of registration, there was a single participant everyone76was waiting for who was having a hard time entering the sequence. It wasnot obvious who the person was, however, as many participants were stillpressing buttons to make their avatar wiggle on screen, and others werejokingly faking having a hard time entering the sequence. Eventually thefinal person registered and I moved to save all the associated aliases in thesystem by pressing the C button on my (the instructor’s) clicker, causing theapplication to move to the next state. However, I had disabled instructor’sremote mode and so in pressing a button on my clicker, a new gray box wasdisplayed on screen. This caused people to cry out in dismay that somebodynew had joined that they would now have to wait for. I assured them it wasme and saved the aliases.The last participant to register was under alias “spears”, but the slipfor spears was not handed out. It later became clear that this participantwas an outlier in the class, having very little experience with computers andtechnology in general, and often needed assistance in using the clicker.Game Playing. In their first time using the system to play Coin Matching,participants seemed to understand how to use the system with ease. Peoplegroaned, cheered, and laughed as results were displayed after each round. Abot was automatically added to one of the teams to balance the numbersand nobody said anything about it.During the second session warm-up, a student approached me and askedfor her sequence again. At this point, the clickers had been registered withthe system already and the sequences were no longer needed, but it was notclear to her. At the end of the session, the instructor noted there had been“really great engagement” in the class, and that the system had been receivedvery well. He said there was the “right balance of comical and educationalaspects” to the system, referring to the natural comedy that follows playinga celebrity in a game.When it came time to actually play Stag Hunt, there were a numberof bugs in the system with calculating results. 
This caused great confusionto the students who cried out “Hey, that’s not right!” when scores did notaccumulate correctly, and “Why do I have a previous score?” when it wasthe first round of a new phase.77One student had forgotten their clicker, so I provided them another andupdated the system to use the new clicker ID for their alias, allowing them toenjoy the same experience they would have had, had they used their regularclicker.Another person forgot her clicker on the third session and was loaned aclicker for use in the class that day.The channel the clickers used in the classroom was BB, which requiredeach clicker to be setup prior to use. In the third session, I was delayedin setting my clicker to the correct channel prior to starting the game, andended up setting the channel during the first round. It appears that whenchanging the frequency on an i>clicker, it sends the button press A when it isdone. Since my clicker was in instructor mode, the A signal was interpretedas a command to disable voting for the other students. For a moment, weall waited while nothing changed on screen until a student called out thatit was frozen. At that point, I realized what had happened and re-openedvoting.In the fourth session, the game was over in roughly 16 minutes, andthe questionnaire in another 2, returning the podium to the instructor 20minutes after class had started. During the game play, a participant camein late, joined a team (replacing a bot), and nobody seemed to notice thechange. A student forgot their clicker again, and this time I had broughtsome clickers with aliases already registered to them to expedite the processof giving them a replacement. This student ended up with alias “liz” and theclicker worked without issue. Some students noticed the new alias “liz” onscreen and commented on how they had never seen her before, saying thingslike, “Is that a transfer student?” The avatar “spears” had become known fortypically being the last person to play in each round, and the class beganmaking comments about it, but did not seem to direct them at any studentin the class. It appeared they did not know who spears was, and insteaddirected their attention to the representation on screen.On the final session, the class played the Ultimatum Game, where theywere partnered randomly with different people in each round. One studentexpressed feelings that he was not being randomly partnered, since he was78repeatedly getting the low offer of 1 (instead of 9), which was rare. Inactuality, he was being partnered differently each round. One student useda different clicker than was registered in the system and showed up on screenwith no photograph, and with alias being a 8 digit hexadecimal i>clickerID. He did not seem to have any difficult playing despite this irregularity.6.4 SummaryIn this chapter, the evaluation method and results from the first term fieldtrial of Rhombus CSPS have been described. The system was used to playfive games in a third-year cognitive systems course and was well-received byboth the students and the instructor. Students rated it to be on a similarlevel to clicker quizzes with regards to being worth taking class time and itshelpfulness with learning. They described themselves as being very engagedwhile using the system in class and noted that the individual real-time feed-back was a key feature. The instructor was pleased with the system as itallowed him to run the games in class that he had always wanted to, butcould not previously due to lack of technological support. 
Given the positive response from the field trial, the instructor decided to use the system again during the following term, this time without the in-lecture assistance of the researchers. The method and results of this second trial are covered in Chapter 7, and an interpretation and discussion of the results from both trials is covered in Chapter 8.

Chapter 7
Term 2 Evaluation and Results

In this chapter, we describe the second of two field trials conducted to test the Rhombus CSPS in a university classroom environment. In this trial, the instructor used the system on his own, with the researchers only attending lecture on the final usage of the system to collect survey data from the students. The evaluation was similar to the first trial described in the previous chapter, using both quantitative and qualitative methods to better understand the experiences the instructor and the students had using the system. We first describe the method we used, then present the results. An interpretation and discussion of the results from both field trials is given in Chapter 8.

7.1 Method

The term following our initial evaluation, the same course was taught again. The instructor who had been using our system was teaching once more, but the co-instructor was different and had little experience with clickers. In this term, the instructor used the system on his own without any additional support from the researchers beyond an initial hour-long training session. We provided no rigorous procedure for the instructor to follow when using the system, allowing him to integrate it into the lectures as he saw fit. As such, we report on his procedure in the interview results (Section 7.2.3). In this term, the system was used to play four games: Coordination, Prisoner's Dilemma (single round per partner), Iterated Prisoner's Dilemma (5 rounds per partner), and the Ultimatum Game.

7.1.1 Participants

The same ethics approval and procedure were used as in the earlier study. We had 23 students consent to have their survey data used, while there were 31 students enrolled in the course. No compensation was given for participating in the study.

7.1.2 Environment and Apparatus

The room was nearly identical in setup to that described in our initial evaluation. The instructor did not make use of a makeshift screen to project the instructions, and instead simply used the single projector display available. The instructor used his own laptop, which was also an 11" MacBook Air, to use Rhombus in the same way as the researcher did in the first evaluation.

7.1.3 Procedure

In this term, the instructor had knowledge of the students' clicker IDs a priori, something the researcher did not have in the first term. This negated the need for using the Sequence Aliaser to register clickers with the system. Instead, the instructor ran a script prior to first usage that registered the clickers, and then used the Grid application to familiarize students with their aliases. He simply told them to press buttons on their clickers, and due to the synchronization between button presses and avatar feedback (animation, colour changing, letter displaying), students were able to figure out which alias was theirs.

The researcher only attended the final day of use of the system, when the class played the Ultimatum Game. After the game had completed, the researcher administered the same digital and short answer questionnaires that were used on the final day of the first term evaluation. The instructor was interviewed after the final session to learn about his experience using the system on his own and with a different class of students.
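As a rough illustration of the kind of pre-registration script mentioned in Section 7.1.3, the following Python sketch assigns aliases from a pool to a list of known clicker IDs. It is hypothetical: the file names, file formats, and the way the real script handed its output to the ID Server are all assumptions, since this document does not describe them.

import csv
import json
import random

# Hypothetical inputs: a class list with one hexadecimal clicker ID per row,
# and a pool of aliases such as the 64 celebrity pseudonyms used by Rhombus.
with open("clicker_ids.csv", newline="") as f:
    clicker_ids = [row[0].strip() for row in csv.reader(f) if row]

with open("aliases.json") as f:  # e.g. ["arnie", "hova", "liz", ...]
    alias_pool = json.load(f)

if len(clicker_ids) > len(alias_pool):
    raise SystemExit("Not enough aliases for the number of clickers")

# Shuffle so the assignment is effectively random, then pair them up.
random.shuffle(alias_pool)
mapping = dict(zip(clicker_ids, alias_pool))

# Write the mapping in a form an identity server could import.
with open("registrations.json", "w") as out:
    json.dump(mapping, out, indent=2)

print(f"Registered {len(mapping)} clickers")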
7.2 Results

We report the results of the second term evaluation in this section, beginning with those from the student questionnaires administered via the system after the final session, followed by the open-ended student questionnaires and the interview with the professor held after the evaluation was completed. Students who did not consent to have their data used were pruned from the results before beginning analysis.

7.2.1 Student Questionnaire

As in the first term evaluation, all responses are to five-point semantic difference scale questions, typically ranging from strongly disagree (1) to strongly agree (5). Results for many of these questions are summarized in Figure 7.1.

Figure 7.1: The percentage of responses to various questions in the digital questionnaire. The questions are as follows: Q1 – It was easy to find myself on the screen; Q2 – I understood the controls of the game; Q3 – I understood the results of the game; Q4 – I liked playing the game; Q5 – I felt engaged during the game; Q6 – I would like to use this system in other classes; Q7 – It was satisfying to use this system to play the game.

Q1 – It was easy to find myself on the screen. The median of 23 responses was 5, with 1 being strongly disagree and 5 strongly agree. The response breakdown was strongly disagree 0%, disagree 4.3%, neutral 8.7%, agree 30.4%, strongly agree 56.5%.

Q2 – I understood the controls of the game. The median of 21 responses was 5, with 1 being strongly disagree and 5 strongly agree. The response breakdown was strongly disagree 0%, disagree 0%, neutral 14.3%, agree 23.8%, strongly agree 61.9%.

Q3 – I understood the results of the game. The median of 22 responses was 4 (agree), with 1 being strongly disagree and 5 strongly agree. The response breakdown was strongly disagree 0%, disagree 4.5%, neutral 4.5%, agree 45.5%, strongly agree 45.5%.

Q4 – I liked playing the game. The median of 22 responses was 4 (agree), with 1 being strongly disagree and 5 strongly agree. The response breakdown was strongly disagree 4.5%, disagree 4.5%, neutral 31.8%, agree 36.4%, strongly agree 22.7%.

Q5 – I felt engaged during the game. The median of 20 responses was 3.5 (neutral/agree), with 1 being strongly disagree and 5 strongly agree. The response breakdown was strongly disagree 0%, disagree 15%, neutral 35%, agree 30%, strongly agree 20%.

Q6 – I would like to use this system in other classes. The median of 22 responses was 4 (agree), with 1 being strongly disagree and 5 strongly agree. The response breakdown was strongly disagree 4.5%, disagree 0%, neutral 22.7%, agree 36.4%, strongly agree 36.4%.

Q7 – It was satisfying to use this system to play the game. The median of 21 responses was 4 (agree), with 1 being strongly disagree and 5 strongly agree. The response breakdown was strongly disagree 0%, disagree 0%, neutral 42.9%, agree 23.8%, strongly agree 38.1%.

Q8 – How would you rate the system compared to typical iClicker usage? The median of 19 responses was 4 (better), with 1 being much worse and 5 much better. The response breakdown was much worse 0%, worse 10.5%, neutral 21.1%, better 57.9%, much better 21.1%.

Q9 – How helpful are the in-class multiple choice questions with regards to learning?
The median of 19 responses was 4 (helpful), with 1 being very harmful and 5 very helpful. The response breakdown was very harmful 5.3%, harmful 10.5%, not helpful or harmful 15.8%, helpful 47.4%, very helpful 21.1%. This result is summarized in Figure 7.2.

Q10 – How helpful was playing the games with this system with regards to learning? The median of 22 responses was 4 (helpful), with 1 being very harmful and 5 very helpful. The response breakdown was very harmful 0%, harmful 0%, not helpful or harmful 45.5%, helpful 41.0%, very helpful 13.6%. This result is summarized in Figure 7.2.

Figure 7.2: The percentage of responses to questions Q9 and Q10, which asked how helpful the in-class multiple choice questions (Q9) and playing the games (Q10) were with regards to learning.

Q11 – It is worth taking class time to do multiple choice questions. The median of 22 responses was 4 (agree), with 1 being strongly disagree and 5 strongly agree. The response breakdown was strongly disagree 4.5%, disagree 13.6%, neutral 9.1%, agree 36.4%, strongly agree 22.7%. This result is summarized in Figure 7.3.

Q12 – It was worth taking class time to play games with this system. The median of 22 responses was 4 (agree), with 1 being strongly disagree and 5 strongly agree. The response breakdown was strongly disagree 4.5%, disagree 0%, neutral 27.3%, agree 36.4%, strongly agree 18.2%. This result is summarized in Figure 7.3.

Figure 7.3: The percentage of responses to questions Q11 and Q12, which asked students if it was worth taking class time for in-class multiple choice questions (Q11) and for playing games with Rhombus (Q12).

7.2.2 Student Short Answer

This section reports on the results from analyzing the short answer surveys filled in by participants.

R1 – Which input device(s) would you prefer to use with an interactive classroom response system? There were 23 responses to this question, which allowed participants to circle multiple choices from the set: clicker, mobile phone, tablet, laptop, other. The responses were as follows: clicker 69.6% (16), mobile phone 34.8% (8), tablet 8.7% (2), laptop 26.1% (6), other 4.3% (1).

Participants were asked to explain their choice, revealing that clickers were selected for the following reasons: simple and easy to use (4), clickers are academic, not personal tools (3), they can be anonymous (2), they're accessible to everyone (2), and they're portable (2). However, the most common sentiments were that students did not want to buy clickers (5), at least not at their current price, and that phones were convenient since they always had them on their person (5).

R2 – Describe any issues or problems you had with RPS. There were 21 responses to this question. The most common response was that there were no issues with the system (12; note that this does not include the 2 blank responses). The next most common sentiment in the responses was that the issues revolved around the instructor's usage and explanation of the system (6), including student difficulty understanding game instructions (3), for example, "It would be good to have a trial run, before actual. Otherwise mistakes are made. It might be not the problem with the game, but with how it was explained!" This also included difficulty the instructors had using the system and setting it up (3), for example, "Hard for the instructor to set up/run the first couple times.
The first time playing the games took forever,and one of our 2 profs had a hard time with the standard software all term.”Two students mentioned having difficulty discovering their alias during thefirst session of the term, while another student mentioned forgetting theirpseudonym between classes.R3 – Describe what you liked most about RPS. There were 20 responses tothis question. The most common answer had to do with individual engage-ment (9), with participants saying, “It is far more interactive than the othericlicker system and funner to use”, and, “Interactivity helps me from fallingasleep or making bad clip art drawings in my notes.” The next most commonresponses were about real-time feedback (6) and ease of use (6). These werefollowed by anonymity (4), for example, “Being able to play games real-timeand having the alias to remain anonymous,” and use of celebrities (4), forexample, “The celeb alter-egos are fun anonymizers, it was engaging way toexperiment with game theory.” The rest of the responses fell into havingfun (3), individual feedback (2), visual design (2), easy setup (1), and gametheory (1).R4 – Describe any suggestions you have for new features or improve-ments to the system. There were 17 responses to this question. The mostcommon answer was that there was nothing to improve (6). There was littleconvergence beyond that category, with the following responses covering vi-sualizations of results (2), initial alias assignment (2), computer players (2),instructor proficiency (2), cartoon avatars (1), including instructions beforethe game (1), persistent scoring across sessions (1), and a way to “view scoresafterward”.R5 – Do you have any suggestions for other applications of the system?86There were 11 responses to this question, with the most common answerbeing “none” (4), followed by using the system for polls (3), the same waythe typical i>clicker software is used. One student in that group mentioned“it could be used for Q and As where history of answers matter (socraticmethod).” There were no other converging categories, but responses cov-ered the following topics: entertainment (e.g., “I would totally play this withmy friends at a house party”), strategic games, neuroscience, music rhythmgames, game theory games.7.2.3 Instructor InterviewA semi-structured interview was conducted with the instructor where hedescribed his preparation and procedure for using the system on his own inthe classroom, and his experience compared to the previous term, includingissues and ideas for improvement.P1 – Using system on his own. The instructor described the experienceas “a bit of plus and minus”: on one hand, being in complete control grantedgreater flexibility over his use of the system, freeing him to take breaks andgo on tangents as needed, while on the other hand, he felt more distractedsince he also had to pay attention to the operation of the system. He notedhowever that one factor that made the use of the system feel “a little looser”was that he had more experience with it, and that the system was well testedafter many bugs were corrected during the initial term.He suggested that it may have been a less intimidating experience forthe students since it was just the professor at the front of the class, not theresearcher. 
However, he mentioned that the class’ second professor this termwas much less interested in game theory than the previous term, which mayhave impacted student engagement.The instructor noted that it was a “fairly fun” system to use, contrastingit with the “nerve-wracking” use of animation or videos in slideshow presen-tation software, where he felt he often worried about having a connection orneeding a local copy of the video. The room he was in had a poor internetconnection, which further bolstered his contentment with the fact that the87system ran locally with “obvious connections”.He described initially having difficulty in controlling the system, findingit awkward to navigate between states using the keyboard and mouse on hissmall laptop. He said it was difficult to know what keys would affect whichsoftware when he was running slideshow software, the i>clicker software, avideo player, and Rhombus. However, once he recalled he could control thegames with the clicker, it became much easier for him. He said, “The pointis that with [the clicker] in hand, it’s very easy to click through it and youcan walk around.” He noted that walking around allowed him engage betterwith the students and notice if they were ignoring or misinterpreting anyportions of the screen.He had a positive attitude about the system after having used it on hisown for the term, describing his thoughts as “What new things could we dowith it?” rather than “How can we get it finally to work next term?”P2 – Preparation. Prior to the start of the term, the instructor ensuredthere were enough aliases to match to students in the class. When he receivedthe list of clicker IDs, he assigned the aliases to the students via a scriptprovided by the researcher.Before each of the times it was used in class, the instructor reviewed thegame in the system with the built-in debug mode, which allowed him tosimulate clickers via the web browser. He said the rehearsals typically took10 minutes.P3 – Classroom Procedure. The classroom procedure was similar to theprevious term in that students were given instructions at least one lecturein advance of playing the game. While in the first term, a warmup sessionwas done with the system, administered by the researcher, this time theinstructor did things differently. He said they ran warmups for two of thegames using the regular clicker software to save having to switch back andforth, something that was not perceived as being easy to do. He also saidthat for the purposes of the warmup, since the students were already com-fortable with using the system, the extra features the system provided werenot necessary; he said they could have had people raise their hands insteadof clickers to accomplish the same result.88On the actual day of gameplay, he typically started with the game first tominimize switching between the i>clicker software and the system. This wasnot ideal as sometimes students would “trickle in” during the game, whichhe described as “not the section of the class you want people drifting in.” Tomitigate this problem, he often would place administrative announcementsat the beginning to provide a buffer. He did not re-acquaint participantswith the system by having them play around in the Grid program prior toplaying the games as we had done in the first term. Instead, he began withthe games immediately.P4 – Communicating Aliases. 
In this term, the students were assignedaliases prior to their first usage of the system, but the instructor was chargedwith communicating these aliases to the students. To do so, he tried to tieit in to the topic of interactive robotics they were discussing at the timein the class. The idea was to try to use the feedback provided in the Gridapplication to determine which alias you were assigned. He was confident thiswould work given that the class only had 31 students. He said that peoplesuccessfully “figured it out” just by interpreting the movements and changesto avatars that took place in sync with button presses on their clickers. Hesaid, “I don’t think I sent out anything or announced it in any [way], andthere didn’t seem to be anyone saying I’m forgetting who my alias is.”P5 – Student Motivation. While last term students were under the im-pression that their scores in the games would translate to marks in someway, there was no such impression this term. The instructor said “there wassome vague discussion that maybe there would be some bragging points orsomething,” but nothing concrete and nothing relating to marks. He ad-mitted that this may have had an adverse effect on student motivation andin ensuring the proper methodology needed for experimental game theory,but felt constrained given that the system was still being used in a semi-experimental way as part of research. He was wary of violating researchethics in this case, and so decided to avoid using marks, but he said he ismotivated now to figure out a way to tie in marks in the future.P6 – Student Engagement. As mentioned earlier, there was a different co-instructor this term than previous, and he was less interested in game theory89than his predecessor. The instructor described his previous co-instructor as“a much more jump off and have an objection and raise a problem in everyclass” kind of instructor. He said, “other people pick up from that and think‘oh that’s okay to do’ and this term was much less of that. And so I thinkit carries over in all aspects of the course. So I think the games played outwell, and we got some really interesting results, but there was maybe lessof that interrupting and debating and that.” He said this “big difference instyle between the two terms” was likely the largest reason for the change instudent engagement, which he felt was lower than the previous term. Whilehe thought students may be less intimidated since there was no researchercontrolling the system this term, and so engagement might have increased,he thought the lack of interest in game theory from his current co-instructormay have overshadowed the other changes.P7 – Celebrity Avatars. The instructor believed that the use of celebritiesas avatars in the system had a large impact in the students enjoyment ofusing the system. He said that I solved the random assignment to unknownother players in the classroom in “a fun way,” and that the avatars “werepersonified enough that people had the sense that they were actually playinga definite other person.”He elaborated further by describing the celebrities as “a prop in an in-teractive performance.” He clarified, saying, “It’s not just collecting data,it’s not putting people in a room and collecting data from them, it’s actuallya part of interactive stage performance, a classroom.” The varied namingschemes and general quirkiness of the avatars provided material for the in-structor to use to smooth over times in rounds when the class is waitingfor the last few users to make their moves. 
He said, “There were things youcould play with a bit without really nagging how come this person [who] forall I know for very good reasons takes always 35 seconds rather than 10.”He continued, “You’re talking at versions of people that aren’t [them them-selves]. So I think that was a good thing, and numbers wouldn’t do it.” Hewas adamant that neutral alternatives to celebrities, such as shapes, wouldbe “really different” after having this experience with celebrities. He clarifiedfurther as follows:90It had a little bit of edge to it, and the thing is that’s good, it’sa performance. It’s good to have a little weird thing that youcan focus on because partly what you’re doing is, between roundsand stuff, giving people other things to come on folks. So thinkof whatever, it’s Craig Ferguson, and it’s weird chicken jokes. Imean, you’ve got a crowd that you’re trying to involve and I thinkthe interface worked really well for that.P8 – Issues. The primary issue the instructor had was in switching backto the i>clicker software after having used the system. He said that in doingso, the i>clicker software was not responding. He figured out that he neededto unplug and plug back in the i>clicker base station after stopping usingthe system before the i>clicker software would work again, but the initialtime he ran into this problem he ended up doing a paper quiz instead of onewith clickers.He said it was “not totally easy to drop in and out of [the i>clicker soft-ware]” since he would end up with two clicker files for the day. He felt thevendor-provided software was “unforgiving” and given that he was alreadyfrustrated by having two instructors use it which he said “it’s not designedfor”, he didn’t want to try it.Another issue he had with the system was a lack of visibility with re-gards to whether or not the data had been saved from the active game.He described himself as “less than fully confident that work was saved.” Hesaid he only exited too early once, resulting in him not receiving the over-all summary data file, but was able to recover the phase results from theintermediate data files, which contained all the information he needed. Hesaid “maybe if there had been some feedback that said ‘Stage 2 saved, Stage3 saved, Final Result saved.’ I think that was the only lack of confidence.”With regards to the data files themselves, while he said that they were“interpretable”, they “invited [him] to make mistakes” because the rows werenon-homogenous, containing results for a single participant intermixed withthat participant’s partner. This representation duplicates data in the fileand makes it difficult (but not impossible) to run row-wise formulas on the91data. He described the process as “you have to do these weird skipping overthings” to analyze the data.He noted a shortcoming of the system is that you cannot easily give yourclicker to another person and have them use it since they would also needto know your alias. He wanted to do this so that students could run scriptswritten by other students using the other student’s clicker.P9 – Improvements. The instructor had a couple of suggestions for im-provements to the system. First, he suggested having a suite of visualizationoptions that the administrator can select to analyze the results of roundsor phases immediately in the system. He suggested having a dynamic wayto customize which are shown while the system is in use, to help aid inanswering student questions and ad-hoc analysis.The second suggestion he had was about the results of the system. 
In-stead of only providing the data in raw CSV format, there should also beproduction of reports at both an overall class level and an individual studentlevel.7.3 SummaryIn this chapter the evaluation method and results in the second term fieldtrial of Rhombus CSPS have been described. The system was used to playfour games in class without the researcher present to assist the instructor,except for the final usage when the researcher came to administer a survey tothe students. While students had little difficulty understanding how to usethe system, their ratings of enjoyment and engagement, while positive, werelower than the previous term. The instructor maintained a positive attitudetowards the system and provided some explanation for the difference in stu-dent behaviour across the two terms. He expressed his desire to continueusing the system in the future and made suggestions on how it could be im-proved from an end-user standpoint. Interpretation and further discussionof the results found in both field trials can be found in the following chapter.92Chapter 8DiscussionOur findings indicate that the usage of Rhombus in the classroom was apositive experience for both the students and the instructor. Across the twoterms, students were easily able to understand the controls and results ofthe games, as well as find themselves on screen. While in both terms themajority of students answered they liked playing the games and that theyfelt engaged, there was a noticeable difference between the results of term 1versus term 2.Comparing the questionnaire results from the final session of term 1 toterm 2, we see that 75.0% of students liked playing the game in term 1 versus59.1% in term 2. Similarly, 71.4% felt engaged in the game in term 1 versus50.0% in term 2. Note that these final session percentages from term 1 arelower than the average percentage across term 1 (82.0% for liking playingand 82.3% for feeling engaged), so the lower values may be a result of theUltimatum Game itself, but this does not explain the difference betweenterms. The relative lack of enthusiasm for using Rhombus in term 2 showsup again when looking at the results of wanting to use the system in anotherclass; in term 1, 92.9% of students wanted to, while in term 2 this numberdropped to only 72.7%. This trend continues when comparing how studentsrated the helpfulness of learning the games had, 84.0% in term 1 to 54.5% interm 2, and if it was worth taking the time in class to play the games, with78.0% agreeing in term 1 to 63.1% in term 2.93To explain these differences, we first note that the students also hadlower ratings of the helpfulness of clicker quizzes in the class across theterms, with 82.1% finding them helpful in term 1 versus 68.4% in term 2. Asimilar difference is seen with whether the quizzes were worth taking classtime: 81.5% in term one agreed they were, and only 68.4% in term 2. 
Thiscan help explain the differences to some degree, as it seems the studentshad generally more negative opinions of the in-class activities, but there ismore behind the 29.5% decrease seen between terms 1 and 2 on how manystudents thought the games were helpful with learning.The instructor noted that one of the primary pedagogical outcomes ofplaying the games was the experience of using them in a real social environ-ment, and the ensuing discussion around the results seen during gameplay.However, the engagement was lower in term two and there was less debate inthe class compared to term 1, which may have had an effect on how helpfulthe students perceived the games to be to their learning. The instructorsuggested that one possible reason for the stark difference between the twoterms was the interaction of the second course instructor (the co-instructor).In the first term, the co-instructor stirred more discussion on a regular basisand was perceived to have set a tone in the class that fostered debate, whichwas missing in the second term. Seeing as this was a primary driver in thepedagogy of playing the games, it may partially explain the lower ratings interm 2.Beyond the change in co-instructor, there was also the effect of the in-structor having to get used to and gain confidence in running the systemon his own. His first use of the system was somewhat awkward until headopted using the clicker to control the games, which may have left a poorfirst impression on the students, compared to term 1 when I, the creatorof the system, was there to ensure it ran smoothly. This may have had anegative impact on student engagement, and was mentioned by a numberof students in the short answer questionnaire as one of the issues they hadwith the system.A final possibility for why students perceived helpfulness of the gameswas lower in term 2 was that the motivation behind the games had changed94between terms. While in the first term, students may have been under theimpression that their results in the games were going to translate into grades,during the second term there was no such misunderstanding. Consequently,students may not have had the proper motivation to care about the outcomesof the game and become involved more deeply in playing them. One studenteven said, “I do not find it to be an effective teaching tool. There is no reasonfor students to act rationally and this skews results.”Despite these differences, in both terms students rated the usage of thesystem to be better than typical i>clicker usage.Students in both terms preferred clickers as their input device of choice,although in term 2 there was 14.3% decrease in selecting clickers and a 12.2%increase in preferring mobile phones. Students had emphasized that it wasimportant everyone in the class could afford a device to use, and it was goodthat they were all on the same device as reasons for selecting clickers, but inthe second term there was a larger contingent of students who would havepreferred not having to spend money on clickers at all. Interestingly, nostudent mentioned that i>clicker provides a way to simulate clicks via amobile phone for their default software.Clickers are still a viable input device, however, as they are relativelycheap and ubiquitous across university campuses. Students praised their usein this context as they associate them as being academic tools, and likednot having personal devices be involved in academic work. 
This sentiment issimilar to the instructor’s speculation that clickers carry a seriousness aboutthem since they are typically associated with quizzes and marks, which mayimpact how students interact with the games.Across the terms, students consistently valued the engagement and in-teractivity the system provided, as seen in their responses to the freeformquestion asking what they liked about the system (38.7% of responses interm 1 and 45.0% in term 2). In term 1, the instructor noted that using thesystem for playing the games provided a substantial improvement in studentengagement from his perception compared to playing the games in previousterms.Another feature highly lauded by students across the terms was the in-95dividual and real-time feedback that was provided. Students valued being“110% sure” their clicks had been received by the system. They also likednot having to wait to see overall class results and to be able to compare thescores of other students in the class.By integrating the system into his class, the instructor was able to playthe games he felt were best for the education of the students, as opposedto the limited games he was previously restricted to. He was happy to havethis freedom and eager to begin developing the “ideal curriculum” for hisstudents. Along with this freedom, using the system also reduced the amountof administrative work involved with playing the games, another bonus forthe instructor who described Rhombus as having “exceeded expectations”.The usage of celebrities as avatars for students was received positivelyby both the instructor and the students. Several students described themas being a fun aspect of the system, and the instructor, despite his initialskepticism, lauded the celebrities for their secondary use as a prop in theclassroom to facilitate passing time in what may otherwise be awkward mo-ments waiting for stragglers to play. However, since these aliases are notwhat students naturally identify themselves with, at times they may forgetwho their assigned alias is. In term 1, we avoided any difficulties with this bybeginning each session by using the Grid program, allowing students to reac-quaint themselves with their avatar before gameplay began. In term 2, thiswas not the case, and at least one student complained of having forgottentheir alias for the first couple of turns in a game.There was a divergence in informing students of their celebrity alias forthe term. In the first term, students were given slips of paper with a sequenceon it for use with the Sequence Aliaser, while in the second term, theysimply had to press buttons in the Grid application until they figured outwhich avatar was their own. Using the Grid application worked for the mostpart, but some students mentioned difficulties with it in the questionnaires.It may be better in the future to explicitly communicate to students theiralias to ensure a smoother first experience with the system. Alternatively,even if clicker IDs are available in advance, as they were in term 2, usingthe Sequence Aliaser may be a suitable option to engage the students in96registering their clickers to an alias themselves.An interesting observation took place when using the Sequence Aliaserin the first term: at the very end when my clicker accidentally caused abox to show up on the screen, there was an outcry from the class aboutsomebody showing up at the last minute. 
This behaviour may suggest thatsocial pressure from other students in the class will dissuade misbehaving ordeliberately delays in gameplay by individual students.The major issue that arose from the system use came from term 2 whenthe instructor was using both the default i>clicker software and Rhombus inthe same lecture. The instructor’s experience indicates that once Rhombusestablishes a connection with the clicker base station, the base station mustbe unplugged and plugged back in again before the i>clicker software willrecognize it. This forces an undesirable user experience, which cannot beavoided unless an alternative to the i>clicker software that works with theClicker Server is used instead.In the following chapter, an experiment evaluating a novel display tech-nique to use with Rhombus is described.97Chapter 9No-onset Presentation ofSemi-Private Feedback on aShared DisplayThis chapter describes two experiments that were conducted to evaluate theeffectiveness of a novel feedback technique for users of a shared display. Thischapter duplicates some previously mentioned information to preserve itsform as a manuscript for a conference paper submission and it concludeswith acknowledgements specific to the research reported in this chapter.9.1 IntroductionWe explore the problem of providing individual visual feedback to usersof a shared display without clearly alerting other users to what feedbackwas given. We are motivated by a classroom application, where the shareddisplay is typically a projected screen in an auditorium or classroom andthe individual feedback confirms the responses provided by individuals in apotentially competitive situation in which it is desirable that users not knowhow others have responded.Large screen displays are often used in classroom environments, both forsharing course material and for sharing instructions for interactive activi-98ties. One common activity of this type is to have a quiz conducted in classwith the use of a student response system (SRS), such as the i>clicker [10],where a question is displayed on the shared display with a mapping of but-tons to answers (commonly labelled A through E) and students press thecorresponding buttons on their remotes to answer. SRS usage in classroomshas been shown to positively affect engagement, motivation, attendance, andunderstanding [37, 40, 70], encouraging further work to refine and enhancethe user experience.Older versions of the i>clicker, which still have wide use in institutionstoday, have very limited feedback capabilities, displaying only a single greenor red light indicating whether a student’s click has been registered with thesystem or not; newer versions of the i>clicker provide more sophisticatedfeedback with small LCD displays or LEDs for each button, but they aremore expensive, not in widespread use, and still suffer from student doubtabout whether the other end of the system properly received the input.In our experience, students often have low confidence that their click isbeing registered despite the green light flashing on their clicker. Evidencefor this can be seen by observing students repeatedly clicking the same but-ton multiple times to answer a question and by sometimes swapping theiranswers back and forth between the one they belief is correct and a differ-ent one when a histogram of results is shown because this will indicate tothem that their clicker is in fact working properly. 
Without some form ofconfirmation from the system receiving the clicks that your individual clickhas been correctly received, doubt remains, leading to students having a fo-cus towards the technology and away from the course material, which is anundesirable outcome pedagogically.Reducing this uncertainty is the problem we addressed. Our previouswork showed that students highly valued having individual, real-time feed-back to their clicks displayed on screen, which seemed to alleviate the afore-mentioned issues (see Chapters 6 and 7). We focused on a solution usingthe basic i>clicker functionality that would be suitable for in-class quizzesand similar activities where it was desirable to minimize the chances thatstudents could interpret each other’s feedback to gain an advantage.99In our solution, we provide each user with a section of the shared displaythat functions as their individual feedback area. This approach was taken tosupport the sizable audiences found in large classes, where serial presentationof user feedback is not an option; feedback must be provided in parallel tomany students at once. Each user’s feedback area contains an avatar, whichis a widget that consists of a photographic portrait (typically of a celebrity),an alias or identifier, a colour overlay, and a 7-segment display (Figure 9.1).When a user presses a button on their clicker, we display the letter on theuser’s avatar using the 7-segment display for a brief period of time beforehaving it disappear so that other users do not have much time to look at it.Figure 9.1: A sample user’s avatar. This user has alias “you”, and the7-segment display, which can show the letters A through E, isdisplaying the letter D. No colour overlay is active on the avatarin this figure.A simplistic approach to using this would be to have no letter displayedon the avatar until a button is pressed, then briefly display the letter, andfinally return to displaying nothing. However, perceptual research has shownthat objects that suddenly appear in view (said to have an abrupt onset pre-sentation) tend to draw attention [77], so students may find this distracting.Furthermore, the attention drawing nature of this approach may make ittoo easy to see how other students in the class are responding with theiri>clickers, which may not be desirable, especially if the activity underway isa quiz or other situation in which students are competing for marks.100To address this limitation, the letters displayed could be offset differentlyfor each user’s feedback with a privately known Caesar cipher (e.g., A meansC, B means D, C means E, etc). Another solution could be to assign eachstudent a private set of symbols that maps to the buttons they have pressed,allowing only them to interpret what was shown on screen. We decided toavoid these solutions because they required extra setup and cognitive effortfor the users, and the institution would need to manage a single mappingfor each student, because students could not reasonably be expected to learndifferent mappings for each course that uses the system.Instead, we leveraged the limitations of human visual perception to cre-ate a novel technique for semi-privately sharing feedback to users on a shareddisplay. We describe the feedback as semi-private. Our intention is to allowonly someone who is already focusing on the location at which the feedbackwill appear to be able to see the feedback. We expect that although oth-ers might notice something change, they will not be able to interpret whatthey have seen. 
We manage this by using a no-onset presentation, whichdescribes a presentation where a symbol is at first camouflaged on screenand the camouflage later disappears to reveal the symbol. To ensure thatthe intended user is focusing at the right time, an example application couldsynchronize the display of feedback with a user action, such as providing thefeedback immediately after the user presses a button on an i>clicker. Weassociate feedback with a user’s avatar to provide a distinct spatial focus foreach user’s feedback.In this chapter, we report on an evaluation of our technique in an experi-mental setting, designed to test the efficacy of being able to interpret lettersshown for briefly visible durations. We measured the accuracy of identifyinga target letter shown in a known position associated with a user’s avatar,as well as the accuracy identifying another distractor letter that was simul-taneously shown in a randomly located position associated with a differentuser’s avatar. We ran two experiments, one using a no-onset presentation,while the other used an abrupt onset presentation. Our results suggest thatthe no-onset technique offers a better balance of target versus distractoraccuracy, particularly when the visible letter duration is set at 80ms.1019.2 Related WorkWe break down the related work into two parts: by applications and by per-ceptual research. In the first part we discuss specific applications that makeefforts to provide private feedback to multiple users of a shared display. Inthe second part we discuss various results from the field of visual perceptionthat have led to the design of our experiments and our display technique.9.2.1 ApplicationsShoemaker and Inkpen introduced the concept of Single Display Privacyware,where a shared display supports contextually placed private output alongwith publically shared information [66]. The prototype they created involvedgiving users stereoscopic glasses where each lens was synced to show eitherthe even-numbered or odd-numbered frames, depending on the user. Bydoing this they could provide private information for one user on the evenframes, and for a second user on the odd frames. This technique, however,does not scale for many more than two users and requires the purchase andsetup of stereoscopic glasses for each user.Cao, Olivier, and Jackson introduced techniques for using crossmodalfeedback to provide private cues that augment public information on a shareddisplay [25]. Relevant information would appear on the displays and userswould be notified privately, possibly through a vibration on a mobile device,that what was displayed or highlighted on screen was related to their query.In this way, all information was publicly visible, but relevance of the informa-tion was private. The benefit of this approach was that by taking advantageof multiple modalities, users could remain focused on the screen and conse-quently have reduced cognitive load for their task. This mechanism couldbe adapted for our uses, but would require output synchronization betweenthe devices, as well as devices sophisticated enough to provide auditory orhaptic feedback. Currently i>clickers support neither of these capabilities.1029.2.2 PerceptionMuch prior research has been done to demonstrate the difference betweenabrupt onset presentation, gradual onset presentation, and abrupt no-onsetpresentation. 
Here, abrupt means that a sudden visual change takes place,gradual means that the visual change takes place over time, and abruptno-onset means that there is a sudden visual change to the camouflage.Todd and Van Gelder showed that abrupt no-onset presentations had slowerreaction times than abrupt onset presentations [73]. Yantis further refinedthese results by demonstrating there was no difference between gradual andabrupt no-onset presentations when compared to abrupt onset presentations,with results suggesting that abrupt onset automatically captured attentionbut neither no-onset presentations did [77]. This suggests that by making useof a no-onset technique similar to what Yantis used we can provide feedbackwithout drawing the attention of onlookers, but those who are already payingattention will be able to see the change immediately.Atkinson et al. showed that reaction times increase approximately lin-early with the number of items being displayed [13], suggesting that the moreavatars that are present on screen, the more difficult it will be to track downthe one that has changed to displaying feedback. However, by directing at-tention to locations on screen in preparation of a display, processing timecan be reduced [43], allowing a higher probability of reading the change inthe target at high speeds.Studies have shown that people cannot attend to multiple noncontiguousregions simultaneously [42, 60], furthering the expectation that it will bedifficult to interpret multiple users’ feedback at the same time if they arespaced sufficiently far apart spatially, and that the difficulty increases withseparation distance.However, Treisman and Gelade showed that if the targets differ substan-tially from nontargets in the display, they will be easily detected regardlessof the number of nontargets [75]. This suggests that users will be able toeasily identify that multiple targets have been provided feedback, becausethey will differ clearly from those that have not, but they may not be able103to interpret the feedback shown across the different targets. Interpreting thefeedback will be especially difficult if it is only shown briefly, because typicalnon-preattentive processing rates are more than 40ms per item [74].Broadbent suggested that abrupt onsets may increase the perceptual in-take of information from a sensory region while decreasing it elsewhere [20].This may lead to a higher distraction level should other targets display feed-back with abrupt onsets. However, Yantis later showed that if attention isfocused on a location in advance, abrupt onsets do not capture attention[78]. That is, people can voluntarily control their attention despite abruptonsets taking place elsewhere on screen.Yantis also demonstrated evidence for a model of perception that requiresthat processing of single onset objects be done one at a time, handling theattended-to location first and then serially scanning the rest in a random or-der [77]. With only two active displays (target and distractor), the distractorshould be found next, but may not be able to be seen at the same time as thetarget. Eriksen and Hoffman showed that the focus of attention is roughly 1degree of visual angle [33]. Yantis’ work was done with a 5.7 degree viewingangle between the point of fixation and the targets of interest, which maylimit how this finding applies in our context where the displays are between1.7 and 4.8 degrees apart. 
If the targets are closer together, it’s not clearwhether or not they will be able to be seen simultaneously.9.3 Experiment 1: No-Onset DisplayOur goal was to evaluate the effectiveness of a display technique that allowsa user to successfully interpret feedback displayed on screen without beingable to interpret the feedback shown simultaneously for other users. Wewanted to find a display duration that provided these properties withoutbeing too difficult or uncomfortable to use. We chose to use the durations400ms (long), 80ms (medium), and 16ms (short) after a period of pilotingthe technique.We also wanted a technique that provided a high degree of target accu-racy, allowing the user to correctly guess the target letter 95% of the time or104better, while limiting distractor accuracy to near random chance (20%). Weexpect that when users are mistaken in applied uses of this technique, thecourse of remediation will be simply to press the clicker button again andthen interpret the feedback once again, a relatively low cost activity thatjustifies allowing a 5% error rate.9.3.1 ParticipantsWe recruited 24 participants (11 female) between 19 and 29 years of age,all of whom were compensated $20 for their efforts. All participants hadnormal or corrected to normal vision and were not colour-blind. The partici-pants were recruited via mailing lists at our university and through in-lectureannouncements of the study.9.3.2 ApparatusThe experiment was conducted using a MacBook Pro computer (2GHz IntelCore i7, 8GB 1333MHz DDR3 RAM, AMD Radeon HD 6490M video card)running Mac OS X 10.9.3, with a 27” LCD monitor having a resolution of2560x1440. The monitor had a 60Hz refresh rate, limiting the fastest possiblereveal time to 16ms, which was used. The experiment was programmed injavascript as an application in the Rhombus Classroom Synchronous Partici-pation System. The web browser Firefox 29 was used to run the experiment.Participants entered their responses for the task via an i>clicker remote. Thei>clicker base station model TMX14 was used. The avatars were square,1.51◦ of visual angle (134px, 3.0cm) in width, spaced at 0.15◦ (12px, 0.3cm)apart from one another. The 7-segment displays were 0.60◦ (52px, 1.2cm)tall and 0.40◦ (35px, 0.8cm) wide. Participants sat approximately 114cmaway from the monitor. These are shown in Table 9.1.9.3.3 TaskWe used a 7-segment display to reveal the letters A to E to participants,using the formulations shown in Figure 9.2. We used these letter formsbecause they provided a high degree of distinguishability between the letters,105Element Visual Angle Pixels Physicalavatar height 1.51◦ 134px 3.0cmavatar width 1.51◦ 134px 3.0cmavatar spacing 0.15◦ 12px 0.3cm7-segment height 0.60◦ 52px 1.2cm7-segment width 0.40◦ 35px 0.8cmTable 9.1: The visual angles, and pixel and physical dimensions forthe elements of the avatars used in the experiment.and because displaying each required some change from the default state ofthe display that had all segments visible.Figure 9.2: The letters A to E as shown in the 7-segment display.The 7-segment displays were located in the lower right corner of an avatar,as described earlier (Figure 9.1). Each trial used 25 avatars arranged in a5x5 grid (Figure 9.3), with the target avatar always located in the middleand labelled with the alias “you”.At the start of each trial, the screen was blank and the avatars fadedin over 800ms. The user’s avatar was always the same, but the otheravatars’ aliases changed randomly for each trial. 
All fades or animationsused jQuery’s swing easing functionality: they progress more slowly at thestart and at the end than in the middle [48]. In this initial state, everyavatar’s 7-segment display had all segments visible. Immediately after fadingin completely, the target avatar’s 7-segment display briefly showed a singleletter from A to E then returned to having all segments visible. Simultane-ously, a single other avatar, known as the distractor, similarly showed a letterfrom A to E (Figure 9.3). When segments disappear and reappear from the7-segment displays, that happens immediately, without animation, becauseprior research has shown that having a gradual offset of the segments made106Figure 9.3: A sample 5x5 grid of avatars. Here the target letter is Aand the distractor, “bruce”, reveals letter E.no difference in the attentional demand the change produced [77] comparedto an abrupt offset, and we wanted to test our feedback technique using asfast a speed as possible.After the letters had been revealed and then hidden, the system wouldwait for the participant to press a button on the i>clicker corresponding tothe letter seen on the target avatar. There was no time limit on how long theuser could delay before pressing the button. Upon pressing the button, visualfeedback was provided with the target avatar animating to have a greenoverlay if they guessed correctly, and red otherwise. This colour feedbackpersisted for 750ms. The i>clickers used for inputting choices had physicallimitations as to how quickly they could send new button presses, whichrequired a delay of approximately 500ms. The feedback duration was set to107ensure that after having faded, the clicker would be able to send anotherbutton press.At this point, the system again waits for the participant to press a button,this time indicating the letter seen on the distractor’s avatar. Again, theparticipants had no time limit within which to press the button so they coulddelay if they chose to. No visual feedback was given as to whether they gotthe distractor correct. After receiving the button press for the distractor’sletter, all avatars would fade out over 400ms and the screen would remainblank for 600ms before the fade in of the next trial would automaticallybegin. The overall process the target avatar goes through in a single trial isdepicted in Figure 9.4.Figure 9.4: The sequence the target avatar moves through in eachtrial: fade in, reveal letter, hide letter, accuracy feedback, fadeout.9.3.4 ProcedureThe experiment was designed to take at most 90 minutes, with most partic-ipants finishing it in approximately one hour. Upon arriving, participantswere verbally instructed from a script in how the experiment would proceedand asked to sign a consent form, which was provided 24 hours in advance.108The instructions asked participants to perform the tasks as accurately asthey could and to prioritize getting the target correct over the distractor,but to try to get both correct when possible.Next, participants filled in a brief demographic questionnaire and thencompleted a warm-up session where they had to complete at least ten trialsof the experiment with both target duration and distractor duration at thelong setting, and also confirm that they understood how the experimentwould proceed before moving on.After completing the warmup, participants confirmed their understand-ing of the experiment and then completed three blocks of trials. 
In eachblock, participants had to complete 144 trials with a fixed target duration,which took approximately 10 to 15 minutes. The first block used the longduration (400ms), the second the medium duration (80ms), and the thirdblock used the short duration (16ms). The participant controlled startinga new block by pressing any button on the i>clicker, which would presentthe first trial. The button presses required to complete a trial automaticallystarted the next trial until the block was complete. At this point, the par-ticipant was informed that the block was complete and all subsequent clickswere ignored for 1 minute. This served to prevent a participant from acci-dentally starting the next block and also to enforce a minimum break timebetween blocks. Between the blocks, a brief questionnaire was administeredto collect difficulty ratings of detecting the target and distractor letters, con-fidence in interpreting the target, comfort during the block, as well as to noteany strategies that may have been used. Participants had the opportunityto take a break between blocks.After completing all blocks, participants filled in a final questionnairethat allowed them to provide additional open-ended comments about theirexperience.9.3.5 DesignThe experiment used a within-subjects 3 (target duration) × 3 (distractorduration) × 16 (distractor location) design. Each combination of these three109factors was seen 3 times by each participant, making for a total of 432 trials.The target duration was fixed during blocks of 144 trials, but the distractorduration and location varied from trial to trial.The distractor durations and locations for each trial within sub-blocks of16 trials were permuted in such a way that each location was only used oncein the 16 trials and two adjacent trials always had different durations andlocations, even across sub-blocks. The 16 distractor locations are depictedin Figure 9.5, where they are further classified as being in either the inneror outer ring to simplify analysis.Figure 9.5: The locations where distractors can show up in. Darkgray signifies a distractor location in the “outer ring” and lightgray signifies a distractor location in the “inner ring”. The blacksquare in the center is where the target avatar was located in alltrials.Each trial involved two letters of five possible letters: one for the target,and one for the distractor, but there were not enough trials to allow for fullbalancing of these factors. We instead ensured that in each 144 trial block,four letters were used 29 times and the remaining letter was used 28 timesfor target, and again for the distractor. The letters were permuted with aconstraint that no letter repeat itself more than two times in a row.9.3.6 MeasuresThe primary dependent variables were target accuracy and distractor ac-curacy, determined by pressing first the i>clicker button that matched theletter shown on the target avatar and then the button that matched the110distractor avatar. We also assessed comfort, confidence, difficulty of target,difficulty of distractor, and the degree of distraction via questionnaires thatwere completed after each block. All ratings from the questionnaires wereself-reported and scored on a 5-point Likert-style scale.9.3.7 HypothesesThe following hypotheses were formed prior to running the experiment:H1. Target accuracy will be higher the longer the visible target duration,with greater than 95% accuracy on the long and medium durations,and worse than 75% on the short duration.H2. 
Distractor accuracy will be higher the longer the visible distractor du-ration, with greater than 50% accuracy on the long duration, and worsethan 25% accuracy on short and medium durations.H3. The shorter the target duration, the less comfortable the participantwill feel: comfortable during long and medium target durations, butuncomfortable during the short duration.H4. Target duration will have no impact on the degree of distraction causedby the distractor.H5. Distractor accuracy will be higher when located closer to the target thanwhen located farther from the target.9.3.8 ResultsFor all one-way repeated measures ANOVAs, if sphericity was violated weapplied a Greenhouse-Geisser adjustment, signified by ∗ on the p-value. Forall post-hoc comparisons, the Bonferroni method was used to adjust p-values.We report effect size for ANOVAs using generalized eta-squared (η2g) [14],and Cohen’s criteria that .02 is a small effect, .13 is medium, and .26 is large[26].111Target accuracyWe used a 3 (target duration: long, medium, short) × 3 (distractor dura-tion: long, medium, short) × 2 (distractor location: inner ring, outer ring)repeated-measures ANOVA to analyze the results on target accuracy. TheANOVA revealed both effects and two-way interactions.The ANOVA revealed a main effect of target duration on target accuracy(F (1.05, 24.14) = 43.02, p < .001∗, η2g = .460). The longer the targetduration, the higher the target accuracy. Post-hoc pairwise comparison t-tests of target duration on target accuracy revealed significant differencesbetween all durations (p < .001). There were a total of 144 trials per targetduration with mean accuracy: long 95.6% (m = 137.7, sd = 2.7%), medium92.5% (m = 133.2, sd = 4.6%), short 70.5% (m = 101.5, sd = 18.2%).The ANOVA revealed a main of distractor duration on target accuracy(F (2, 46) = 8.95, p < .001, η2g = .007). When the distractor durationwas long, target accuracy was worse than when the distractor duration wasmedium or short. Post-hoc pairwise comparison t-tests of distractor durationon target accuracy revealed significant differences between long and shortdurations (p < .05), and long and medium (p < .001), but not betweenshort and medium (p = .55). There were a total of 144 trials per distractorduration with mean accuracy: long 84.9% (m = 122.2, sd = 7.5%), medium87.2% (m = 125.7, sd = 7.3%), short 86.5% (m = 124.6, sd = 7.1%).The ANOVA revealed a main of distractor location on target accuracy(F (1, 23) = 15.49, p < .001, η2g = .012). When distractors were located inthe outer ring, target accuracy is better than when distractors are in the innerring. There were a total of 216 trials per distractor location (inner/outer)with mean accuracy: inner 84.9% (m = 183.4, sd = 7.7%), outer 87.5%(m = 189.1, sd = 6.8%).There was a two-way interaction of target duration and distractor dura-tion on target accuracy (F (4, 92) = 4.329, p < .01, η2g = 0.008), indicatingthat target accuracy was not affected by distractor duration when targetduration was long.There was also a two-way interaction of target duration and distractor112location on target accuracy (F (2, 46) = 4.234, p < .05, η2g = 0.004), withtarget accuracy being better when target duration was short and distractorlocation was in the outer ring than when located in the inner ring. 
In themedium and long target durations, distractor location had no effect on targetaccuracy.There was no interaction between distractor duration and distractor loca-tion, nor a three-way interaction between target duration, distractor durationand distractor location.Distractor accuracyWe similarly used a 3 (target duration: long, medium, short) × 3 (distractorduration: long, medium, short) × 2 (distractor location: inner ring, outerring) repeated-measures ANOVA to analyze the results on distractor accu-racy. The ANOVA revealed main effects and two-way interactions.The ANOVA revealed a main effect of target duration on distractor accu-racy (F (2, 46) = 83.6, p < .001, η2g = .291). The longer the target duration,the higher the distractor accuracy. Post-hoc pairwise comparison t-tests oftarget duration on distractor accuracy revealed significant differences be-tween all durations (p < .001). There were a total of 144 trials per targetduration with mean accuracy: long 43.3% (m = 62.3, sd = 4.2%), medium35.8% (m = 51.6, sd = 6.5%), short 25.2% (m = 36.3, sd = 6.3%).The ANOVA revealed a main effect of distractor duration on distrac-tor accuracy (F (1.46, 33.5) = 266.1, p < .001∗, η2g = .729). The longer thedistractor duration, the higher the distractor accuracy. Post-hoc pairwisecomparison t-tests of distractor duration on distractor accuracy revealedsignificant differences between all durations (p < .001). There were a to-tal of 144 trials per distractor duration with mean accuracy: long 61.4%(m = 88.5, sd = 10.6%), medium 24.4% (m = 35.2, sd = 4.5%), short 18.5%(m = 26.6, sd = 4.1%).The ANOVA revealed a main effect of distractor location on distractoraccuracy (F (1, 23) = 15.49, p < .001, η2g = .012). When distractors were lo-cated in the inner ring, distractor accuracy was better than when distractors113were in the outer ring. There were a total of 216 trials per distractor loca-tion (inner/outer) with mean accuracy: inner 36.8% (m = 79.5, sd = 5.6%),outer 32.7% (m = 70.7, sd = 3.9%).There was a two-way interaction of target duration and distractor dura-tion on distractor accuracy (F (2.56, 58.91) = 53.55, p < .001∗, η2g = 0.364),indicating that distractor accuracy was not affected by target duration whendistractor duration was short.There was a two-way interaction of target duration and distractor lo-cation on distractor accuracy (F (2, 46) = 5.59, p < .01, η2g = 0.010), withdistractor location only having an effect on distractor accuracy when targetduration was medium. In this case, disractor correctness was higher whenlocated in the inner ring than when in the outer ring.There was a two-way interaction of distractor duration and distractorlocation on distractor accuracy (F (2, 46) = 24.57, p < .001, η2g = 0.072),indicating that distractor accuracy was better when the distractor durationwas long and the distractor location was in the inner ring than in the outerring, and that distractor accuracy was not affected by distractor locationwhen distractor duration was medium or short.There was no three-way interaction between target duration, distractorduration and distractor location.Matched Target and Distractor DurationsTrials had a mix of durations for target and the distractor. In a real ap-plication of the technique the durations would be the same for targets anddistractors because one user’s distractor would be another user’s target (andvice-versa). 
Restricting the data to only include trials where target and dis-tractor durations were the same, the means for target accuracy were: long95.8% (m = 46.0, sd = 4.4%), medium 94.2% (m = 45.2, sd = 4.2%), short71.0% (m = 34.1, sd = 18.3%). The means for distractor accuracy were:long 83.2% (m = 40.0, sd = 8.8%), medium 22.5% (m = 10.8, sd = 7.5%),short 18.1% (m = 8.71, sd = 5.8%).114Random Chance0%20%40%60%80%100%Target DistractorPercentage of Correct ResponsesDurationLong (400ms)Medium (80ms)Short (16ms)Target and Distractor Correctness by DurationFigure 9.6: The mean accuracy percentage for the target and distrac-tor based on the duration of the target and distractor respec-tively. The error bars represent 95% confidence interval.LettersCombining the 5 letter choices with the 3 levels of target duration, 3 levelsof distractor duration, and 16 distractor locations would require 720 trialsin the experiment to cover each condition once, which is more than the 432trials that were completed, so including target letter as a factor in the mainANOVA was not done because there would be many combinations of lettersand conditions that were not tested. Instead, we tested the target letter inisolation. A one-way repeated measures ANOVA revealed a significant effectof letter on target accuracy (F (2.97, 68.38) = 15.59, p < .001∗, η2g = 0.20).Post-hoc pairwise comparison t-tests showed A was significantly easier thanC, D, and E (p < .001), but not easier than B (p = .095), and that E wassignificantly harder than A (p < .001), B (p < .01), and D (p < .001), butnot C (p = .57). The means for target accuracy for each letter were: A92.8% (sd = 5.8%), B 87.7% (sd = 10.0%), C 83.7% (sd = 9.9%), D 87.4%(sd = 7.6%), E 79.4% (sd = 10.7%).115QuestionnaireThe data from the questionnaires that were given to participants after eachblock of the experiment were analyzed using Friedman tests. All questionswere Likert-formatted on a 5-point scale. We report effect size as r and useCohen’s criteria that a value of .1 is a small effect size, .3 is medium, and .5is large [26].How difficult was the target? (BQ1) A Friedman test revealed aneffect of target duration on self-assessed difficulty in interpreting the tar-get letter (χ2(2) = 37.76, p < .001). Post-hoc pairwise Wilcoxon Signed-Rank tests showed that there was a significant difference between short andmedium (p < .001, r = .60), short and long (p < .001, r = .62), and longand medium (p < .01, r = .46). In this question, ratings ranged from 1,very difficult, to 5, very easy. Participants had the most difficulty duringthe short duration, and least in the long duration, with medians: long 4,medium 4, short 2.How difficult was the distractor? (BQ2) A Friedman test revealed aneffect of target duration on self-assessed difficulty in interpreting the distrac-tor’s letter (χ2(2) = 23.26, p < .001). Post-hoc pairwise Wilcoxon Signed-Rank tests showed that there was a significant difference between short andlong (p < .001, r = .58), and long and medium (p < .01, r = .44), but notbetween short and medium (p = .105), In this question, ratings ranged from1, very difficult, to 5, very easy. Participants had the most difficulty duringthe short duration, and least in the long duration, with medians: long 2,medium 1, short 1.How distracting was the distractor? (BQ3) A Friedman test re-vealed no significant difference in self-assessments of how distracting thedistractor was across different target durations (χ2(2) = 1.41, p = .494). 
Inthis question, ratings ranged from 1, no effect, to 5, very distracting. Par-ticipants found the distractor had little impact on their ability to interprettheir own avatar’s letters, with medians: long 2, medium 2, short 2.How confident were you? (BQ4) A Friedman test revealed an effectof target duration on self-assessed confidence in interpreting the target letter116(χ2(2) = 36.64, p < .001). Post-hoc pairwise Wilcoxon Signed-Rank testsshowed that there was a significant difference between short and medium(p < .001, r = .52), short and long (p < .001, r = .62), and long and medium(p < .001, r = .52). In this question, ratings ranged from 1, very doubtful, to5, very confident. Participants were most confident during the long duration,and least in the short duration, with medians: long 4, medium 3, short 2.How comfortable did you feel? (BQ5) A Friedman test revealed aneffect of target duration on self-assessed level of comfort (χ2(2) = 24.4, p <.001). Post-hoc pairwise Wilcoxon Signed-Rank tests showed that there wasa significant difference between short and medium (p < .01, r = .47), shortand long (p < .001, r = .57), and long and medium (p < .05, r = .43). In thisquestion, ratings ranged from 1, very uncomfortable, to 5, very comfortable.Participants were most comfortable during the long duration, and least inthe short duration, with medians: long 4, medium 3, short 2.Strategy. (BQ6) We asked participants to write down any strategythey employed in the experimental blocks. In the long duration block, 15participants mentioned primarily focusing on their avatar, and 5 participantsdescribed using a wider range of focus. For example, P07 said, “Not to focuson my avatar but spread the sight focus as wide as possible.”In the medium duration block, 15 participants continued on with thestrategy that they used in the long block. One participant, P09 switchedfrom taking a wider view to taking a more focused look at the target, saying“Only focusing on the avatar, no way to get the distractor, attempting onlycauses me to miss everything!” Four participants began interpreting theletter feedback at the segment level, a strategy that was crucial for the shortblock. P04 described the strategy as, “I wasn’t reading the letters anymore;just looking for which segments disappear or even less, which part of the 8disappears.”In the short duration block, the strategy of interpreting the flickeringsegments was widely adopted, with 18 participants describing it explicitly.For example, P22 said, “Could only watch for the flicker and subtract thenegative space from the 8 to determine the expected letter. Did not watch fordistractor at all.” Five participants did not mention using this strategy, and117instead tried to see the letter the best they could. P19 said, “I closed myeyes after the flash and hope for the flash to appear.”The mean target accuracy on the short block was 70.5%. If we separatethe accuracy scores based on those who used the missing segment strategyfrom those who did not, the mean accuracy goes up to 76.5% for those thatdid use the strategy, and down to 42.6% for those that did not.9.3.9 DiscussionOur findings indicate that displaying simple letter feedback to users by brieflydisplaying the letter on an avatar is a viable method of presentation.H1 was partially supported. Participants had little difficulty gettingthe target’s value accurately during the long and medium durations, withmean accuracy ratings 95.6% and 92.5% respectively, while in the shortduration accuracy dropped to 70.5%. 
The accuracy of the medium durationfell slightly short of the hypothesized 95% mark. It was common for usersto mention accidentally hitting the wrong button or entering the target anddistractor letters in reverse order, which may explain the lower accuracy.On trials where both target and distractor duration were the same, longaccuracy increased to 95.8% and medium accuracy to 94.2%, closer to ourexpected threshold and a more realistic estimate of performance in actualusage when target and distractor duration would always be identical becauseof their symmetry.Several participants surprised us with their target accuracy on the shortduration, with seven participants scoring above 85%, four of whom wereover 90%. Those that had already adopted the strategy of interpreting theflashing segments to determine the letter prior to the start of the shortblock tended to score higher, while some participants only figured out thestrategy part way through. Some did not figure out the strategy, with fourparticipants scoring under 50%, one as low as 30.6%. This variance explainsthe relatively large standard deviation seen in target accuracy in the shortduration block.H2 was supported. Distractor accuracy matched our expected thresh-118olds. However, we suspected the means for the distractors may have beenartificially lowered due to the final block of the experiment when the targetduration was shortest. In this block, participants had to focus entirely onthe avatar, almost completely giving up on looking for the distractor at all.If we exclude this block from the calculation of means across distractor ac-curacy by distractor duration, we get: long 74.3% (+12.8%), medium 25.7%(+1.3%), short 18.6% (+0.1%). Furthermore, if we compare the means forwhen the distractor and target durations were both long, the distractor accu-racy increases to 83.2% (+21.7%). Given this, we advise against making useof the long duration for targets and distractors if it is important to minimizethe ability to interpret other avatars’ feedback.For the medium target duration, the means were target accuracy 92.5%,distractor accuracy 35.8%. Reducing the data set to only consider trialswhere the target and distractor duration were both medium, the means weretarget accuracy 94.1%, distractor accuracy 22.5%. This duration thus pro-vides a highly accurate target presentation, while keeping distractor accuracyclose to random chance (20%).H3 was supported. Although over time, all users could be taught thestrategy needed to handle the short duration, users found it the least com-fortable of all the durations. Given this result, we advise against the shortduration as a viable speed for application use.H4 was supported. Results from the post-block questionnaires indicatedthat participants did not find the distractor made it more difficult to interprettheir own avatar, regardless of target duration.H5 was supported. While results indicated distractor accuracy increasedwhen the distractor was located closer to the target, we did not expect this tobe at a cost of target accuracy. Our findings indicate that when the distractorhad longer duration or was closer to the target, target accuracy decreased.We suspect this is due to an increase in distraction. Participants may thinkthey have a good chance of getting the distractor on one of those trials, and sothey perhaps attempt to interpret it prior to solidifying their comprehensionof the target’s letter. 
Notably, the effect sizes (.007 for distractor duration and .012 for location) were below the small threshold put forth by Cohen (.02) [26], so it may be that these effects can be ignored.

We found that there was an effect of letter on target accuracy, with a medium effect size (.2). After the experiment had concluded, several participants commented on how some letters were harder than others, with A being the easiest, which is consistent with our results. Many mentioned difficulty in accidentally swapping the meaning of b and d, and especially in discerning the difference between C and E in the short target duration block. With the other letters, it was possible to easily determine which was shown because of the locations where segments were missing, but C and E were too similar to confidently distinguish. This is likely a drawback of the 7-segment display that we used. We had thought that using a 7-segment display would be familiar to participants, but instead it brought issues with inconsistencies in capitalization (b and d were lower case while A, C, and E were upper case), and we suspect the unnatural formations of the letters increased the time required to interpret what was shown. The same camouflage effect provided by the fully-lit figure-8 could be accomplished using more segments to provide more natural looking characters. It may even work by simply stacking all five characters overlaid on one another and fading out all but the active letter when feedback is presented.

9.4 Experiment 2: Abrupt Onset Display

After completing Experiment 1, we wanted to verify that using the 7-segment display in the way that we had done, with no-onset techniques (a full figure-8, followed by the letter, followed by the full figure-8), was the reason we could provide such a strong difference in target and distractor accuracy while keeping distraction levels low. To test this, we ran the experiment again with the variation being that the letter displays would use abrupt onset by beginning blank, displaying the letter, then returning to being blank.

9.4.1 Participants

We recruited 12 participants (7 male) between 20 and 29 years of age, all of whom were compensated $20 for their efforts. This experiment used the same recruitment methods and inclusion criteria as Experiment 1. None of these participants had taken part in the first experiment.

9.4.2 Apparatus, Procedure, and Design

This experiment used the same apparatus, procedure, and design as Experiment 1.

9.4.3 Task

The task in this experiment was very similar to that of Experiment 1, with the only change being in how the letters were displayed on avatars. Instead of having a full 7-segment display, flashing the letter, then showing the full 7-segment display, as in Experiment 1, we had the 7-segment display be blank before and after showing the letter, as depicted in Figure 9.7. In the 5x5 grid, no 7-segment displays were visible except those of the target and distractor avatars for the brief moment they simultaneously displayed their letters.

Figure 9.7: The sequence the target avatar moves through in each trial in Experiment 2: fade in, reveal letter, hide letter, accuracy feedback, fade out.

9.4.4 Results

For all post-hoc comparisons, the Bonferroni method was used to adjust p-values. We again report effect size using generalized eta-squared (η2g) [14], where .02 is a small effect, .13 is medium, and .26 is large [26].
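For concreteness, the post-hoc portion of this analysis can be illustrated with a short sketch. The snippet below is illustrative only (Python, with hypothetical per-participant accuracy values; it is not the analysis code used for this thesis, and the omnibus repeated-measures ANOVA itself is not shown). It runs a paired t-test for every pair of duration levels and applies the Bonferroni adjustment by multiplying each p-value by the number of comparisons:

# Illustrative sketch only: Bonferroni-adjusted post-hoc pairwise comparisons
# for a three-level within-subject factor, using hypothetical per-participant data.
from itertools import combinations
from scipy import stats

# accuracy[duration] = one proportion-correct score per participant (hypothetical values)
accuracy = {
    "long":   [0.97, 0.99, 0.96, 0.98, 0.97, 0.99, 0.98, 0.97, 0.98, 0.99, 0.96, 0.98],
    "medium": [0.98, 0.97, 0.99, 0.98, 0.97, 0.98, 0.99, 0.98, 0.97, 0.98, 0.98, 0.99],
    "short":  [0.95, 0.92, 0.90, 0.96, 0.93, 0.91, 0.94, 0.92, 0.95, 0.93, 0.90, 0.94],
}

pairs = list(combinations(accuracy, 2))
for a, b in pairs:
    t, p = stats.ttest_rel(accuracy[a], accuracy[b])  # paired t-test across participants
    p_adj = min(p * len(pairs), 1.0)                  # Bonferroni adjustment
    print(f"{a} vs {b}: t = {t:.2f}, adjusted p = {p_adj:.3f}")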
Target accuracy

We again used a 3 (target duration: long, medium, short) × 3 (distractor duration: long, medium, short) × 2 (distractor location: inner ring, outer ring) repeated-measures ANOVA to analyze the results on target accuracy. The ANOVA revealed no interactions amongst target duration, distractor duration, and distractor location on target accuracy.

The ANOVA revealed a main effect of target duration on target accuracy (F (2, 22) = 30.04, p < .001∗, η2g = .273). Target accuracy was lower when the target duration was short than at the other durations, but under all durations target accuracy was still over 90%. Post-hoc pairwise comparison t-tests of target duration on target accuracy revealed significant differences between short and medium, and short and long (p < .001), but not between medium and long (p = 1.0). There were a total of 144 trials per target duration with mean accuracy: long 97.7% (m = 140.8, sd = 1.3%), medium 98.0% (m = 141.2, sd = 1.2%), short 93.1% (m = 101.5, sd = 2.5%).

The ANOVA revealed no main effect of distractor duration on target accuracy (F (2, 22) = 1.13, p = .340). There were a total of 144 trials per distractor duration with mean accuracy: long 95.7% (m = 137.8, sd = 1.9%), medium 96.4% (m = 138.8, sd = 1.3%), short 96.8% (m = 139.3, sd = 2.0%).

The ANOVA revealed no main effect of distractor location on target accuracy (F (1, 11) = 4.30, p = .062). There were a total of 216 trials per distractor location (inner/outer) with mean accuracy: inner 95.99% (m = 207.2, sd = 1.3%), outer 96.6% (m = 208.8, sd = 1.2%).

Distractor accuracy

We again used a 3 (target duration: long, medium, short) × 3 (distractor duration: long, medium, short) × 2 (distractor location: inner ring, outer ring) repeated-measures ANOVA to analyze the results on distractor accuracy.

Figure 9.8: The mean accuracy percentage for the target and distractor based on the duration of the target and distractor respectively for Experiment 2. The error bars represent 95% confidence intervals.

The ANOVA revealed a main effect of target duration on distractor accuracy (F (2, 22) = 3.80, p < .05, η2g = .021). Post-hoc pairwise comparison t-tests of target duration on distractor accuracy revealed no significant differences between durations, but suggested that medium was trending to be better than long (p = .11). There were a total of 144 trials per target duration with mean accuracy: long 71.5% (m = 102.9, sd = 7.8%), medium 75.2% (m = 108.2, sd = 5.9%), short 74.1% (m = 106.8, sd = 8.0%).

The ANOVA revealed a main effect of distractor duration on distractor accuracy (F (2, 22) = 188.74, p < .001, η2g = .780). The longer the distractor duration, the higher the distractor accuracy. Post-hoc pairwise comparison t-tests of distractor duration on distractor accuracy revealed significant differences between all durations (p < .001). There were a total of 144 trials per distractor duration with mean accuracy: long 95.8% (m = 137.9, sd = 2.1%), medium 78.1% (m = 112.5, sd = 8.9%), short 46.9% (m = 67.5, sd = 11.5%).

The ANOVA revealed a main effect of distractor location on distractor accuracy (F (1, 11) = 173.52, p < .001, η2g = .373).
When distractors are located in the inner ring, distractor accuracy is better than when distractors are in the outer ring. There were a total of 216 trials per distractor location (inner/outer) with mean accuracy: inner 81.9% (m = 176.8, sd = 6.5%), outer 65.3% (m = 141.1, sd = 7.6%).

There was an interaction of target duration, distractor duration and distractor location on distractor accuracy (F (4, 44) = 3.53, p < .05, η2g = 0.023), indicating that distractor accuracy was only affected by target duration when both distractor duration was medium and distractor location was in the outer ring.

There was an interaction of target duration and distractor duration on distractor accuracy (F (4, 44) = 2.79, p < .05, η2g = 0.029), but post-hoc tests revealed no significant differences among pairs.

There was an interaction of distractor duration and distractor location on distractor accuracy (F (2, 46) = 24.57, p < .001, η2g = 0.072), indicating that distractor accuracy was unaffected by distractor location when the distractor duration was long. When the duration was medium or short, distractor accuracy was higher when the location was in the inner ring than in the outer ring.

There was no interaction of target duration and distractor location on distractor accuracy (F (2, 22) = 2.76, p = .085).

Letters

As in Experiment 1, we tested the effect of target letter in isolation due to a lack of sufficient trials across all conditions. A one-way repeated measures ANOVA showed a significant main effect of letter on target accuracy (F (4, 44) = 3.65, p < .05, η2g = 0.23). Post-hoc pairwise comparison t-tests of letter on target accuracy revealed no significant differences between letters, but suggested D was trending to be worse than A (p = .16), B (p = .23), and C (p = .35). The means for each letter were: A 97.9% (sd = 1.6%), B 97.0% (sd = 2.2%), C 97.6% (sd = 2.1%), D 93.7% (sd = 4.5%), E 95.3% (sd = 4.0%).

Questionnaire

The following results use the effect size measurement r, where a value of .1 is small, .3 is medium, and .5 is large [26]. These results are analyzed from the questionnaires that were given to participants after each block of the experiment. All questions were Likert-formatted on a 5-point scale.
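The questionnaire analyses reported below follow a different pipeline from the accuracy analyses: an omnibus Friedman test over the three blocks, followed by Bonferroni-corrected Wilcoxon Signed-Rank comparisons. A minimal sketch of that pipeline is shown below; the ratings are hypothetical stand-ins, not the study data:

# Illustrative sketch only: Friedman omnibus test over the three duration blocks,
# followed by Bonferroni-corrected Wilcoxon Signed-Rank post-hoc comparisons,
# using hypothetical 5-point Likert ratings (one per participant per block).
from itertools import combinations
from scipy import stats

ratings = {
    "long":   [5, 4, 4, 5, 4, 3, 5, 4, 4, 5, 4, 4],
    "medium": [4, 4, 3, 4, 4, 3, 4, 5, 4, 4, 3, 4],
    "short":  [2, 3, 2, 1, 2, 3, 2, 2, 3, 2, 2, 3],
}

chi2, p = stats.friedmanchisquare(*ratings.values())
print(f"Friedman: chi2(2) = {chi2:.2f}, p = {p:.4f}")

pairs = list(combinations(ratings, 2))
for a, b in pairs:
    w, p = stats.wilcoxon(ratings[a], ratings[b])  # paired, non-parametric comparison
    p_adj = min(p * len(pairs), 1.0)               # Bonferroni correction
    print(f"{a} vs {b}: W = {w:.1f}, adjusted p = {p_adj:.3f}")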
How difficult was the target? (BQ1) A Friedman test revealed an effect of target duration on self-assessed difficulty in interpreting the target letter (χ2(2) = 18.93, p < .001). Post-hoc pairwise Wilcoxon Signed-Rank tests showed that there was a significant difference between short and medium (p < .01, r = .63), and short and long (p < .01, r = .63), but not between medium and long (p = .33). In this question, ratings ranged from 1, very difficult, to 5, very easy. Participants had the most difficulty during the short duration, and least in the long duration, with medians: long 4.5, medium 4, short 2.

How difficult was the distractor? (BQ2) A Friedman test revealed an effect of target duration on self-assessed difficulty in interpreting the distractor's letter (χ2(2) = 8.71, p < .05). Post-hoc pairwise Wilcoxon Signed-Rank tests showed no significant differences between pairs of durations. In this question, ratings ranged from 1, very difficult, to 5, very easy. The median ratings were: long 2, medium 2, short 1.5.

How distracting was the distractor? (BQ3) A Friedman test revealed a significant difference in self-assessments of how distracting the distractor was across different target durations (χ2(2) = 9.63, p < .01). Post-hoc pairwise Wilcoxon Signed-Rank tests showed that there was a significant difference between short and long (p < .05, r = .55), but not between short and medium (p = .80) or medium and long (p = .33). In this question, ratings ranged from 1, no effect, to 5, very distracting. Participants found the distractor had moderate impact on their ability to interpret their own avatar's letters, with medians: long 2, medium 3, short 3.

How confident were you? (BQ4) A Friedman test revealed an effect of target duration on self-assessed confidence in interpreting the target letter (χ2(2) = 10.89, p < .01). Post-hoc pairwise Wilcoxon Signed-Rank tests showed that there was a significant difference between short and long (p < .05, r = .59), but not between short and medium (p = .19), or medium and long (p = .36). In this question, ratings ranged from 1, very doubtful, to 5, very confident. Participants were most confident during the long duration, and least in the short duration, with medians: long 4, medium 4, short 2.

How comfortable did you feel? (BQ5) A Friedman test revealed an effect of target duration on self-assessed level of comfort (χ2(2) = 7.33, p < .05). Post-hoc pairwise Wilcoxon Signed-Rank tests revealed no significant differences across target durations. In this question, ratings ranged from 1, very uncomfortable, to 5, very comfortable. The median ratings were: long 4, medium 4, short 3.5.

Strategy. (BQ6) We asked participants to write down any strategy they employed in the experimental blocks. In the long duration block, 7 participants mentioned primarily focusing on their avatar. For example, P12 said, "Be alert - look at my own avatar first." Four participants described using a wider range of focus. For example, P09 said, "I tried to use a softer focus on the center of the screen to both initially see my avatar's letter and then hopefully jump to the distractor in time to see the letter. That didn't work most of the time, I think."

In the medium duration block, 8 participants continued on with the strategy that they used in the long block. One participant, P06, switched from taking a wider view to taking a more focused look at the target, saying "This time the avatar's timer was faster, so it was more important to focus on that first."

In the short duration block, 7 participants again explicitly mentioned continuing with their previously used strategies, which typically involved focusing on the target primarily. For example, P01 said, "Same strategy as before. I focused on my own avatar a little more since it was at a faster speed." There were no common strategies outside of generally concentrating hard on the target.

9.4.5 Discussion

In this experiment, we saw that while there was a significant difference in target accuracy across target duration, all accuracy levels were above 90%, even in the short duration. Participant strategies indicated they were able to see the letters directly in the short duration, which helps explain the high percentage. This is further supported by comparing the means of distractor accuracy by distractor duration across all blocks (long 95.8%, medium 78.1%, short 46.9%) with the same means excluding the short target duration block (long 96.0%, medium 76.6%, short 47.3%).
Here we see the means are roughly equal, implying participants had enough time and cognitive resources to focus on and interpret both the target letter and the distractor letter even in the short duration block, in contrast to Experiment 1.

While the questionnaire results suggest that participants found the distractor somewhat distracting (median rating 3 out of 5, higher than 2 in Experiment 1), the actual results show that distractor duration and location had no significant effect on target accuracy. Perhaps it was the case that distractors felt like they were distracting, but reading the letters in the presentation that was given was easy enough to not affect the target results. This explanation is further supported by the result that distractor location did not have a significant effect on distractor accuracy when distractor duration was long. In this case, there was enough time to see both target and distractor, but when distractor duration was short or medium and the distractor was in the inner ring, accuracy increased.

While there was a significant difference found in the self-assessed comfort level participants expressed between experiment blocks, post-hoc tests did not reveal any additional insights into the difference. The median ratings were all relatively close (long 4, medium 4, short 3.5), with all coming above a neutral rating of 3. This suggests that despite the very brief duration seen in the short block, participants did not feel overly taxed by the task.

9.5 General Discussion

In our first experiment, we found a balance between target accuracy, distractor accuracy, and participant comfort in the medium duration block (80ms). Our second experiment confirmed that the no-onset presentation style was the primary reason we were able to approach the desired accuracy levels we observed in Experiment 1: 92.5% for targets, and 24.4% for distractors, roughly the same as random chance (20%). While participants were more comfortable with the abrupt onset presentation used in Experiment 2, the increase in comfort was modest and was overshadowed by the large increase in distractor accuracy.

Furthermore, participants reported a higher degree of distraction incurred by the abrupt onset presentation style. We wanted to limit this for two reasons: we feared it would decrease participants' ability to interpret the target letter, and it would make it easier to notice another avatar's feedback without attending to it a priori. The first reason turned out not to be a concern: although distraction ratings were higher, target accuracy was also higher in Experiment 2 than in Experiment 1. The latter reason does seem to be a concern: we saw a large increase in distractor accuracy in the second experiment.

The experiments uncovered issues with using a 7-segment display to present letters, with multiple participants expressing trouble interpreting the letters, especially at high speed. Other sources of errors participants noted were accidentally entering the letters in the wrong order (i.e., distractor before target) or entering a B when they meant D due to the similarity in their lowercase shapes. Issues with 7-segment displays are well documented [72], so perhaps it would be best that future work avoid their usage.
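To make the C and E confusion concrete, consider conventional 7-segment encodings of the five letters (segments labelled a through f around the outside and g in the middle). The sketch below is illustrative only; it assumes the standard segment patterns rather than reproducing the exact glyphs used in our stimuli:

# Illustrative sketch only: conventional 7-segment patterns for the five letters.
# Under the no-onset presentation, the letter is revealed by briefly turning OFF
# the segments it does not use, so each letter is identified by its dark segments.
ALL_SEGMENTS = set("abcdefg")        # the fully lit figure-8

LIT = {                              # segments lit while each letter is shown
    "A": set("abcefg"),
    "b": set("cdefg"),
    "C": set("adef"),
    "d": set("bcdeg"),
    "E": set("adefg"),
}

for letter, lit in LIT.items():
    dark = ALL_SEGMENTS - lit        # segments that flicker off during the reveal
    print(f"{letter}: dark segments = {sorted(dark)}")

# C and E differ only in the middle segment g:
print(sorted((ALL_SEGMENTS - LIT["C"]) ^ (ALL_SEGMENTS - LIT["E"])))   # -> ['g']

Under this assumed encoding, revealing a letter darkens between one and three segments of the figure-8, and C and E differ only in whether the middle segment flickers, which is at least consistent with participants reporting A as the easiest letter and C and E as hard to tell apart.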
Both experiments were limited by the timing precision afforded by Firefox, within which they were run. While we aimed for roughly 16ms, 80ms, and 400ms of display time, we logged the visible durations with mean values: 23.1ms (sd = 5.2ms), 85.9ms (sd = 5.6ms), and 404.7ms (sd = 3.6ms). Given that we were using a 60Hz monitor, the granularity provided should still have granted us 1 frame in the short duration, 5 frames in the medium duration, and 25 frames in the long duration, despite the variances in actual ms logged.

9.6 Conclusion and Future Work

We examined a novel technique for displaying feedback in such a way that only those attending to it a priori can reliably interpret the results. To do this, we made use of a no-onset presentation style. We ran two experiments to verify our hypotheses and discovered that the no-onset presentation had significant advantages over abrupt onset presentation when it comes to interpreting a second randomly selected avatar's feedback, while preserving a high accuracy rating for interpreting one's own statically located avatar's feedback.

Our approach required a trade-off between reliability and privacy. While longer visible durations resulted in higher accuracy levels, they also made it easier to interpret the feedback of another user's avatar. Shorter visible durations, on the other hand, resulted in higher error rates but reduced the possibility of guessing another user's feedback to slightly worse than random chance. At a medium length duration (80ms) of feedback reveal time, we found a balance of high reliability and fairly high security, with over 90% target accuracy and below 25% distractor accuracy.

We believe that we have demonstrated the efficacy of leveraging perceptual limitations to provide semi-private feedback to users of a shared display. While we used a 7-segment display to present the letters, there were issues with interpretation, so future work might explore more generalized camouflaging of the feedback with no-onset presentations. We would like to explore this further with different experimental protocols, for example, testing only random interpretation of letters, or investigating more than two avatars at once. Other directions for future work include exploring equiluminant texture variations to reveal feedback as opposed to using no-onset presentations.

9.7 Acknowledgments

This research was supported by the GRAND Network of Centres of Excellence under the SHRDSP project and by the Natural Sciences and Engineering Research Council of Canada under the Discovery Grant program. Facilities used for the research in the Institute for Computing, Information and Cognitive Systems at the University of British Columbia were provided by funding from the Canada Foundation for Innovation. We thank Ben Janzen and Francisco Escalona for their assistance in running Experiment 1, and Ron Rensink for his insightful discussions about visual perception.

Chapter 10
Conclusions and Future Work

Despite the rapid expansion of technology in our everyday life, the usage of technology in the classroom has seen very modest changes in recent years. To help facilitate the adoption and creation of rich technological interactive systems in classrooms, we developed an architecture for Classroom Synchronous Participation Systems (CSPS). These systems are designed to allow students to engage in interactive activities while in lecture, getting real-time feedback, which is often a result of their individual actions.
The architecture we propose can be implemented on a local level, running on a single user's computer, or can be scaled to be supported by institutional servers accessible across multiple courses in a university.

We demonstrated the efficacy of our architecture by implementing a system that followed it, called Rhombus Classroom Synchronous Participation System. This system makes use of a suite of servers and web technologies to allow students to use clickers as generic 5-button controllers for applications that can be run in a classroom environment. The Clicker Server was developed to function as a multi-platform standalone interface that streams i>clicker clicks across a network socket, so that software using clickers as input does not have to interface directly with the hardware. The ID Server stands between the Clicker Server and the Web Server to translate device IDs into more usable names for use in applications. The Web Server receives the clicks from the aforementioned servers, routes them to the main Controller activated in the Web Framework, and coordinates communication between the main Controller and the various Viewers an application may be using. The Web Framework provides the infrastructure to create stateful applications that can easily associate incoming clicks with individuals, simplifying the creation of interactive classroom activities. The web technologies allowed us to offer displays of applications to any computer connected to the Web Server over the network, without requiring any additional installation on their part.

We developed a novel method of simultaneously and anonymously registering a large group of clickers in the system with the Sequence Aliaser application. By providing each person a sequence of buttons to press, they were able to associate their clicker with a predefined alias that matched the given sequence. This sped up the task of registering clickers in the system, and transformed it from rote administrative work into a fun, engaging activity for participants. This technique should be explored further to learn its limitations, both with regards to the number of participants it can support and the types of perceptual cues it requires to assure users they have correctly associated their clickers.
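The core behaviour of the Sequence Aliaser can be summarized in a few lines. The sketch below is only an illustration of the idea (written in Python, whereas the actual implementation lives in the JavaScript Web Framework; details such as matching on each clicker's most recent presses and the hypothetical hardware ID are assumptions). The sequences and aliases are a small excerpt of the mapping reproduced in Table B.1 of Appendix B:

# Illustrative sketch only: binding a clicker to an alias once its recent presses
# spell out one of the predefined sequences (excerpt of Table B.1).
from collections import defaultdict, deque

SEQUENCE_LENGTH = 4
SEQUENCE_TO_ALIAS = {"AAAD": "leo", "AABC": "martha", "BBCA": "adele", "DDDC": "mila"}

recent = defaultdict(lambda: deque(maxlen=SEQUENCE_LENGTH))  # clicker ID -> recent presses
alias_for = {}                                               # clicker ID -> assigned alias

def on_click(clicker_id, button):
    """Record a press; bind the clicker to an alias when its last presses match a sequence."""
    recent[clicker_id].append(button)
    sequence = "".join(recent[clicker_id])
    if clicker_id not in alias_for and sequence in SEQUENCE_TO_ALIAS:
        alias_for[clicker_id] = SEQUENCE_TO_ALIAS[sequence]
    return alias_for.get(clicker_id)

# A clicker with a hypothetical hardware ID keys in A, A, B, C:
for button in "AABC":
    alias = on_click("#A1B2C3", button)
print(alias)  # -> martha

Because every sequence maps to a unique alias, a room full of participants can register at the same time without revealing who chose which alias; the only shared state is the predefined table.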
Using Rhombus, we implemented a suite of game-theoretic exercises (e.g., Prisoner's Dilemma, Ultimatum Game, etc.) that were then used multiple times in a third-year Cognitive Systems course during lecture. In the first term of use, Rhombus was very positively received by both the students and the instructor. The students understood the controls and results, enjoyed playing the games, and felt engaged while playing. The instructor was pleased to finally have the ability to give students firsthand experience playing classic games, as opposed to the games he was using in previous offerings, which were limited to what he could support with the technology he had available to him and not ideal for educational purposes.

Encouraged by the results of the first term of use, the instructor took on using Rhombus on his own in the following term. While the majority of students still enjoyed playing the games and felt engaged, the margins were much smaller. The instructor was still very pleased with the system and looked forward to continuing to use it in the future, with hopes to expand his curriculum of games. He provided several reasons for the change in student attitude this term compared to the previous, including a change of co-instructor, a lack of student incentive, and an initial awkwardness with using the system.

We developed a novel display technique for providing semi-private feedback to users of Rhombus. Our technique gives a balance of high accuracy in interpreting one's own feedback, while preserving near-random accuracy in interpreting the feedback of another when that feedback is displayed at the same time as one's own.

The results from the classroom evaluation of Rhombus were encouraging, but also revealed areas for future work. There should be usability testing on the instructor interface to the system to ensure less technologically savvy users can easily use the system. This may include improving the web-based instructor controller, as well as the data output and general system activation controls.

The focus of Rhombus is on providing a system to support more interactivity in the classroom. As such, it is not solely tied to using clickers as input, although they were our initial choice. The architecture supports a wide variety of inputs beyond clickers, such as mobile phones, laptops, tweets, and text messages, all of which could be explored to learn how students' behaviour changes depending on their mode of input. Furthermore, the architecture of the system allows for every participant to have their own personal display, which should be explored as it opens a wider variety of interactive applications and feedback mechanisms.

Rhombus has only been used with game-theoretic exercises in a classroom, but we have demonstrated some of its flexibility by conducting two experiments in it (Chapter 9). Future work could explore using the system in other domains, such as community discussions and negotiations for city planning, or algorithmic education (e.g., learning sorting or data structures like linked lists). We foresee the system being useful whenever information displayed on screen is informed by the inputs of a group of people.

Bibliography

[1] Meridia audience response. Online at meridiaars.com, accessed in Sep-2013. → pages 4
[2] Poll everywhere, live audience participation. Online at polleverywhere.com, accessed in Sep-2013. → pages 4
[3] Coordination game. http://en.wikipedia.org/wiki/Coordination_game. Accessed: 2014-08-04. → pages 14
[4] Matching pennies. http://en.wikipedia.org/wiki/Matching_pennies. Accessed: 2014-08-04. → pages 13
[5] Prisoner's dilemma. http://en.wikipedia.org/wiki/Prisoners_Dilemma. Accessed: 2014-08-04. → pages 15
[6] Stag hunt. http://en.wikipedia.org/wiki/Stag_hunt. Accessed: 2014-08-04. → pages 14
[7] Ultimatum game. http://en.wikipedia.org/wiki/Ultimatum_game. Accessed: 2014-08-04. → pages 16
[8] Game theory. http://en.wikipedia.org/wiki/Game_theory. Accessed: 2014-08-02. → pages 3
[9] einstruction. Online at eInstruction.com, accessed in Jul-2014. → pages 4
[10] i>clicker. Online at iclicker.com, accessed in Feb-2014. → pages 1, 4, 99
[11] Node.js. Online at nodejs.org, accessed Jul-2014. → pages 41
[12] A. W. Astin et al. What matters in college?: Four critical years revisited. Jossey-Bass San Francisco, 1993. → pages 3
[13] R. C. Atkinson, J. E. Holmgren, and J. F. Juoli. Processing time as influenced by the number of elements in a multielement display. Perception & Psychophysics, (6):321–326, 1969. → pages 103
[14] R. Bakeman. Recommended effect size statistics for repeated measures designs. Behavior research methods, 37(3):379–384, 2005. → pages 111, 122
[15] D. A. Banks.
Reflections on the use of ars with small groups. Audienceresponse systems in higher education, pages 373–386, 2006. → pages 5[16] I. D. Beatty. Transforming student learning with classroomcommunication systems. EDUCAUSE Research Bulletin, 2004(3):1–13,2004. → pages 5[17] G. Bergtrom. Clicker sets as learning objects. Interdisciplinary Journalof E-Learning and Learning Objects, 2(1):105–110, 2006. → pages 5[18] A. A. Bostian and C. A. Holt. Veconlab classroom clicker games: Thewisdom of crowds and the winner’s curse. J. of Economic Education,44(3):217–229, 2013. → pages 6[19] C. A. Brewer. Near real-time assessment of student learning andunderstanding in biology courses. BioScience, 54(11):1034–1039, 2004.→ pages 5[20] D. E. Broadbent. Task combination and selective intake ofinformation. Acta psychologica, 50(3):253–290, 1982. → pages 104[21] D. Bruff. Teaching with classroom response systems: Creating activelearning environments. John Wiley & Sons, 2009. → pages 6[22] D. Bullock, V. LaBella, T. Clingan, Z. Ding, G. Stewart, andP. Thibado. Enhancing the student-instructor interaction frequency.The Physics Teacher, 40(9):535–541, 2002. → pages 5[23] R. A. Burnstein and L. M. Lederman. Using wireless keypads inlecture classes. The Physics Teacher, 39(1):8–11, 2001. → pages 5[24] J. E. Caldwell. Clickers in the large classroom: Current research andbest-practice tips. CBE-Life Sciences Education, 6(1):9–20, 2007. →pages 5, 7135[25] H. Cao, P. Olivier, and D. Jackson. Enhancing privacy in publicspaces through crossmodal displays. Soc. Sci. Comput. Rev., 26(1):87–102, Feb. 2008. ISSN 0894-4393. doi:10.1177/0894439307307696.URL http://dx.doi.org/10.1177/0894439307307696. → pages 102[26] J. Cohen. Statistical power analysis for the behavioral sciences.Lawrence Erlbaum Associates, 2nd edition, 1988. → pages 66, 111,116, 120, 122, 125[27] R. d’Inverno, H. Davis, and S. White. Using a personal responsesystem for promoting student interaction. Teaching Mathematics andits applications, 22(4):163–169, 2003. → pages 5[28] S. W. Draper and M. I. Brown. Increasing interactivity in lecturesusing an electronic voting system. Journal of computer assistedlearning, 20(2):81–94, 2004. → pages 5[29] S. M. Durbin and K. A. Durbin. Anonymous polling in an engineeringtutorial environment: A case study. Audience response systems inhigher education, pages 116–126, 2006. → pages 5[30] D. T. Eden. Animate.css. Online athttp://daneden.github.io/animate.css/, accessed Jul-2014. → pages 22,59[31] J. El-Rady. To click or not to click: That’s the question. Innovation:Journal of Online, 2006. → pages 5, 7[32] C. Elliott. Using a personal response system in economics teaching.International Review of Economics Education, 1(1):80–86, 2003. →pages 5[33] C. W. Eriksen and J. E. Hoffman. Temporal and spatialcharacteristics of selective encoding from multielement displays.Perception & Psychophysics, 12:201–204, 1972. → pages 104[34] A. P. Fagen, C. H. Crouch, and E. Mazur. Peer instruction: Resultsfrom a range of classrooms. The Physics Teacher, 40(4):206–209, 2002.→ pages 5[35] I. Fette and A. Melnikov. The websocket protocol. 2011. → pages 41[36] C. Fies and J. Marshall. Classroom response systems: A review of theliterature. Journal of Science Education and Technology, 15(1):101–109, 2006. → pages 5136[37] S. A. Gauci, A. M. Dantas, D. A. Williams, and R. E. Kemm.Promoting student-centered active learning in lectures with a personalresponse system. Advances in Physiology Education, 33:60–71, 2009.→ pages 2, 99[38] L. Greer and P. 
J. Heaney. Real-time analysis of studentcomprehension: an assessment of electronic student responsetechnology in an introductory earth science course. Journal ofGeoscience Education, 52(4):345–351, 2004. → pages 5[39] R. R. Hake. Interactive-engagement versus traditional methods: Asix-thousand-student survey of mechanics test data for introductoryphysics courses. American journal of Physics, 66(1):64–74, 1998. →pages 3[40] R. H. Hall, H. L. Collier, M. L. Thomas, and M. G. Hilgers. A studentresponse system for increasing engagement, motivation, and learningin high enrollment lectures. In Proc. Americas Conf. on Inf. Systems,pages 621–626, 2005. URL . → pages 2, 99[41] J. Hatch, M. Jensen, and R. Moore. Manna from heaven or “clickers”from hell: Experiences with an electronic response system. Journal ofCollege Science Teaching, 34(7):36, 2005. → pages 5, 7[42] J. E. Hoffman and B. Nelson. Spatial selectivity in visual search.Perception & Psychophysics, 30(3):283–290, 1981. → pages 103[43] J. E. Holmgren. The effect of a visual indicator on rate of visualsearch: Evidence for processing control. Perception & Psychophysics,15(3):544–550, 1974. → pages 103[44] H. M. Horowitz. Ars evolution: Reflections and recommendations.Audience response systems in higher education, pages 53–63, 2006. →pages 5[45] J. Hu, P. Bertok, M. Hamilton, G. White, A. Duff, and Q. Cutts.Teaching by using keypad-based ars. Audience Response Systems inHigher Education: Applications and Cases, page 209, 2006. → pages 5[46] M. Jackson, A. C. Ganger, P. D. Bridge, and K. Ginsburg. Wirelesshandheld computers in the undergraduate medical curriculum. MedicalEducation Online, 10, 2005. → pages 5137[47] C. Jones, M. Connolly, A. Gear, and M. Read. Group integrativelearning with group process support technology. British Journal ofEducational Technology, 32(5):571–581, 2001. → pages 5[48] jqueryEasing. Easings | jQuery UI API Documentation. Online athttp://api.jqueryui.com/easings/, accessed Jun. 2014. → pages 106[49] E. Judson and a. D. Sawada. Learning from past and present:electronic response systems in college lecture halls. Journal ofComputers in Mathematics and Science Teaching, 21(2):167–181, 2002.→ pages 4, 5[50] R. Kaleta and T. Joosten. Student response systems: A university ofwisconsin system study of clickers. Educause Center for AppliedResearch Research Bulletin, 10(1):12, 2007. → pages 5[51] R. H. Kay and A. LeSage. Examining the benefits and challenges ofusing audience response systems: A review of the literature.Computers & Education, 53(3):819–827, 2009. → pages 5[52] G. E. Kennedy and Q. I. Cutts. The association between students’ useof an electronic voting system and their learning outcomes. Journal ofComputer Assisted Learning, 21(4):260–268, 2005. → pages 5[53] G. E. Kennedy, Q. Cutts, and S. W. Draper. Evaluating electronicvoting systems in lectures: Two innovative methods. Audienceresponse systems in higher education, pages 155–174, 2006. → pages 5[54] A. Kristine. Is it the clicker, or is it the question? untangling theeffects of student response system use. Teaching of Psychology, 38(3):189–193, 2011. → pages 6[55] R. Latessa and D. Mouw. Use of an audience response system toaugment interactive learning. Fam Med, 37(1):12–4, 2005. → pages 5[56] A. Michotte. The perception of causality. 1963. → pages 21[57] R. B. Myerson. Game theory: Analysis of conflict. Cambridge MA,1990. → pages 12[58] S. Newson. 
Clic^in: A large-lecture participation system, 2012.Retrieved from http://bitbucket.org/sgnewson/clic-in-a-large-lecture-participation-system/downloads/ClicIn_Final_Report_2012.pdf. → pages 35138[59] D. J. Nicol and J. T. Boyle. Peer instruction versus class-widediscussion in large classes: A comparison of two interaction methods inthe wired classroom. Studies in Higher Education, 28(4):457–473,2003. → pages 5[60] M. I. Posner, C. R. Snyder, and B. J. Davidson. Attention and thedetection of signals. Journal of experimental psychology: General, 109(2):160–174, 1980. → pages 103[61] A. Pradhan, D. Sparano, and C. V. Ananth. The influence of anaudience response system on knowledge retention: an application toresident education. American journal of obstetrics and gynecology, 193(5):1827–1830, 2005. → pages 5[62] R. W. Preszler, A. Dawe, C. Shuster, and M. Shuster. Assessment ofthe effects of student response systems on student learning andattitudes over a broad range of biology courses. Life ScienceEducation, 6:29–41, 2007. → pages 5[63] M. K. Salemi. Clickenomics: Using a classroom response system toincrease student engagement in a large-enrollment principles ofeconomics course. J. of Economic Education, 40(4):385–404, 2009. →pages 6[64] T. E. Schackow, M. Chavez, L. Loya, and M. Friedman. Audienceresponse system: effect on learning in family medicine residents.FAMILY MEDICINE-KANSAS CITY-, 36(7):496–504, 2004. → pages5[65] J. Shi. Improve classroom interaction and collaboration usingi>clicker. Master’s thesis, University of British Columbia, Canada,2013. → pages 2, 9, 31, 34, 35, 38[66] G. B. D. Shoemaker and K. M. Inkpen. Single display privacyware:Augmenting public displays with private information. In Proceedingsof CHI ’01, pages 522–529. ACM, 2001. ISBN 1-58113-327-8.doi:10.1145/365024.365349. URLhttp://doi.acm.org/10.1145/365024.365349. → pages 102[67] K. Siau, H. Sheng, and F.-H. Nah. Use of a classroom response systemto enhance classroom interactivity. Education, IEEE Transactions on,49(3):398–403, 2006. → pages 5, 7139[68] V. Simpson and M. Oliver. Electronic voting systems for lectures thenand now: A comparison of research and practice. Australasian Journalof Educational Technology, 23(2):187, 2007. → pages 5[69] D. Slain, M. Abate, B. Hodges, M. Stamatakis, and S. Wolak. Aninteractive response system to promote active learning in the doctor ofpharmacy curriculum. American Journal of Pharmaceutical Education,68(5), 2004. → pages 5[70] J. R. Stowell and J. M. Nelson. Benefits of electronic audienceresponse systems on student participation, learning, and emotion.Teaching of Psychology, 34(4):253–258, 2007.doi:10.1080/00986280701700391. → pages 2, 99[71] S. A. Stuart, M. I. Brown, and S. W. Draper. Using an electronicvoting system in logic lectures: one practitioner’s application. Journalof Computer Assisted Learning, 20(2):95–102, 2004. → pages 5[72] H. Thimbleby. Reasons to question seven segment displays. InProceedings of the SIGCHI Conference on Human Factors inComputing Systems, pages 1431–1440. ACM, 2013. → pages 128[73] J. Todd and P. V. Van Gelder. Implications of atransientâĂŞsustained dichotomy for the measurement of humanperformance. Journal of Experimental Psychology: Human Perceptionand Performance, 5(4):625–638, 1979. URLhttp://psycnet.apa.org/journals/xhp/5/4/625/. → pages 103[74] A. Treisman and S. Gormican. Feature analysis in early vision:evidence from search asymmetries. Psychological review, 95(1):15,1988. → pages 104[75] A. M. Treisman and G. Gelade. 
A feature-integration theory ofattention. Cognitive psychology, 12(1):97–136, 1980. → pages 103[76] M. Uhari, M. Renko, and H. Soini. Experiences of using an interactiveaudience response system in lectures. BMC Medical Education, 3(1):12, 2003. → pages 5[77] S. Yantis and J. Jonides. Abrupt visual onsets and selective attention:Evidence from visual search. Journal of Experimental Psychology:Human perception and performance, 10(5):601–621, 1984. → pages100, 103, 104, 107140[78] S. Yantis and J. Jonides. Abrupt visual onsets and selective attention:voluntary versus automatic allocation. Journal of ExperimentalPsychology: Human perception and performance, 16(1):121–134, 1990.→ pages 104141Appendix AExperiment ResourcesThis appendix includes supplemental resources that were used in runningthe experiments described in Chapter 9. The same resources were used forboth experiments.A.1 Pre-experiment QuestionnaireThis questionnaire was given to participants prior to taking part in the ex-periment.142 Version 1.0 2014-May-25  page 1/1 Private Feedback on a Shared Display Pre-Experiment Questionnaire Participant #  ? ?  1. How old are you? ? ? years   2. What is your gender? (tick one)  ¡ Male ¡ Female ¡ Other  3. How much time do you spend per week using a computer? (tick one)  ¡ Less than 1 hour ¡ 1 to 3 hours ¡ 4 to 8 hours  ¡ More than 8 hours    4. How many times have you used an i>clicker? (tick one)  ¡ Never ¡ Once ¡ 2 to 5 times  ¡ More than 5 times  5. Have you previously used or seen Rhombus Participation System? (tick one) ¡ Yes ¡ No  6. Do you normally wear glasses or contact lenses? (tick one)  ¡ Yes If yes, what is your prescription? ¡ __________________  ¡ I don’t know  ¡ No   143A.2 Post-block QuestionnaireThis questionnaire was given to participants after each of the three blocksof the experiment.144 Version 1.0 2014-May-25   page 1/1  Private Feedback on a Shared Display Post-Block Questionnaire Participant #  ! !   1. How difficult was it to interpret your avatar’s letter? (circle one number) 1 - Very Difficult       2 - Difficult     3 - Neutral     4 - Easy     5 - Very Easy  2. How difficult was it to interpret the distractor’s letter? (circle one number) 1 - Very Difficult       2 - Difficult     3 - Neutral     4- Easy     5 - Very Easy   3. On a scale from 1 to 5, how distracting was the distractor? With 1 meaning it had no effect and 5 meaning it made it very difficult to interpret your avatar’s letter. (circle one number)    (No effect) 1 2 3 4 5      (Very Distracting)  4. How confident are you that you could reliably interpret your feedback at this speed? 1 - Very Doubtful     2 - Doubtful     3 - Neutral     4 - Confident     5 - Very Confident   5. How comfortable did you feel while completing this task? 1 - Very Uncomfortable    2 - Uncomfortable    3 - Neutral    4 - Comfortable    5 - Very Comfortable   6. Did you employ any particular strategy in completing the task? Please explain.  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________    7. 
Please write any other comments you have regarding your experience with this task:  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________   145A.3 Post-experiment QuestionnaireThis questionnaire was given to participants after they had completed allthe blocks of the experiment.146Version 1.0 2014-May-25   page 1/1  Private Feedback on a Shared Display Post-Experiment Questionnaire Participant #  ? ?   1. Overall, how much did you like using this display technique? (circle one number) 1 - Really Dislike     2 - Dislike     3 - Neutral     4 - Like     5 - Really Like   2. Please write any other comments you have regarding your overall experience with the display mechanism:  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________  _________________________________________________________________________     147A.4 Consent FormThis consent form was provided to participants prior to running the experi-ment. All participants that completed the experiment signed the form.148  Version 1.1 2014-June 03 Consent Form page 1/2  Private Feedback on a Shared Display  UBC Department of Computer Science ICICS/CS Building 201-2366 Main Mall Vancouver, B.C., V6T 1Z4  Consent Form  Principal Investigator Kellogg S. Booth, Professor, Department of Computer Science, (604) 822-8193  Co-Investigator Peter Beshai, M.Sc. Student, Department of Computer Science, (604) 339-4003  Project Purpose and Procedures The purpose of this study is to evaluate a novel method of providing private individual feedback on a shared display. You will be asked to interpret letters on a display and press the corresponding button on an i>clicker remote.  Confidentiality Your identity will remain anonymous and will be kept confidential. A computer will record performance as you perform the tasks, but no identifying information (such as your name) will be stored with this data, nor will it be associated with the data after it has been analyzed.  The results will be made public through publications; however, no identifying information will be included in any published disclosure of the research.  No audio recordings or photographs will be made of your participation.  Risks/Remuneration/Compensation There are no anticipated risks to you participating in this research. You are free to take a break or withdraw from the study.  You will receive an honorarium of $20 for your participation. You will be eligible for the honorarium even if you withdraw from the study.   149  Version 1.1 2014-June 03 Consent Form page 2/2  Contact Information about the Project If you have any questions or require further information about the project you may contact Peter Beshai (pbeshai@cs.ubc.ca or 604-339-4003) or Dr. Kellogg Booth (ksbooth@cs.ubc.ca or (604) 822-8193).  
Contact for Concerns About the Rights of Research Subjects If you have any concerns or complaints about your rights as a research participant and/or your experiences while participating in this study, contact the Research Participant Complaint Line in the UBC Office of Research Services at 604-822-8598 or if long distance e-mail RSIL@ors.ubc.ca or call toll free 1-877-822-8598  Consent We intend for your participation in this project to be pleasant and stress-free. Your participation is entirely voluntary and you may refuse to participate or withdraw from the study at any time without consequence.  Your signature below indicates that you have received a copy of this consent form for your own records. Your signature indicates that you consent to participate in this study.  I, (print name) ___________________________ agree to participate in the project as outlined above. My participation in this project is voluntary and I understand that I may withdraw at any time.   ____________________________________________________________________ Subject Signature     Date   _______________________________ Printed Name of Subject  150Appendix BClassroom EvaluationResourcesThis appendix contains supplemental resources that were used when runningthe classroom evaluation of Rhombus CSPS.B.1 Student Clicker QuestionnaireThese questions were administered via the Question app in Rhombus, asshown in Figure B.1. Students responded by pressing buttons on their click-ers. A 20 second timer was displayed on screen to allow those who did notwish to answer questions the option of abstaining, as opposed to waiting forthe number of responses to reach the total number of participants available.1. It was easy to find myself on the screen.A: Strongly AgreeB: AgreeC: NeutralD: DisagreeE: Strongly Disagree2. I understood the rules of the game.A: Strongly Agree151Figure B.1: The Question app in Rhombus. The question is presentedin large text at the top with the answers mapping to clickerbuttons displayed on the left side. As students click in theirresponses, their names appear on the right side of the page di-rectly below a count of the responses. Pressing a clicker buttonafter one’s name is already visible causes the name to flash,indicating that the new button press has been received.B: AgreeC: NeutralD: DisagreeE: Strongly Disagree3. I understood the controls of the game.A: Strongly AgreeB: AgreeC: NeutralD: DisagreeE: Strongly Disagree4. I understood the results of the game.A: Strongly AgreeB: Agree152C: NeutralD: DisagreeE: Strongly Disagree5. I liked playing the game.A: Strongly AgreeB: AgreeC: NeutralD: DisagreeE: Strongly Disagree6. I felt engaged during the game.A: Strongly AgreeB: AgreeC: NeutralD: DisagreeE: Strongly Disagree7. I would like to use this system in other classes.A: Strongly AgreeB: AgreeC: NeutralD: DisagreeE: Strongly Disagree8. It was satisfying to use this system to play the game.A: Strongly AgreeB: AgreeC: NeutralD: DisagreeE: Strongly Disagree9. How was the pace of the game?A: Very FastB: Fast153C: GoodD: SlowE: Very Slow10. How would you rate the system compared to typical iClickerusage?A: Much BetterB: BetterC: EquivalentD: WorseE: Much Worse11. How helpful are the in-class multiple choice questions withregards to learning?A: Very helpfulB: HelpfulC: Not helpful or harmfulD: HarmfulE: Very Harmful12. How helpful was playing the games with this system withregards to learning?A: Very helpfulB: HelpfulC: Not helpful or harmfulD: HarmfulE: Very Harmful13. 
It is worth taking class time to do multiple choice questions.A: Strongly AgreeB: AgreeC: NeutralD: DisagreeE: Strongly Disagree15414. It was worth taking class time to play games with this system.A: Strongly AgreeB: AgreeC: NeutralD: DisagreeE: Strongly DisagreeB.2 Student Short Answer QuestionnaireThe short answer questionnaire was was administered to students on the finalsession of term 1 and term 2. The questionnaire was printed and studentsfilled in responses by hand.155Post-­‐Study	  Questionnaire	  	   	   	   	   	  	  	  	  	  	  	   	  	  	  	  	  	  	  	  	  	  	  	  	  	  	  	   	  	  	  	  	  	  	  Pseudonym:	  __________________	  	  	  Thanks	  for	  participating	  in	  a	  research	  study	  using	  the	  Rhombus	  Clicker	  System	  (RCS),	  an	  interactive	  classroom	  response	  system	  that	  allows	  inputs	  from	  iClickers	  to	  be	  used	  with	  real-­‐time	  visual	  feedback.	  Please	  take	  a	  few	  minutes	  and	  answer	  the	  following	  questions.	  	   [	  	  	  ]	  	  	   I	  consent	  to	  have	  my	  data	  used	  in	  the	  study	  and	  have	  signed	  a	  consent	  form.	  	   	  Which	  input	  device(s)	  would	  you	  prefer	  to	  use	  with	  an	  interactive	  classroom	  response	  system?	  	  clicker,	  	  	  	  	  	  mobile	  phone,	  	  	  	  	  	  tablet,	  	  	  	  	  	  laptop,	  	  	  	  	  	  other:	  ___________	  	  (please	  circle)	  Please	  explain	  your	  choice(s)	  below.	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  	  Describe	  any	  issues	  or	  problems	  you	  had	  with	  RCS.	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  	  Describe	  what	  you	  liked	  most	  about	  RCS.	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  	  Describe	  any	  suggestions	  you	  have	  for	  new	  features	  or	  improvements	  to	  the	  system.	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  	  In	  COGS	  300,	  we	  used	  RCS	  to	  play	  game	  theory	  games.	  Do	  you	  have	  any	  suggestions	  for	  other	  applications	  of	  the	  system?	  
________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  	  Other	  comments	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  ________________________________________________________________________________________________	  Thanks	  for	  participating!	  	   	   	   	   	  	  	  	  	  	  	  	  	  	  	   	  	  	  	  	  	  	  	  	  	  	  	  	  Researcher:	  Peter	  Beshai	  pbeshai@cs.ubc.ca	  156B.3 Consent FormThis consent form was presented to students prior to any use of the systemin term 1. In term 2, this form was given to the students before having themcomplete the digital and written questionnaires. Students who did not signa consent form had their data pruned prior to analysis.157Version 1.2 2013-Sep-05 Consent Form Page 1 of 2       THE UNIVERSITY OF BRITISH COLUMBIA   Department of Computer Science  2366 Main Mall  Vancouver, B.C., V6T 1Z4   Consent Form   Principal Investigator: Kellogg S. Booth, Professor, Department of Computer Science, University of British Columbia, ksbooth@cs.ubc.ca 604-822-8193  Co-Investigator: Peter Beshai, pbeshai@cs.ubc.ca 604-339-4003  Recruitment: Participation in this project is open to all students enrolled in COGS 300.  Purpose: The overall purpose of this research is to evaluate the effectiveness of a novel system extending the use of typical classroom response systems (e.g., iClickers) with regards to student engagement, enjoyment, and comprehension.  What you will be asked to do: After you have read this document, I/we will respond to any questions or concerns that you may have. Once you have signed this consent form, you will be asked to:  - interact with a digital system (e.g., the iClicker system with custom software) - complete a questionnaire about the experience  This should take about 90 minutes and be completed over 6 lectures. The interaction with the iClickers is part of the regular classroom activity but the questionnaire and analysis of the data are not. Unless you consent, you will not be included in the questionnaire or the data analysis.  Risks: There are no anticipated risks to you by participating in this research. Your responses and performance will have no impact on your grade in COGS 300.  Compensation: There is no compensation for participating in this study.  Confidentiality: The clicker system is pseudonymous. You will register your clicker with the pseudonym randomly selected and provided to you, which will be the only link between you and the data that you provide. This link will be erased at the completion of the study. The data you provide will be kept in a secure database and will only be accessible to the research team. You may choose to stop clicking/responding to questions at any time. If you decline to give your consent, we will remove any of your responses prior to analyzing the clicker data for experimental purposes.  
158Version 1.2 2013-Sep-05 Consent Form Page 2 of 2 The results of the research will be made public through publications; however, no identifying information will be included in any published disclosure of the research.  No audio recordings or photographs will be made of your participation.  Contact for information about the project: If you have any questions or require further information about the project, you may contact Peter Beshai (pbeshai@cs.ubc.ca or 604-339-4003) or Dr. Kellogg Booth (ksbooth@cs.ubc.ca or 604-822-8193).  Contact for information about the rights of research subjects: If you have any concerns about your treatment or rights as a research subject, you may contact the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598 or if long distance, email ORSIL@ors.ubc.ca.  Pseudonym: Please enter the pseudonym (also known as an alias, handle, or nickname) that has been provided to you.   Your pseudonym: _______________   Consent: We intend for your participation in this project to be pleasant and stress-free. Your participation is entirely voluntary and you may refuse to participate or withdraw from the study at any time without consequence.  Your signature below indicates that you have received a copy of this consent form for your own records. Your signature indicates that you consent to participate in this study.  I, (print name) ___________________________ agree to participate in the project as outlined above. My participation in this project is voluntary and I understand that I may withdraw at any time.   ____________________________________________________________________ Subject Signature     Date   _______________________________ Printed Name of Subject     159B.4 Sequence Aliaser MappingThe sequence aliaser requires a mapping from sequences to aliases in orderto work. We used sequences of length 4 to map to 64 possible aliases, asshown in Table B.1. In term 1, we provided slips of paper that indicatedtheir assigned alias and the associated sequence to students, which they thenused to register with the system.Sequence Alias Sequence Alias Sequence AliasAAAD leo BBCA adele CCDC yeezyAABC martha BBDB gosling CDAD jobsAACA jordan BCAB jackson CDBC feyAADB zooey BCBA keanu CDCA owenABAA angie BCCC potter CDDB whoopiABBB perry BCDD cruise DAAC portmanABCD stiller BDAC arnie DABD juliaABDC pink BDBD diaz DACB albaACAC halle BDCB murray DADA lizACBD lopez BDDA cruz DBAB maddyACCB marilyn CAAB bee DBBA vaughnACDA spears CABA leia DBCC oprahADAB aniston CACC hova DBDD gagaADBA spock CADD scarjo DCAD ellenADCC freeman CBAC audrey DCBC marleyADDD pitt CBBD elvis DCCA fordBAAA will CBCB deniro DCDB bruceBABB lucy CBDA stark DDAA carreyBACD rihanna CCAA holmes DDBB bondBADC cera CCBB timber DDCD samuelBBAD swift CCCD gates DDDC milaBBBC deppTable B.1: The sequence-alias mapping used in Sequence Aliaser toallow participants to register their clickers to celebrity aliasesusing 4 letter sequences.160Appendix CRhombus Game DetailsThis appendix covers screenshots and explanations of the games that wereplayed in the classroom evaluation of Rhombus. The games are covered inthe following order:• Prisoner’s Dilemma• Iterated Prisoner’s Dilemma• Ultimatum Game• Stag Hunt• Coin Matching• Coordination161Figure C.1: The standard check-in screen that was used in all gamesplayed with Rhombus. When users press a button on theirclicker, their avatar appears on screen with a green overlay andbig check mark fading in. 
Appendix C

Rhombus Game Details

This appendix covers screenshots and explanations of the games that were played in the classroom evaluation of Rhombus. The games are covered in the following order:

• Prisoner's Dilemma
• Iterated Prisoner's Dilemma
• Ultimatum Game
• Stag Hunt
• Coin Matching
• Coordination

Figure C.1: The standard check-in screen that was used in all games played with Rhombus. When users press a button on their clicker, their avatar appears on screen with a green overlay and a big check mark fading in. Users are inserted in lexicographic order.

C.1 Prisoner's Dilemma

These screenshots are of the Prisoner's Dilemma game. In each round of play, users are randomly partnered, meaning that results from previous rounds may not predict what happens in future rounds because users play with different partners. A graph is displayed during play phases to remind users of how the general population played in previous rounds.

Figure C.2: The Prisoner's Dilemma play screen in round 1. Users press C to cooperate and D to defect. The scores that will be assigned to each user depending on the outcome of their match-up are indicated in the Pay-off Matrix. Avatars of users that have pressed a button on their clicker are darkened and display the word "Played". Subsequent clicker button presses cause the word "Played" to flash to provide feedback to the user.

Figure C.3: The Prisoner's Dilemma results screen after the first round. Percentage breakdown of how the class played is given by the bar above the avatars, with blue representing the number of cooperators and orange representing the number of defectors. User action is represented by the hue of the avatar. User score is shown numerically and encoded in the lightness of the avatar. The two letters on the avatar represent the user's action followed by their partner's action. The total score for each individual user is accumulated at the bottom of each avatar. A histogram shows the average scores of cooperators (0.7) and defectors (2.3), as well as the overall average (1.5). Hovering over the bars in the histogram with the mouse produces the tooltip shown.

Figure C.4: The Prisoner's Dilemma play screen in round 2. The score each user received in the previous round is displayed at the bottom of their avatar. The histogram from the previous results screen persists below the avatars.

Figure C.5: The Prisoner's Dilemma results screen in round 2. Instead of a histogram representing the average scores, a line chart is used to demonstrate trends over the completed rounds.

Figure C.6: The Prisoner's Dilemma play screen in round 5. The line chart persists from the previous results screen. Hovering over the chart with a mouse produces a tooltip that shows summary information.

Figure C.7: The Prisoner's Dilemma total results screen. After completing all of the rounds of the game, cumulative results are shown. The avatars are coloured with varying lightness to encode their different scores, with the lighter colours representing higher scores. The highest scoring user is outlined in yellow.
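To make the round structure described above concrete (random re-pairing each round, with each player's score taken from a pay-off matrix), here is a minimal Python sketch. The pay-off values and function names are assumptions for illustration only; the screenshots do not show the exact matrix Rhombus used, and with an odd number of players this sketch would simply leave one player unpaired, which may differ from how the system handled it.

```python
import random

# Illustrative pay-off matrix for one Prisoner's Dilemma round.
# Keys are (my action, partner's action); values are my score.
# These numbers are assumed for the sketch, not taken from Rhombus.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def pair_randomly(player_ids):
    """Shuffle the players and pair them off for a single round."""
    ids = list(player_ids)
    random.shuffle(ids)
    return [(ids[i], ids[i + 1]) for i in range(0, len(ids) - 1, 2)]

def score_round(actions, pairs):
    """Return each player's score given the action ('C' or 'D') they chose."""
    scores = {}
    for a, b in pairs:
        scores[a] = PAYOFFS[(actions[a], actions[b])]
        scores[b] = PAYOFFS[(actions[b], actions[a])]
    return scores

# Example round with four players identified by their aliases.
players = ["leo", "martha", "adele", "yeezy"]
pairs = pair_randomly(players)
actions = {"leo": "C", "martha": "D", "adele": "C", "yeezy": "C"}
print(pairs, score_round(actions, pairs))
```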
C.2 Iterated Prisoner's Dilemma

This version of the Prisoner's Dilemma is very similar to the Prisoner's Dilemma described in Section C.1, with the main difference being that users maintain the same partner for five consecutive rounds, as opposed to being randomly assigned a different partner each round. The class is split up into two teams of equal size and the partnerships are formed across the teams. The other difference was that this game was run in three phases where students alternated between playing as humans and playing with scripts "as computers". Phase 1 was team 1 human vs. team 2 human, phase 2 was team 1 human vs. team 2 computer, and phase 3 was team 1 computer vs. team 2 human.

Figure C.8: The Iterated Prisoner's Dilemma play screen in round 1. Users press C to cooperate and D to defect. The scores that will be assigned to each user depending on the outcome of their match-up are indicated in the Pay-off Matrix. Avatars of users that have pressed a button on their clicker are darkened and display the word "Played". Subsequent clicker button presses cause the word "Played" to flash to provide feedback to the user. The subtitle for both teams was configured to be 'Human' to indicate to users they were in the human vs. human phase of the game.

Figure C.9: The Iterated Prisoner's Dilemma round results screen. Total scores for each team are displayed in the team headings (14 for team 1 and 20 for team 2). The percentage breakdown of how each team played is given by the bars at the top of the team groups, with blue representing the number of cooperators and orange representing the number of defectors. User action is represented by the hue of the avatar. User score is shown numerically and encoded in the lightness of the avatar. The two letters on the avatar represent the user's action followed by their partner's action. The total score for each individual user is accumulated at the bottom of each avatar.

Figure C.10: The Iterated Prisoner's Dilemma play screen in round 2. The totals for team score are indicated in the team headings (14 for team 1 and 20 for team 2) and the score each user received in the previous round is displayed at the bottom of their avatar.

Figure C.11: The Iterated Prisoner's Dilemma phase 1 results screen. After completing all rounds in a phase, cumulative results are shown. The avatars are coloured with varying lightness to encode their different scores, with the lighter colours representing higher scores. The highest scoring user is outlined in yellow.

Figure C.12: The Iterated Prisoner's Dilemma cumulative phase results screen. After phase 2 and phase 3, users see a cumulative total of scores over all the phases so far completed. This figure shows the results after all phases are complete.

C.3 Ultimatum Game

The following screenshots are of the Ultimatum Game. In this game, the users take on two roles in each round: as giver and as receiver. They first play the role of giver, where they must decide how much they wish to offer their partner. After everyone has completed this step, all users play the role of receiver, where they decide whether or not they wish to accept the offer a giver made them. The partnering in this game is asymmetric; the users form a directed cycle where their forward partner is the person they are making an offer to (they are in the role of giver), while their backward partner is the person they are receiving an offer from (they are in the role of receiver).

This game used three phases: one where everyone played as humans in both roles, one where the givers were played with scripts "as computers", and one where the receivers were played with scripts "as computers".
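To make the asymmetric partnering concrete, the sketch below shows one possible way to build the directed cycle and score the two offers described in the captions that follow (offer 5 and keep 5, or offer 1 and keep 9, with a rejected offer scoring 0 for both sides). This is an illustrative Python sketch; the function names are not from the Rhombus source.

```python
import random

def build_offer_cycle(player_ids):
    """Arrange players in a directed cycle: each player makes an offer to the
    next player in the shuffled order (their forward partner) and receives an
    offer from the previous one (their backward partner)."""
    ids = list(player_ids)
    random.shuffle(ids)
    n = len(ids)
    return {ids[i]: ids[(i + 1) % n] for i in range(n)}

def score_ultimatum(offers, accepted, cycle):
    """offers[giver] is 5 or 1 (the giver keeps 10 minus the offer);
    accepted[receiver] is True/False for the single offer made to them."""
    scores = {player: 0 for player in cycle}
    for giver, receiver in cycle.items():
        if accepted[receiver]:
            scores[giver] += 10 - offers[giver]   # giver keeps the remainder
            scores[receiver] += offers[giver]     # receiver takes the offer
        # a rejected offer leaves both sides with 0 for this pairing
    return scores

# Example: each player both gives and receives within the same cycle.
players = ["leo", "martha", "adele"]
cycle = build_offer_cycle(players)
offers = {p: random.choice([5, 1]) for p in players}
accepted = {p: True for p in players}
print(cycle, score_ultimatum(offers, accepted, cycle))
```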
Figure C.13: The Ultimatum Game play screen, where the users are acting as givers. Users press A to offer 5 (they keep 5) or press B to offer 1 (they keep 9). Avatars of users that have pressed a button on their clicker are darkened and display the word "Played". Subsequent clicker button presses cause the word "Played" to flash to provide feedback to the user.

Figure C.14: The Ultimatum Game play screen, where the users are acting as receivers. The offer made to users is displayed on the user's avatar. Users press A to accept or B to reject it. The percentage breakdown of how many users offered 5 and how many offered 1 is indicated by the bar above the avatars, with the blue colour representing those that offered 5, and orange those that offered 1.

Figure C.15: The Ultimatum Game results screen, showing how users fared when they played as givers. A green square indicates that the offer they made was accepted, while a red square indicates the offer was rejected. When an offer was accepted, the score received by the giver is indicated by the number on the avatar. When an offer was rejected, the demand they made is shown with an arrow to 0 (e.g., if a user offered 1 and was rejected, they would see "9 → 0"). The percentage of acceptances and rejections is indicated by the bar above the avatars.

Figure C.16: The Ultimatum Game results screen, showing how users fared when they played as receivers. A green square indicates they accepted the offer they were given, while a red square indicates they rejected it. When an offer was accepted, the score they received is indicated by the number on the avatar. When an offer was rejected, the amount they were offered is shown with an arrow to 0 (e.g., if a user was offered 1 and rejected it, they would see "1 → 0"). The percentage of acceptances and rejections is indicated by the bar above the avatars.

Figure C.17: The Ultimatum Game combined results screen. Here the number on the avatar represents the sum of the scores they received playing as a giver and as a receiver. The avatars are coloured with varying lightness to encode their different scores, with the lighter colours representing higher scores. The total score for each individual user is accumulated at the bottom of each avatar.

Figure C.18: The Ultimatum Game phase 1 results screen. After completing all rounds in a phase, cumulative results are shown. The avatars are coloured with varying lightness to encode their different scores, with the lighter colours representing higher scores. The highest scoring user is outlined in yellow.

Figure C.19: The Ultimatum Game cumulative phase results screen. After phase 2 and phase 3, users see a cumulative total of scores over all the phases so far completed. This figure shows the results after all phases are complete.

C.4 Stag Hunt

The following screenshots are of the Stag Hunt game. This game is similar in mechanics to the Prisoner's Dilemma (Section C.1), but the pay-off matrix is modified, which produces different incentives. As with the Prisoner's Dilemma, the game features three phases: human vs. human, human vs. "computer", and "computer" vs. human.

Figure C.20: The Stag Hunt play screen in round 1. Users press A to hunt a stag and B to hunt a hare. The scores that will be assigned to each user depending on the outcome of their match-up are indicated in the Pay-off Matrix. Avatars of users that have pressed a button on their clicker are darkened and display the word "Played". Subsequent clicker button presses cause the word "Played" to flash to provide feedback to the user. The subtitle for both teams was configured to be 'Human' to indicate to users they were in the human vs. human phase of the game.

Figure C.21: The Stag Hunt results screen. Total scores for each team are displayed in the team headings (28 for team 1 and 28 for team 2). The percentage breakdown of how each team played is given by the bars at the top of the team groups, with green representing the number of stag hunters and purple representing the number of hare hunters. User action is represented by the hue of the avatar. User score is shown numerically and encoded in the lightness of the avatar. The two letters on the avatar represent the user's action followed by their partner's action. The total score for each individual user is accumulated at the bottom of each avatar.
Figure C.22: The Stag Hunt phase 1 results screen. After completing all rounds in a phase, cumulative results are shown. The avatars are coloured with varying lightness to encode their different scores, with the lighter colours representing higher scores. The highest scoring user is outlined in yellow.

Figure C.23: The Stag Hunt cumulative phase results screen. After phase 2 and phase 3, users see a cumulative total of scores over all the phases so far completed. This figure shows the results after all phases are complete.

C.5 Coin Matching

The following screenshots are of the Coin Matching game. In this game, users are split into two equal teams: half as matchers, and half as mismatchers. The matchers aim to enter the same choice as their anonymous, random partner from the mismatchers team (e.g., both select heads or both select tails), while the mismatchers aim to enter a different choice than their partner on the matchers team (e.g., one selects heads while the other selects tails). There were three phases to this game, similar to the Prisoner's Dilemma (Section C.1): human vs. human, human vs. computer, computer vs. human.

Figure C.24: The Coin Matching play screen in round 1. Users press A to choose heads and B to choose tails. Avatars of users that have pressed a button on their clicker are darkened and display the word "Played". Subsequent clicker button presses cause the word "Played" to flash to provide feedback to the user. The subtitle for both teams was configured to be 'Human' to indicate to users they were in the human vs. human phase of the game.

Figure C.25: The Coin Matching round results screen. Total scores for each team are displayed in the team headings (4 for matchers and 8 for mismatchers). User score is shown numerically and encoded in the lightness of the avatar. The two letters on the avatar represent the user's action followed by their partner's action. The total score for each individual user is accumulated at the bottom of each avatar.

Figure C.26: The Coin Matching play screen in round 2. The totals for team score are indicated in the team headings (4 for matchers and 8 for mismatchers) and the score each user received in the previous round is displayed at the bottom of their avatar.

Figure C.27: The Coin Matching phase 1 results. After completing all rounds in a phase, cumulative results are shown. The avatars are coloured with varying lightness to encode their different scores, with the lighter colours representing higher scores. The highest scoring user is outlined in yellow.

Figure C.28: The Coin Matching cumulative phase results. After phase 2 and phase 3, users see a cumulative total of scores over all the phases so far completed. This figure shows the results after all phases are complete.

C.6 Coordination

The following screenshots are of the Coordination game. This game is similar in mechanics to the Coin Matching game (Section C.5), but the scoring system is different. If both partners select the same choice, they both get 0 points, while if they select different choices, they both get 1 point. The phases of the game function the same as in Coin Matching: human vs. human, human vs. "computer", "computer" vs. human.
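A minimal sketch of the scoring difference between these two games might look like the following. The Coordination values (0 for identical choices, 1 for differing choices) follow the description above; the Coin Matching point value is assumed, since the appendix does not state it, and all names are illustrative rather than taken from the Rhombus source.

```python
def score_coin_matching(matcher_choice, mismatcher_choice):
    """Assumed Coin Matching scoring: the matcher scores when the two choices
    match, the mismatcher scores when they differ (1 point is a guess)."""
    if matcher_choice == mismatcher_choice:
        return 1, 0  # (matcher score, mismatcher score): matcher wins
    return 0, 1      # mismatcher wins this pairing

def score_coordination(choice_a, choice_b):
    """Coordination scoring as described above: identical choices score 0 for
    both partners, differing choices score 1 for both."""
    points = 0 if choice_a == choice_b else 1
    return points, points

# Example: one partner chooses heads ("H"), the other tails ("T").
print(score_coin_matching("H", "T"))  # (0, 1)
print(score_coordination("H", "T"))   # (1, 1)
```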
Figure C.29: The Coordination game play screen in round 1. Users press A to choose "A" and B to choose "B". Avatars of users that have pressed a button on their clicker are darkened and display the word "Played". Subsequent clicker button presses cause the word "Played" to flash to provide feedback to the user. The subtitle for both teams was configured to be 'Human' to indicate to users they were in the human vs. human phase of the game.

Figure C.30: The Coordination game round results screen. Total scores for each team are displayed in the team headings (4 for matchers and 8 for mismatchers). User score is shown numerically and encoded in the lightness of the avatar. The two letters on the avatar represent the user's action followed by their partner's action. The total score for each individual user is accumulated at the bottom of each avatar.

Figure C.31: The Coordination game phase 1 results. After completing all rounds in a phase, cumulative results are shown. The avatars are coloured with varying lightness to encode their different scores, with the lighter colours representing higher scores. The highest scoring user is outlined in yellow.

Figure C.32: The Coordination game cumulative phase results. After phase 2 and phase 3, users see a cumulative total of scores over all the phases so far completed. This figure shows the results after all phases are complete.

Appendix D

Term 1 Interview Transcript

This appendix presents the full interview transcript from the interview held with the instructor who used Rhombus during the first term. The interviewer was the researcher and is denoted by R: in the script, while the instructor's dialog is denoted by P:.

R: Why did you decide to run games in the classroom in the first place?

P: One of the things we did coming into this class, COGS 300, so this is Introduction to Cognitive Systems, was to upgrade the game theory content. The program the course had used games informally on occasion, especially iterated Prisoner's Dilemma in a kind of very informal way. And there was some pressure, especially from the Computer Science department, that the game theoretic aspect of the course be made a little tighter. Our [inaudible] had experienced using games both in Computer Science classes and in Philosophy classes and the experience was that unless people actually played the games, the theory didn't really, was very difficult to teach, was harder to teach, unless people... So we began using games, in class games, a parallel stream that reinforced this was that we had decided in the class to use clickers for quiz taking and so it was clear that for, this apparatus would also be useful to support gaming, although the built in software was quite deficient for this. There was no special support for gaming, but it let us collect moves from class and play N-person games, so we did that.

R: So immediately you started using clickers?

P: I think probably we used clickers, I taught this course this is the seventh time, I think the first term we didn't have clickers, right away the second term. Another stream, just to mention it, I had a student who was interested in games who became the TA the second term and he was willing to do the extra work of let's figure out how to actually use clickers. So it was his class presentation that convinced me that the clickers would really support gaming more than I, I had been using them quite loosely, just collecting move for this, move for that. So Eric Tulin(?), who was an undergraduate, became the TA and that was influential. To have a student actually work it through, so then I think the next year, we both terms, used the clickers to play games.
And altogether, the feedback we got werethe games were the good part. The third strand the stream in COGS, thestream of thinking is the original Turing idea that we should think througha computation in terms of what a humanly emulated computer could do.Could we write code that a human could understand? And so that streamedinto it. We began saying you should play this game both as a human andas some code you wrote out that you could then give to another personthat they then could execute. So we’re using games as a model for simplecomputer program creation. This is a mixed crowd, maybe only a third ofthem have any real programming experience. So it gave people who were atthe lower end of that range a chance to think through a problem in terms ofa series of instructions that someone else can understand.R: So then the reason that the games are pedagogically viable is thatpeople weren’t really grasping the games without playing them themselves?P: Well I mean, [inaudible] your own background, but he was not incomputer science, but one can teach you game theory as an axiomatic math-ematical theory, in which case it has almost no impact on most students Iwould say. Or you can teach it philosophically as puzzles, or as in out of..um, so, isn’t it amazing that the theory predicts this, but people want to dothis? Which doesn’t have much impact either because people haven’t boughtenough into the axiomatic theory to feel that there is a difference. OR youcould, you can actually get people to play the games, which often peoplewho do the theory say “well, oh that’s just stupid” I mean that’s certainly201a tradition I came from but when when you actually do it, it’s the theorythat’s [inaudible], but actually doing it and watching it, the people that fol-low the theory actually do better. Or the theory is predicting opportunitiesthat.. and so, I think that’s where you get the, a big plus, is that, and italso now you’ve thrown out in the class, you’ve made it their problem totry and figure out. You have people, you get the various possible solutionsbeing voiced by different people. Then you go, wait! no I think you shouldalways cooperate even if you get worse. Well then maybe these scores aren’tactually your utilities. Oh, well I see. So you work back through it that wayand I think that is probably the impact it has had. I think teaching it justas theory you get a very small number who really get it.Another thing, the games we use have also been used a lot in psychologicalresearch and so as a kind of follow on to understand those experiments, ithelps to have been a subject by having played the games, otherwise you’re,once again it’s like “they did something”, which doesn’t know, “there wassomething done in the experiment”, but y’know...R: And you said for you having them play as computers as humans,that’s so they can get an understanding of the Turing...P: Yeah. Well there’s two aspects. One of my, maybe my guilt is, thecourse used to always start with them reading Turing’s Computing Machin-ery and Intelligence and working through that which is quite a philosophicalpaper, and we’ve sort of dropped it out because they read it in other coursesand it’s gotten to be kind of a set piece, but it, as Jedediah Pearl remindedme when he came and talked, it’s a luminous paper that really holds up thisreally high standard that we should both understand computers completelyand this transparent mathematical way, but also they should do human- weshould always be thinking What can the computer do something? 
Whatwould it take to make the computers do humanly worthwhile things like un-derstanding causation? So I was going to drop this out, but this idea is reallyimportant, so how do you get the idea back is you give people a task that’sTuring like. So actually works out in two ways, the one Turing idea is you’vegot to be able to write code that another person would understand. And soyou’re already seeing code, and they’re reading Newell and Simon instead on202Symbol Systems and Search, and so, this is an idea back in kind of the 60searly 70s Computer Science idea [inaudible], Code is a symbol system withan interpreter, but the interpreter can’t be that smart. You have to writethis code so that just a dumb, just another student... so it gives us a chanceto talk about what they know about real cognitive abilities and boundedrationality by realizing that if you’re going to hand your script to somebodyelse, it’s got to be written in some way that, y’know, you don’t have anestablished common language for writing game programs, so. And this year,people, y’know, “is it all right to write is as a kind of diagram.” Well, youhave to, you have to figure out for this group. Will this group understandthat? So the idea of common code and how you get coordination around it,all of that stuff gets kind of bled into the course around that. So that’s onestream.The other stream is both in Cognitive Systems, but also especially in theEthics side, which fears largely the way I teach the course, is the questionkeeps arising: what things can computers do as well as people? And perhapsbetter? And what do we do with that cross-over? And what’s the weirdstuff around the crossing over point? And I don’t do it in terms of thetranscendence stuff, the singularity, whatever. I don’t care. I mean, I careabout the really practical way, as like, once driverless cars can drive betterthan people, then we have a real ethics problems because then human drivingbecomes worse than what we can... There’s a phrase that my mind isn’tjumping at the moment, but in certainly medicine, you’re not allowed to dosomething once there’s a practice that is better. And so, there’s experimentalliterature on that, and there’ll be that later on, but this is an opportunityto talk about that, which is, you’re writing these little programs, but you’realso playing these games as a full-fledged human. Which games, if any,will the computers do better than? Is the rigidity and limited repertoire ofthe computer program sometimes an advantage rather than a disadvantage?And especially you get to the Ultimatum Game and it looks like that mightbe, so that’s an edge to it that’s especially relevant to the way we’re doingCOGS, right? Because you’re not going to actually get inside of the designof a driverless car or drone, but they raise up the question really fiercely,203which is, what if these things are actually better than human soldiers orhuman drivers? Well you can raise it in, and once again, the game kind ofbecomes this little toy world that you can raise this question in. Which ofthese games would you bet the program might do as well.R: For example, you had the students play the ultimatum game. Is thereanything in particular you wanted them to learn playing that game?P: Well, the Ultimatum Game, is in a way the richest one, probably onlymakes sense to play it after you’ve played other games. I mean, backinginto the critique of the way it used to be done. 
It used to be done with“God, there’s this counterintuitive and hard game the Prisoner’s Dilemmaand look, you don’t expect people to cooperate, but they do” All of whichis wrong in theory. So what you want to do is, that’s not, none of that’strue. What’s true is that single play game is an easy to play game, it’sjust, it has a miserable outcome. Right? And the repeating game actuallyis easy to play, but it has this [inaudible: illian] outcome so there’s no, thegreat mystery, the Hofstadter kind of mystery about it is just confusing themost basic things that you shouldn’t confuse in game theory, between singleshot and repeated games, and so having done all that, so in a way, in thePrisoner’s Dilemma there’s no mystery left. In a way, the only mystery iswhat we generated, which was a good idea, I mean if you knew in advancewhen the game ends, then there is an interesting case because it should rollback logically or rationally, but people don’t roll it back, and so there’s that.But, I mean there’s, so you dispel the mystery of Prisoner’s Dilemma, but itopened up the whole folk theorem and social choice and all this interestingstuff, but there *IS* mystery about the Ultimatum Game because threatsare rationally very disturbing. Clearly, a core part of human and animalbehaviour repertoires. And utterly mysterious from the point of view ofgame theory which transparently says “no, why would I ever take anything?”“If I’m offered anything greater than zero, good deal.” So there’s this reallyclear contrast which doesn’t come up, so now the mystery is back. There’spsychological evidence that people do make threats and they accept threatsbetter from ... so that’s why the Ultimatum Game. The Ultimatum Gameis important, but I don’t think if you only played it, once again, it’s kind204of, you need a kind of game curriculum to give people enough of. And sothis is the case for the approach we’re taking. You have to do it enough sothat they’re not confusing, “God this is interesting. we’re putting numberson things” or “God, a series makes a difference”. All of each of these thingsis really important and has to be figured out. You don’t want them allpiled up into.. and I doubt you could go in in one day, one lecture, and do“here’s all the cool things about game theory.” So it’s a little bit like goingin and saying “here’s all the cool things about computers, or programming,or anything” You have to have enough of a track record so you can seewhy this thing is important. So the Ultimatum Game puts the, the way Isee it now, the Ultimatum Game brings these things together. Especiallybecause we then do evolutionary game theory and say “actually, in evolution,threats are selected for” Right so, now we’re beginning, maybe nature doesselect some times for non-rational agents. David loves [inaudible] and knowKrustov(?)’s shoe example and you connect this all up with politics so thatending was really good and it also motivated, evolutionary game theory iskind of a mystery if you don’t know regular game theory. So this time they’rethinking in terms of looking ahead and what .. they’re doing all that stuffand now wait you’ve got this approach that has very simple agents and itgets results, which can be related systematically to the Nash results andthat’s really interesting.R: So then, what you want them to learn by playing the game is kind ofthat it’s different from the theory, or ?P: Well, there’s multiple levels of learning. 
You want people to learnthe mathematical discipline, or the case where thinking in terms of insidea model pays off and is interesting, so there’s all of that. Taking symbolsseriously and really making sure the model fits and so doing all that stuff.And also that’s enabling you to raise all sorts of questions about “does thislittle model actually fit?” so on the exam there’ll be a little coordinationgame, but then it will say here’s [how the case?] these old friends don’twant to eat at the same restaurant, here’s this coordination game. Is thatthe game they’re playing? And people should become sophisticated enoughto say, no that doesn’t because you said they don’t ... And also secondary205preferences, does that capture ... there’s all of that. And taking that un-derstanding and putting a tiniest program, which is a strategy, and makingthat strategy available. And then, I think, more subtly, now they’re, I mean,just a background. COGS’ old frame was first you build an agent, then yougive it an environment and then you add other agents to it, which is kind ofan old timey way to build things, since you want an evolutionary approachthat would say there is actually always other agents. And the MIT waywould say at least there’s always an environment, so brains come after allthat stuff. So [bradically? -> practically?](?) for me the social would comebefore even ... what environment your stuck in is probably going to be aproduct of what group you’re actually, can afford to get..., Anyway, there’sgoing to be a reversal of that ordinary thinking. And so games are supersimple models of social environments that can let us pull that rehearsal off.The old way is that way(?) we go in, we just have one sensor, 1 bit sen-sor for light or dark, or touch or not, and that’s great, but in a game youcan say, well there could be a 1 bit sensor for success or failure and you’retrying a coordination problem. So you have a really simple model of socialenvironments. It also enables people to think about social environment inkind of that simple, COGgy simple minded sort of way, but also by doingthat thinking, they’re thinking that all agents aren’t going to be the same.I mean there’s no reason to think that all of us, 40 of us, are going to comeup with the same agent, and so variety and how do you deal with it falls outvery quickly from that, which is to my way of thinking, a good thing. Otherapproaches to COGS had all agents being the same because it fell out of akind of simplicity, I mean, how else would you build a suction(?) sensor, avery simple cognitive agent, this has [sid other](?) effectsR: So then, after they’ve played the game, or maybe during, how wereyou able to establish whether or not they learned what you had wanted themto?P: Question sort of presupposes I guess what we’re supposed to use,which is in the kind of learning framework, we establish goals, ... you couldtell David was probably being reviewed this year. Each class puts (side?)up, yknow, what are learning goals? Not to criticize that, but I think this206probably fits into a more exploratory form of learning. So the games aren’tdesigned to reveal failures to understand on the fly. They’re not on say theincremental testing mode that say you would get from some recent onlineteaching environment. I think they’re more done in the mode of throwingthe class into a highly interactive situation where the’s lots of chances forthings to actually go wrong. 
I mean, now hopefully not going wrong withthe plumbing level, but certainly going wrong in the sense that they’re notliving up to people’s expectations. So I would say, good signs that it is goingwell are that we have “WHAT? People are choosing ..? Don’t you peopleunderstand how this works?” That’s a good outcome. You can afford todo that in smaller classes and yknow you’re not dependent on this workingfor 200 or online. I mean, I say all that because there’s lots of environmentswhere it would be a lot harder to use that method, but I mean, I would thinkthe problem with university students in this kind of a class is, why wouldyou come to class and not be looking at something else on your screen? Whywould you be engaged? One way you’re engaged is you’ve invested in thissee-no-light process and invested in and suddenly it’s like the expectationsyou invested in that didn’t work out in ways you don’t understand and nowyou’ve got to decide what resources to use. How do you explain that? Arethey all wrong? Are you wrong? Was the theory misapplied? And I thinkthat’s where lots of the learning happens. The more that you can do morerounds, you can do more variations, there’s more chance for that to happen.I mean ideally if we change this in this way, would this still happen? Atthe end we saw that because the evolutionary stuff is bottled with alreadywritten software where you can change the population proportions. “Yknow,would this still happen if an invader came in, using this strategy. Yknowlets poke that there, oh that’s not what I expected at all” So that’s what,it’s designed with that kind of... Now, ideally, and we’re not there obviously.You should also be able to say to people, well, some things I’d like, “Well, let’ssee: did computers do better?” and Did people learn over time? Becauseyou get people coming up afterward saying, “I was looking up there and Ithink people changed over time. Will you go do the...” and I would runstats and bring it and yeah you’re right, over time, over those five rounds,207the proportion of cooperation did change in the way that you suggested. Soanything that would make that, would push that towards an open, publicset of data would also then give you a more reflective version of this.The one thing we paired this with that you wouldn’t have seen happen,there was running in my side of the class also a blog. So they got gradedfor two blog contributions. Certainly last year when I taught this alone, thegames figured a lot in the blogs. So people would, yknow, “why this gamedidn’t really, wasn’t really a game about cooperation” or “how could thishave happened?” And so, some of the learning there happens if you haveanother form for people to um yeah so.R: So it’s kind of like the learning maybe isn’t so evident from an indi-vidual’s play in the game, but kind of the group and the realizations of howthe group dynamics work?P: Exactly and this plays back to why you want to be able to link up aclass full of people. It’s because it is a social- it’s a game that has a socialcomponent, and otherwise it’s a puzzle. And individual exercise. 
Which isuseful, I mean it’s useful for things that are based on the pure agent side.I mean David has some pretty good exercises where people learn to builda Bayesian model of decision and that you don’t need a group for, but ifyou are trying to build a model of how many threateners will I face in anenvironment, then playing it out in a group is going to be much more useful.Yea, I would say, in asking that question, you could see in one direc-tion you might go, which is, it would help if people’s strategies were moreprogrammatic. And certainly I taught this in AI, that’s the way you did,you’d have to submit your strategy as runnable Lisp code and then there’smany more possibilities of re-running it with different populations and forthis course I think that wouldn’t work. I toyed with it, but it’s not goingto work. There’s not a common language even though they’re supposed tohave one. They take the.. you lose being able to run it in different ways,but actually having people basically enter their outcomes by the clicker, it’sa lot more involving than (so grain?).R: You taught several games. Do they all have the same type of objec-tive? The same kind of learning objective?208P: You can see that, it’s now come together, partly through the processwe’ve been through this term. Now since playing the games is easier andeasier and that’s dropping more into the infrastructure, you can think aboutwhat the ideal curriculum using them. I can give you the next term’s so youcan compare them. I mean, you’d like, externally you’d like this curriculumthis series of games to make sense and one way to make sense is to fit otherthings we’re doing in the course. And so one change we’ll make is the firstgame. The other thing they’re doing in the course is writing a cooperativerobot to play simple traffic games in the lab. So their robot is supposedto solve a 4-way stop sign or something like that. So ideally the first gameshould also be a coordination game. And so that’ s a change we’ll make fromthe zero-sum game we played this term. As it turns out the competitiveaspect didn’t, it was too random. But a coordination game has more, itcan be very rich with very simple, I mean. Just a game where you haveto choose A and B and A and B is the coordination pair is hard. I meaneveryone can’t choose that A because now half the people have to switch, butwhich half switches and how do you distribute that switching. All of that iscool stuff and it’s cool from a deeper computer science perspective as well.So now, here’s the case where it’ll be easy to play that as to play the gamewe played before, so we’ll just switch the game around in the apparatus,but we’ll get a much nicer external match to what they do in the rest ofthe course as well as hopefully another interesting game to play. Actuallytesting out, I put it on the final, so this is one way to pretest it is to putit on the final and then see what people come up with on the final. So Ithink the curriculum has a simple game that really more about getting aprogram out and comparing the imminent, but with no literature. I meanexcept maybe there’s a literature that says only people can solve problemslike this. The [Shelding?] literature in game theory. And then move to thePrisoner’s Dilemma single shot, then the iterated one and then talk aboutiterated game theory and then the ultimatum game. So a chance to talkabout bargaining as well. 
So there’s a little mini curriculum based on gamesgetting harder in one way, or getting, bringing out different features of thesocial environment.209R: So can you describe maybe in more detail how you ran these gamesin previous terms?P: The selection was different because what’s easy and hard to do. Sowe did one game different term to term, but I think an iterated prisoner’sdilemma where they wrote out a program on paper and switched with an-other person and then each person ran them and wrote out on the paperthe score and I think we collected information just on the blackboard andI think I photographed the blackboard with my camera and then enteredthe stuff in the spreadsheet, and so the feedback loop was from the end ofone session Tuesday and then Thursday. This was before clickers. Then wetried it with clickers with trying to enter. With clickers we definitely wentimmediately to, we used an n-person game. And that was easy and we gotthe results right away and entered in the spreadsheet right away and at onepoint we tried to use the clickers to collect data from the iterated game wherethey’d already written on the sheets saying how many people’s fell withinthis amount, but it was a little awkward. This worked, but we then the next,we were teaching that term, I was the only one teaching that term whichwas not supposed to be, supposed to be team taught. Next term we startalternating and then it becomes really a pain, because you’d do somethingand the other person would teach and then yknow you’d sort of “oh here’sthe results from that game you played a week ago”.R: But with the clickers it still took until the next session before youcould show the results?P: The clickers let us, for that one game, for the N-person Prisoner’sDilemma, let us right away look at some results, but for the other game,maybe we collected stuff by clicker, and then I had to analyze and cameback so. And the delay interacted poorly with coordination with the otherinstructor. So, that led to two things: one we don’t coordinate that wayanymore. David and I find it much [reduced?] to have a week, a week, aweek and given that week week week the old way wouldn’t be that bad, butit’s really great now because you get to build up to a game and play it or playit and analyze it all within a week. But the papers go missing, the papersaren’t always legible and the instructions are a little bit too complicated and210so the failure rate was quite a bit higher. I mean we’re talking about at least10%, you’re kind of guessing at what. Maybe it’s fair to say this, we haven’tbeen consistent about using them for giving grades or participation pointsor whatever for games, but I think when you use clickers, they’re associatedwith the seriousness of, the clickers have a seriousness about them becausethey’re used for quizzes. Now, whether that, it’s very difficult to know thedifference that makes empirically, but you have a sense that there’s a littlebit of, they know that when they click they’re actually linked into that andthat seems to be a good thing in terms of running the games.R: So you’re saying you had at least one game that you played thatdidn’t involve clickers.P: Yea typically we had the iterated Prisoner’s Dilemma and it didn’tinvolve clickers, except maybe once or twice and just for some data collection.And almost every time we played the N-person prisoner’s dilemma which hassuch a simple game output that the clickers let us both do it and talk aboutresults right away. 
So the problem there, it meant that, knowing that thatworked, we tended to use that game and actually started doing readingsaround that game. But it’s a really hard game to analyze, so what you’reseeing now is an artifact of a difficulty in classroom procedure. In a stadiumwe can do this, so I guess we’re going to do a lot of this card flipping. Whyare we doing that? Well because that’s the thing we can do with a stadiumfull of people, well this is the N-person game we can do easily in the class,but that’s not a good reason to choose the game. We should be able, Imean ideally, the instructor should have a palette of games they can choosefrom and say “No, I’m really interested in this, I want to use this game thathas this shape” not just, here the data is... This is an interesting theme.Certainly in the class we played this out, we said “look at this bias” and it’sjust like having an MRI and you can’t do certain things and you’re goingto get people doing other things in an MRI and there’s going to be yourexperimental tradition is going to look a certain way because of the bias ofthat apparatus. So here we go again, this apparatus can give us a bias orreduce the bias. So I think by not being forced to do N-person games, withactually being able to now, I mean, crucially, pair people up individually,211then you can play games, the games you want to play rather than the gamesyou’re playing because that’s ...R: So just to dig a bit deeper into how it’s actually working, so the N-person would be like you open voting, people click, you close voting, youexport the data or something and you analyze it and you run some scriptson it in excel or?P: Well basically. We’re getting bar charts in class, so we can get a quicktake on that, but then we can also come back and say I’ve analyzed thisand we can look at a little bit more complex ways. But mainly we couldget round 1, okay here’s the mix, round 2, and we’re putting bar charts upand copying the results. But you can’t do that individually because it’s anorder of magnitude more. So it’s a really simple game outcome and a reallysimple epistemology of everyone sees what the proportion was, but even thenwe were stopped from doing. What the literature has moved to saying thatgame itself is pretty much, the Nash equilibrium was not to cooperate at all,so unless you put people in smaller subgroups, and then change those groups,it’s unlikely, but now you’re, we can’t do that, so we’re back to yknow.R: So then, can you describe what improvements the system we usedthis term were able to let you do?P: I think the main improvement is that it enables you to play standardgames. That is, the games that actually drove game theory as like theoreticaldevelopment. So in particular, these are two person games people are playinganonymous individuals and often a repeated number of times and so couldn’tdo that before except on paper, so maybe once. And now we can do it,each person is playing in two different roles, as a program and as a person,you can stretch the games out, at least we have an order of magnitudemaybe actually more than that of compression of how much we can get inin a session. So what we have, sometimes we play three rounds where theyplay five or ten turns per round, so that’s a lot of game play. So I thinkthat anonymity is clear and clear to everyone and sort of puts the games inanother space with separate pseudonyms. So I think that’s all, I mean we’remeeting conditions of the literature. 
I mean, those games are supposed tobe played anonymously between pairs.212So we’re now able to play more games that way. Since we’re linking upwith the literature, we’re not doing this weird thing of reading papers of whatyou don’t want to read because they’re linked to the game that we’re stuckplaying and now we’re trying to explain complicated papers. We’re readingclassics. We actually read, when we were doing the Prisoner’s Dilemma, wegot to read Kahneman’s Nobel Prize winning lecture. That’s an importantthing. If you can play the classic game, then you can read the classic paperand look at very simple, but very influential results. And that means thegame side if closer to the experimental side when we’re doing experiments.I mean, we actually do Kahneman’s famous experiment you do in class andyou do the Trolley problem, the real trolley problem. And it was weirdthat for games that we weren’t actually doing the standard games. We wereplaying some weird game that we could jigger into the classroom format. Sothis is a huge plus. So they know now the standards. If they went from thisclass to Kevin’s standard game theory class, they wouldn’t “oh wait that’sall different from..” it would be more “no, we know those basic things” so Ithink that’s kind of the original goal we started with, which was this classshould feed the standard view used in Computer Science. I think that’s beena plus.So the other things we can do then, the data is now in a standard format,it’s already excel friendly and ready. I think at this term each time that Icould report out yet further “yes, what we were talking about in class didhappen” or “No, even though it looked like it, no computers actually didn’tdo better at this game.” So you have a finer level of analysis available becauseall the data already in a csv, standard form.R: So how big of an impact did the system have on the games - from ad-ministration to pedagogy to engagement? Maybe in comparison to previousterms.P: I think it doubled the number of games we played. I think we typicallyplay one on paper and one that N-person one, and so this time we playedfour different games or did we play even five. Oh yes actually we played fivebecause we added the limited Prisoner’s Dilemma and the Stag Hunt game.It let us experiment more and now, since it’s cheap to experiment we could213try more games and modify the curriculum. In the light of that we’ll goback to four games next time. So I think that’s, it’s made it a strand in thecourse. Something you could do easily. Once people figured it out, it wasfast to do. We could do it in a third of a session. Previously to do one onpaper it was the whole. It took, it’s paper, it’s confusing, and oh no wait,so it took a long time and a lot of time was spent on administrative stuff.The administrative stuff was a lot less and the actual pedagogy was more,so that was good. And even though it was in experimental mode, we wereprobably taking less time and distraction than we were doing when we weredoing it the other way.R: And in terms of engagement from the students, did you think therewas, from your perspective, a difference?P: We’ll know more in a week because the number of game, there’sgame questions on the final and we’ll see. You can be surprised - oh wowthey were really engaged playing the games, but they really got noth- theydidn’t obviously learn very much. 
My impression at this point is yeah,that they became it became more of the common framework of the course.This was almost an ideal case though because both sides of the course wereemphasizing similar things. Right I mean, David does a lot of decision theoryand making analysis and so back and forth, but I would say it definitely madea difference that way.R: Do you think that having the real time feedback and showing indi-vidual scores was a big improvement?P: Oh yeah, that’s a huge improvement. Before, there was almost nocomparative analysis at the individual level. I took back all the papers andno one ever got to see that. And in the N-person game you’re seeing groupresults and you’re missing that whole level. We don’t know what people arepicking out from there. For all I know, some people are tracking particularothers that they... That you know, you’re basically making available hugeamount of information for them to do what they like with, but yeah I thinkthat had a definite impact and certainly it added to an engagement factor.R: And now you’re playing more standard games. Is that part of thereason why? You couldn’t have played some of the games without that214feature? That is, without being able to show people their feedback-P: You can’t play standard games without people being paired anony-mously in a group. And all of that takes a way to identify multiple individ-uals. So that whole individual feedback function is crucial to playing thisgame. You can’t say you have a well-informed agent if they don’t, yknowI chose some things over a few times I don’t know what happened, maybeyknow.. and often I think frankly when we were playing the other way onpaper, we were probably giving false feedback. It seems like there’s morecooperation, but we know once you have the more or finer level of analysis,well wait no no that’s not true, we’re just looking again up there but youhave to take this run in and look over here it’s not what you think.R: What expectations did you have with regards to just getting the realtime feedback of the system?P: I think we had no firm expectations because as we began we didn’tknow how we were going to do that. So this was an open problem. In asense you know you have to give people feedback, but it’s difficult to knowhow to give a room full of people feedback working with extremely limiteddevice with no individual level feedback on it. So that cuts out.. So I thinkyou adapted that problem, solved it in an interesting way. I at first was veryskeptical that these celebrity id things would work. When I first saw thegroup sizing, I said is this really going to work? And I think that exceededexpectations. Expectations were very iffy and I think that worked out reallywell. It turned out to be a really good solution to the problem. I think itturned out, I mean the old way working on paper distracted from, I meanthere were people who just got lost in trying to figure out some one else’s,the bookkeeping got in the way. And here, instead, the problem of gettingthere thing connected was more engaging rather than, that kind of turnedpeople into the common problem rather than distracted them with their ownlittle yknow, am I doing this right, being the bookkeeper for another person.So I think there were lots of good outcomes of that, beyond expectations.R: In terms of your needs, would you say the system met your needs?P: Yea and it was Apple-ish in that way- it met needs I didn’t know Ihad, so that’s good. 
That means that, it certainly solved the problem where215not focusing on a set of problems was easy to do with a very much reducedapparatus. So now we have a much more flexible system. You immediatelystart thinking of what changes? How can that be changed and what thingsyou’d like to add to it and that’s a huge improvement.R: Was there anything about the system that was lacking to you?P: On the system side, it’s a little, the data is a bit difficult to anonymize.I’ve actually tried to do it, only because the pairing information is hard tocapture in a.. You want to go through and say this clicker gets replaced withsome yet other “player 1”, but that clicker is also the opponent in these otherplaces. I don’t know Excel well enough to do. So there are some things thatwould be nice to have. It would be nice to have an easy way to anonymizedata to make it available to students. It’s just a question of it’s in a formthat.R: It’s interesting because I get it anonymized but I change...P: Yeah, so I was going to say, it’s partly. It’s partly.. The other side is weare using it under research constraints and without the research constraints,if these were simply handles that were being used in class, then the problemwould probably solve itself. We would just use it in the raw form that isalready being anonymized. But if it’s not being used for grading and itdoesn’t ever have to be linked to something else then everything can besimple. And for all I know, people don’t mind using their clicker numbers asanonymous, but yknow they’re not like student numbers, they’re these weirdextra number that’s out there. Since I wasn’t sure what I could do with thedata anyway at this particular time it wasn’t really stopping me from doingsomething, but next time I would like to move it to “the data set is available”and we should be talking about that because that’s all a movement to opendata and so yes this data is available and anyone, let’s see who’s first to geta blog post finding something in that data we didn’t find yet and that wouldbe cool to do.R: Were there any parts that were unexpectedly useful in the system?P: Well, I think the, to give you credit, I think the whole engagementfactor that came from the way the pseudonyms were set up. Just turnedout to be fun, it wasn’t a chore, turned out to be, turned out those little216half-time exercises, fun that people were having. I think all of that wasunexpectedly good. And really I never had this wealth of data. I never hadthe problem of all this data to analyze. Before it was we had a screenshot ofthe whiteboard and no idea and it was lost to us what had happened. Andnow we have problems of a wealth of data and how to deal with that, whichis all, fits exactly with the themes of the course. Yknow given a firehose ofdata, what’re you going to do with it? Data you’re allowed to do this and soall of those questions. I mean the class, as well as David, were all surprisedand interested with the research ethics constraints. Oh, how does that work?R:Were the goals you set out to accomplish by using the system reached?P: Yea I think exceeded because we really didn’t expect that we coulddo this much. Even quite experimentally we got way more done this termthan expected. Exceeded.R: What are your thoughts of using multiple displays in the classroom?P: I think I’m sort of being converted to that. I never thought... I mean,I had it as a luxury in earlier versions of the course, hardly used them exceptto duplicate stuff. 
I think I don’t even remember a time, maybe once weused a solid object viewer on one side. But now, this made me appreciatehow lacking this is and practical concerns come up for next term whereI’m not sure whether the person I’m teaching with is actually comfortabledoing things like the quizzes. If we had multiple displays, I would simplyrun the quiz software on my machine and run the quizzes for him. Andwith a single display this is, I mean, the clicker software is not made to beused by two instructors. They didn’t ever think of that. So literally, Davidhas to send me his session records and then I have to blend them withoutgetting it wrong so it’s fooled into thinking one instructor. And when you’reconvincing someone to use clickers for the first time and then to say “oh, bythe way,” Well this is open source software, I have a mac but I don’t runOS X, but it’s got to be the instructor running it because yknow it wouldbe really nice if you simply you said “fine, there’s another display, I’ll justrun the module in your place. it’s easy to do” So I Think that bottleneck ishuge and has helped me appreciate, certainly for any bigger class it would beessential. I mean you can see it in this class. You can see how primitive the217single projector rooms are. I mean I don’t know if running two projectors inthat room is easy, but it’s a reminder that probably the base standard thatpeople now should be asking for is yknow two displays.R:How do you think having the multiple displays would affect the games?P: I don’t know I think it would probably open us up as thinking aboutit from a development standpoint to think what kind of more graphicalfeedback you could provide. I think we’re limited in thinking what kind offeedback we can provide. And I think it would probably open up our designspace. Now maybe more important is, now what would a full feedback gamecontroller, I mean that’s another space. This shouldn’t take all our thinking.What I really like about this project, it made a huge improvement on a verylimited, but in-place technology and that’s I think a really cool thing tohave done. I mean it’s there, and people want to use it and you can takeadvantage of that and you build on it and you don’t say “if only we had...”but yeah, given that we’ve got a non feedback providing controller, there’sinteresting questions about what else we could show people and how thatwould work. Well you and... I gather you’ve already worked with Kelly, howcould more group on group stuff work.R: Thanks for your time. Do you have any final comments or anythingelse you wanted to say?P: No, stay with it. No I mean, a thank you it’s been a great project towork on with you. Yeah, I think you took to working with the class reallywell and that made a difference.R: Thanks so much for the interview.218Appendix ETerm 2 Interview TranscriptThis appendix presents the full interview transcript from the interview heldwith the instructor that used Rhombus during the second term. The in-terviewer was the researcher and is denoted by R: in the script, while theinstructor’s dialog is denoted by P:.R: What do you think makes playing the games pedagogically valuable?P: This is a course context that we’re not stressing proof techniques or we’renot treating game theory very formally and second we’re stressing contrastbetween rational agents and human agents, and third, there is some dis-cussion of the computational side of rationality. 
So, actually playing gamesgives people, well kind of firsthand, it lets them actually write a strategyand execute it. And secondly, it lets use model the .. I mean we’re usingthe games as models of social interactions, so that lets us actually look ata game in a real social case rather than further modelling the social sidethrough further artefacts. That’s probably the main pedagogical advantageis that it shows the game in a real social context. You’re playing against realresponsive, but not identifiable other agents.R: Would you say you had that opinion before you used the system onyour own?P:Well I think that when you use it on your own, there’s a bit of plus andminus, it’s a little bit more flexible because you’re not ... well we’ll cut thisoff here, or I mean, probably a little bit more control over the situation so you219can use more flexibly. On the other hand, you’re operating the system and sothere’s further distraction. So I think maybe there’s a bit more interactionaround the games. There was no third party; you weren’t there, so there’sa bit of that Westinghouse, negative westinghouse. You’re not there, whenyou’re there it’s like stop and take. I would think it’s probably a little looser,but I didn’t measure. But you know that’s hard to say because it’s also thesecond time you do it, some of the modules are now pretty well tested, therewere things we expected as instructors, so there were a lot more moves tomake, so I think probably that made more of the factor, just that we hadmore experience with it.R: But you still had the same pedagogical goals last time that you usedit?P: Yes, yes. Same course, same style of teaching.R: Was there any evidence in the game or how they played the gamethat? You said the main goal was kind of this social experience from playingthe game actually, and do you feel that they actually achieved that fromplaying the games? What you wanted them to?P: Yea I think, seeing the course a better appreciation of the rationalityas a way of looking at how agents interact, as a theory of interaction. You’llsee people saying, ‘but yeah, but if humans were playing, would we expect...’Yknow if humans were interacting with a robot, a car, would we expectsome of these, some other problems come up? Yes, I could see the engineerlooking at just the rationality of the accident, with, and so I think thatcontrast between human agents and machine agents is enhanced by playingout the games, I think that’s true.R: I was wondering if you could describe the preparation you did forusing the system before the class.P: Before the term, we set up, we made sure there were enough pseudonymsfor the [inaudible], we made up the, we did the term wise stuff. In the begin-ning of the term, we got a list of clickers and exchanged,... I don’t rememberthe details, I think you actually probably matched up those clickers numbersI sent you to set up the... so I entered with a prepared group of, a preparedsystem. Um, basically I think I, we ended up with more games in the system220than I, we had kind of a surplus, we had to fork some things, change somethings, so for each of the times I used it I went through and made sure Iremembered which game we were actually going to use and what its detailswere and reviewed it for myself. I think a couple of times I actually put itup to make sure I understood it. 
So I did a bit of a rehearsal, maybe 10minutes.The only time I had backtrack because I screwed things up was thatI forgot that it made a difference whether you use the system first or theclicker system. And so I ended up with whichever way I did it wrong andsomething not responding. I think I must have done the games first andthen the clickers didn’t respond and I just didn’t think of the expedient ofunplugging it and plugging it in again. And this time my teaching partnerwasn’t a mac person, and so nobody thought of it on the spot and so weended up doing a paper quiz.R: So when you did the rehearsal did you use your own clicker for that?P: I used the built in, the automatic one.R: /debug?P: Yea, yea, the debug screen. So I just ran the debug screen.R: And so, was this preparation from what you did last term?P: Last term, I think, we often met before class to make sure we had theright game. It was different, Last term it was more, you were the chef and Iwould make the order and put together the dish and we’re in a cafeteria andI had to make sure is this. Oh right and this is called and... And so whatwould happen is I had re-written the, I had perhaps changed some detailof the game in the lecture and I want to make sure we have that very still.Because there was a couple of versions of the game and I wanted to makesure there was a version that matched that.R: So then let’s talk about the experience of using the system in theclassroom. What was the procedure that you followed to get the gamesrunning? What was a typical, if you were going to play games, what did youdo?P: So, typically there was some preparation. Typically they were given asheet or some instructions the time before and as before. So that influenced221the design of the course a bit. There was another factor saying lets do thecourse as weekly modules, because we’re two instructors because I wantedto be able to say on Tuesday, Thursday there is this game coming up ratherthan two weeks. So that typically they were either given a sheet or remindedof the last frame or frames of the lecture that present the games and theiroptions and remind them that it was coming up. They would have to eitherthink out or write out a strategy for the game. So there was that muchpreparation. And we had talked about experimental game theory in thecourse, so I’d explain the reason for this is I want to make sure you reallyunderstand the game and actually let’s, maybe a couple of times, we actuallyplayed out the game with regular clicker mode, just so they’d actually. yknowthis is the ultimatum game, it has this form, somebody does this, somebodyelse does this. And, so they had done that preparation and been given oftena handout (maybe 2 out of 4 times). And then that the day, I guess typicallywe started with the game. Trying to think, could look at the slides, but Ithink typically because of the awareness that it wasn’t, it’s not totally easyto drop in and out of the.. well the reason is that if you dropped in and outof it, you’d end up with two clicker files, as the clicker software is backgroundand stuff, but it’s unforgiving, especially with two instructor’s using it, it’snot designed for that. So and anyway we were sometimes [inaudible] leavingfiles, and I think I didn’t try it, but I didn’t want to end up with two filesthe same day because I didn’t know what Connect would do with it. Andthis isn’t, this is, it’s kind of background to.. So um, I always wanted to putthe games either in front of or after the, so I think typically we started withthem. 
And the only bad thing about that in terms of prep was that youhave this trickle in and it’s not easy to have people. It’s not the section ofthe class you want people drifting in. So typically I would load something infront of it, some administrative thing just to give myself a bit of a buffer. Ithink I found them that once they’d done it, once they got it pretty quickly,so then we’d through the sign-in thing and blah blah and put the game up.I think the first time it was probably a little shaky because I was using thekeyboard. and I think you had said remember the controller will do this.Once I was using the controller to step through stages, then it was easy and222I was back in control of the pacing.R: So you said when they did the, did you use the Grid program to startthem off, or did you just go to the program you wanted them to play andjust had them check in and then start playing?P: I think I immediately had, yea I didn’t even think of the Grid program.R: OK, and so then when you said on the previous day you sometimeshave them kind of play the game. Was that with the system or was thatwith just the regular clickers?P: No, without the system. One of the firm rules of experimental eco-nomics, you have to make sure that people actually understand what thepayoffs are and how the game is played, and so I would typically end withthat game they were going to play and maybe playing it just simply if whathappens if here’s your choices? You can choose C or D and let’s just trythis. I would see what people will think the thing to do is and talk out.People saying, “ohh I get it. No matter what I do I get screwed” So I hadthat preliminary discussion as a way to motivate people to understand thepayoffs. I should say the other thing that was, there was something differentthis term, which was last term when they played the games at least, theyat least were under the impression that the games would count. This termwe never, never, I mean there was some vague discussion that maybe therewould be some bragging points or something, so there was a difference. Andthat’s a difference and it does reflect on experimental game theory method-ology, because the methodology says it really should count. Otherwise, it’skind of under motivated. There seemed to be a lot of intrinsic motivation,but probably on the methodological side that’s something we would yknowshould... I mean I think because right there I . If I had this from you onsome other basis, not that you were in the middle of experimental work withit, I probably would have figured out a way to use it for grades, but I didn’twant to mess around with the BREB. So there’s a kind of conservatism thatcomes with it being a research project. I think that once it’s another kindof use you’d want to give more attention to, maybe make that a discussionpoint in the class, which is what’s a fair way to motivate people to...R: I’m just wondering is the reason that you used the regular clicker223software because it was easy to just do right then and would take too muchtime to set up the game? Last term we did a prep game the day beforeP: We did. I think it was simply a matter of not doing a change-over.And also, they knew how to use this software, so that wasn’t the question.The question was, really the prior one, aside from all of the extra stuff, doyou understand what makes this game different from the other games weplayed.R: And the basic clicker technology was enough to get them thatP: Yea, it was enough. 
You could have done it, you could have askedpeople to raise their hands, and so there wasn’t any special reason to matchthem up and have a random assignment or anything.R: So how many times in the term did you end up using the system?P: We used it four times. If you want the handouts I can dig them upand send them to you.R: At the start of the term I gave you some tools so that you could assignthe aliases to the students, I think there was an excel document, there wasa script. I was wondering, it just left wide open the question of how you aregoing to tell the students what their aliases are, and I’m wondering how youended up doing that?P: So we did it informally. We said play around with, which fit withwhat we were talking about, we were talking about interactive robotics atthat point, so just figure out a way to use feedback in this situation and Ithink everyone figured out what they were. But it’s a small class, 25 or 28at most normally, so I didn’t have a question, I mean I think the questionwould, .. we could get away with it being a small class and people figured itout, but if we were thinking, there’s talk of using it next year in 200 which isquite a bit bigger class, like 120 or something, I think you probably couldn’tdo that, we’d have to figure out a way. I think it’s, with a small prototypeyou can get away with, we had that discussion, but I didn’t, I don’t think Isent out anything or announced it in any, and there didn’t seem to be anyonesaying I’m forgetting who my alias is.R: And so, did you use the grid program for them to play around.P: Yea I think it was at that point, at the beginning, we had the intro-224duction to it, we used the Grid program.R: So just by them moving around,... P: Yeah, people figured it out. Wehad a discussion about that and it was interesting that you could do thatand people were trying to figure out what things were responding to what,but it was a particularly tolerant group given the subject matter. I don’tthink it, now that we’ve talked about it, I don’t think that if we did thisnext year it would work with 120 people. There’d be chaos.I think the only user constraint due to the way the user is designed is theoriginal plan was to have some people write scripts and have someone elseinterpret them and we figured out that was very hard to do. You could giveyour script and your clicker to someone, but they wouldn’t know your alias.You could give your alias to someone, but ... So that was, so we realizedthat on the fly. So we said, okay interpret your own, but that was too badbecause in being able to write a strategy that somebody else could interpretis a good skill and a check that you’ve actually written it rather than justfilling in “do whatever is smart to do” will work for yourself, but not for yourbuddy, but it also fits in a tradition of the Turing machine and it’s a scriptinvolved. That was I think one constraint, it would be nice if there was away to swap them without ... but the design works for everything else, butthere is where it pinches.R: Hmm, okay. I was wondering how confident you felt using the systemon your own?P: I think we didn’t get into any place where I thought oh, I mean maybeless than fully confident that work was saved. I think it always was, I thinkthe only time I exited too fast and didn’t get the summary, just got thestage result, but enough that I could reconstruct them. So there maybe ifthere had been some feedback that said “Stage 2 saved, Stage 3 saved, FinalResult saved”. 
I think that was the only lack of confidence.R: Ok yeah that’s a good point.P: Because it warns you not to leave, but at some point you can leave.I thought oh my god I have to leave this running all the time. I think I didleave the script always just sitting in the terminal window, and I just backedup and ran it every time, but that was just a simple thing to do. And this225is maybe the best case, I wasn’t trading off with someone else in the class, Imean we weren’t, I had the session each time and could arrange the lecturearound it.R: That’s a good point. I mean it saves after every round, but it’simportant I guess to give you the visible feedback that it’s saved.P: Yea, well it would be good I think, I think that would be a good.Always helpful to. I know I think of this because we’re in the middle ofdoing 4 people times 2 countries worth of taxes with very different interfaceson the two and it’s “has it saved this? has my daughter’s file?” And thatreally is a big source of..R: How was the experience in the classroom from your perspective dif-ferent this term from the last term, given that you were the one running theshow?P: I think maybe a bit more interaction because it’s less intimidatingfrom their point of view before there was like three senior people in the frontand them, but it also intermixed with a different person who had much lessinterest in game theory. Before there was a bit more of a general kibitzing,“Oh wait, is that really this?” because David a lot more interested in thegame theory, and so that may be overshadowed. It was a slightly differentenvironment, but it worked quite relaxed. It wasn’t an intrusion. I think theclickers are familiar and the interface is fun, so between those two things.R: Can you describe the student engagement level this term comparedto the previous term?P: Yea that’s probably totally overshadowed by the difference in thecourse. I think, I mean just why it’s overshadowed is because I think Davidis a much more jump off and have an objection and raise a problem in everyclass and other people pick up from that and think “oh that’s okay to do”and this term was much less of that. And so I think it carries over in allaspects of the course. So I think the games played out well, and we got somereally interesting results, but there was maybe less of that interrupting anddebating and that.R: Just the characteristic of the class was kind of different.P: Yea, I think it really did probably overshadowed, it was kind of a big226difference in style between the two terms.R: That’s a good point. Okay, so then almost finished, would you saythat the system kind of met your needs for the term?P: Oh yeah, it exceeded, exceeded my needs. Now we’re even able to pulloff results from the and then put them up on the website and having donethat first term it was quick and easy second term. It fully met and indeed asI said, I’m going to end up talking about it in Norway because there’s someinterest between classroom practice and game theory. So I’m going to givethe talk and feature the software there.R: What issues arose during the term with the system?P: The only two were my being confused about how to get the clickersystem restarted. I deviated from my whatever maybe before I always didit last and anyway I turned it around and didn’t realize. The difference wasthis, last term you ran it on your system and sent me files. This term Iwas running it on my own system and just forgot because I’m running it onmy own system, the clicker interface needs to be reset. 
The second was notknowing once whether it did save. I think because we were set up to runthree stages and decided after two that we had enough results, so there atthat point I stopped and then didn’t get the grand total.R: So then it sounds like that would be one of the aspects of the systemthat was maybe confusing, not clear. Where there any aspects?P: Well the data files, they’re interpretable, it just takes a bit of.. That’snot a confusing part, it’s just a raw interface.R: Can you elaborate a bit?P: I would confuse myself forgetting that the pairs, the way the pairs linedup, that was really getting data on each of the pairs, but that duplicatedother data. Dumping data like that into a spreadsheet and making senseof it is always... I think that’s not really a problem with the interface, it’sa prototype and that kind of gives you the data. It doesn’t feed into ananalytics program. You can’t really complain about that. And I think allthe data I wanted to get and all the presentations I wanted to get from it, Icould do in fairly short order, it wasn’t “oh my god I can’t figure this out”.R: Yea I kind of just put everything there.227P: What it did is it just invited me to make mistakes because now it’s alittle wider than the screen and you have to say, it’s this and that and this,because if you pick up every column you’re going to mix two different - theplayer and the various opponents together. So you have to do these weirdskipping over things and of course I might, I did sometimes get the columnsthat I ..R: Do you think it would be preferable to have.. because I think I didthe partner score as well as the is that what you’re saying?P: Yea, so you know, what you’ve got are basically non-homogenousrows, which means you can’t just do sums on them because some of thoserows, some of those columns.R: So it’s kind of just a pain in the butt to deal with, I see what youmean.P: Yea, so I mean it’s enough. So you could go through and remove allthem, but .. so that’s, I mean, a second look at that data would just showme one player’s homogenous would be easier to write excel stuff on.R: It’s tough because there’s a different partner each round for some ofthe games.P: Yea and it’s important for verification so you can take a look and saythere were really different partners. I think it can have too little informationand then you’re stuck and you really can’t say for sure that this is doing whatit’s supposed to do, or you can have too much and you’ve got to do somesifting and I think for some experimental product it’s certainly better to erron the side of too much. Because it would be really a pain to say maybe yougot paired with 10 different people, but maybe more like 8 because maybe,gee I don’t know, I have no way to tell who you were paired with. So I thinkfor things we talked about in the design of it, that it kind of should be a goodexperimental platform, that means you can answer those questions. I meanit’s much better to have, I’ve complained about the tables I’ve designed inthese experimental data because you can do an audit on it, it really does havean audit trail. It’s a pain, but it lets you say find out that you’re countingthe actual responses from people. The mistakes it makes are virtuous. Theawkwardness is kind of a virtue.228R: Would you say there’s anything missing or lacking in the software?Like you wanted to do something and you weren’t able to?P: I think we fixed in the design what displays of results you get andyou might want more flexibility than that I’m not sure. 
So we get a graphof results as we go along, but there are somethings I’d say, I’ll go processthis and show you this other thing. You’d have to really go through casesand figure out if there are things standard enough that you’d want to beable to show them. So now we’re saying, since it was humans computersthen computers humans, so it’s a little harder to tell just looking at whatwe’ve got whether the humans or the computers did better and did they dobetter consistently over the five rounds. But you can’t say in general whatthe other displays you’d like, but my suspicion is that probably one couldcook down a few other display options for end of round data.R: Yea I see what you’re saying. It would be nice if there was kind of asuite of components you could choose from to customize the results screen.P: Yea, yea, I think it seems like that, down the road a bit, that “Oh,I’d really like to look at this this way” and to have that ...R: And what would you say, which features or aspects of the system werethe most valuable for you?P: I think that you basically solved the random assignment to pseudony-mously other player really worked well. I mean that cracked the problemof how do you actually do this in the classroom in a fun way. It was notN40520 versus... They were personified enough that people had the sensethat they were actually playing a definite other person. I think that wasreally a good feature that worked, and kind of in a fun way, so that peopleweren’t irritated by. I mean the clicker software is sometimes annoying andit avoided that annoyance.R: So last term people were maybe a bit excited about being the celebri-ties, was there a similar reaction this term?P: Yea, yea I think. It’s also a prop in an interactive performance. It’snot just collecting data, it’s not putting people in a room and collecting datafrom them it’s actually a part of interactive stage performance, a classroom,so I mean typically yknow we’re waiting on somebody and yknow you say229“Come on Tony or Barb or Bond” and why are some of them celebrities andsome of them the actors? Why are some actors and some characters? Therewere things you could play with a bit without really nagging how come thisperson for all I know for very good reasons takes always 35 seconds than10. And so I think that worked out really well. So that design choice wasgood, it gave you a bit like a muppet thing. You could make fun of that themuppet is blue without anyone being yknow.. Do you feel nvidia(?) is beingpaired up with and that kind of stuff and nobody is, you’re not really, you’retalking at versions of people that aren’t. So I think that was a good thing,and numbers wouldn’t do it. As we’re tightening it up in an engineering wayand you’re going that style, I think it would be a mistake because partlywhat you’re doing is, it’s a prop and it works really nicely as a prop.R: That’s interesting. So do you think that say we used shapes, wouldit be totally different?P: As it turned out, with this experience, I think that would be reallydifferent. There might be problems extending this to larger groups. Andyou could imagine somebody with a principle complaint of “Why did I get aracially aberrant...” or there were no people of my gender. You could imaginestuff like that, but I’m really glad we didn’t imagine those objections andthen make it either more engineery or more bauhaus bland. It had a littlebit of edge to it, and the thing is that’s good, it’s a performance. 
It’s goodto have a little weird thing that you can focus on because partly what you’redoing is, between rounds and stuff, giving people other things to come onfolks. So think of whatever, it’s Craig Ferguson, and it’s weird chicken jokes.I mean, you’ve got a crowd that you’re trying to involve and I think theinterface worked really well for that.R: This term, was there anything that you didn’t expect to be useful butended up being useful or you didn’t realize how useful it would be?P: I think we got a sense of that last term. It continued on. It was fairlyfun to, it wasn’t, a mixture of it’s not a nerve-wracking thing to use, sayanimations in PowerPoint. Think of it often in a presentation and “oh mygod, I have to show a movie in the middle, will the connection work, or doI need a local copy?” and it didn’t have that feature. It was run locally,230it didn’t need, it turned out that room had a terrible internet connectionand it turned out running it locally and quite secure, yknow use obviousconnections once you understood the clicker thing and on the other hand itwas fun.R: And you mentioned that you found using the clickers to be morecomfortable than clicking and using the keyboard or whatever?P: Yea, I use a Macbook Air and it’s tiny. Y’know, just rememberingwhat the keymappings are because there’s three pieces of software in play.Typically I’m using PowerPoint, the clicker stuff and your thing and whichones want this? Is it return or space or? Or there’s VLC, or whatever allthe other things. The point is that with that in hand, it’s very easy to clickthrough it and you can walk around because then you’re walking around andsometimes you don’t notice this cell is really being ignored by people, what’sgoing on here? That kind of stuff. That worked well.R: Would you say overall that the goals you set out to accomplish werereached while using the system?P: Yea I think that more than reached because it also was suggestive ofthese new problems and further developments and using it in another course.R: What are these new problems?P: Led me to think more of how would you really try to crack the mo-tivation problem? What would be fair ways to grade games given that theyhave this random element? I think that it works so well makes you thenwant to use it further and you’d like to be able to say to students, yeah youcan propose a game and we can test people on it, but that would open upinteresting research ethics problems. I think that’s the sign of a good, morethan of “argh how can we get it finally to work next term?” it’s rather like“Oh, what new things could we do with it?”R: You said you’re using it in COGS 200 or thinking about it?P: There’s some thought of, I’m switching to teaching COGS 200 forvarious reasons but, one thing about COGS 200 is that the assessment hasbeen a little bit over, I mean too much writing, maybe not enough the mostuseful kind, and so what I proposed taking the quizzing the system I usethere which would have people have clickers. So once people have clickers231then, I would like to also use the game software on them. So the people sofar I’ve talked to about it, they’re happy about that, I mean at least one ofthe people that would replace me in 300 wouldn’t want to do much gametheory and people like that there is some game theory done in COGS, sothat would be mean... But that will change. Doing it with 120 second yearstudents than 30 third year students. They’ll be more bitchy problems “ohhhhow come my name is this? Or I forgot how to do this? 
I thought this?” Ithink probably for that the assessment problem is the more important side.You’re going to need to have participation grades or just that you played ornot or whatever.R: Do you have suggestions or directions that this system could move infor that expanding. You mentioned maybe having options for the differentresults screens, but beyond that do you have other suggestions that could bedone to improve on?P: I think that if anyone less geeky is to use it, I think probably theoutput of it is going to have to be a little more, a little simplified maybe.R: Like a report generated or something?P: Yea, I mean maybe there’s a complete report and there’s just byplayer because it would be handy if someone could generate. I think that’sprobably... The grading thing just opens up other stuff, I mean because ifyou actually used it with some kind of grading thing you’d also like it finallyto interact with the real clicker software, but that’s probably not worth it. Imean it’s not intended to be a grading system, that they’re separate maybeis smart.No I mean actually I haven’t sat and tried to put a game in it fromscratch, so that just says it. So probably if you’re going that direction itwould have to be at that level of interface.R: Yea, certainly that’s not ready yet.P: But I mean it’s just. You probably know about that it’s just. Soyeah.R: OK thanks. Do you have any other comments before we finish up?P: No I mean, it’s been really good partnering up. I hope that’s beenuseful to you. It’s certainly been a good experience. It wasn’t at all a cost232on using it in the classroom, it was much more an enhancement to a classand as I said I’m going to talk about it in a research setting this summer.And introduce it most likely to another, bigger class if I can next year.R: Actually I remembered I have one final follow-up question. Wouldyou prefer having a TA there operating the system? Or was the pressure nothigh enough to be significant to?P: The benefit of having someone not operate is it’s more flexible, thepacing, and that performance side works better I think. I guess depends onthe context. Aware as I am of the price of labour, you don’t want a systemwhere, I mean all the AV systems have now gone to expecting the instructorsimply to use them, right? But in another context you could imagine one ofthe students, you could imagine someone, an experimenter wanted someoneto go through a bunch of classes and try a game on them and in that caseyou would like some third party, not instructors. For the way we’ve thoughtof it as an instructor tool, I think it is better that the instructor gets to do it.The only thing is it is somewhat demanding on the analysis side. Somebodyhas to be comfortable figuring out what the data is saying.Yea I guess maybe the little slight, probably using the clicker raises ex-pectations that you have to deal with. Because the clicker could be usedby someone who never looked at the data files, just let the clicker softwarecrunch them for them. And so it’s operating in class, but it’s also a rawinterface for data.233Appendix FRhombus Instructor ManualThis appendix contains the manual provided to the instructor to guide hisindependent use of Rhombus. Note that at the time the ID Server in Rhombuswas referred to as the Clicker Aliaser.234I. Rhombus Participation System ♦1. Getting StartedRequirementsInstallationStarting the SystemStopping the SystemConfigurationClicker ServerClicker AliaserRhombus Participation System2. Registering AliasesDeleting Aliases3. 
Playing Games
Controller Interface
Viewer Interface
Game Results
Additional Logs
Testing Games
4. References

Rhombus Participation System ♦

Rhombus Participation System (RPS) is a web system that allows people to interact with each other through a shared display. It requires a Participation Server to be hooked up to it in order to receive choices from participants. In this package, the Participation Server is the Clicker Server + Clicker Aliaser, which allows participants to make choices via i>clickers.

System Diagram

The Clicker Server is the backend server that listens to the i>clicker base station and outputs clicks over a socket. The Clicker Aliaser is an intermediate server that stands between the web server and the clicker server in order to translate clicker IDs into the aliases that show up on screen. The Web Server is where RPS resides and it handles all the displays and application logic.

Getting Started

Requirements

In order to run RPS, you need to have the following software installed:

- java development kit: required for the Clicker Aliaser and Clicker Server (tested with v1.6.0_65)
- sqlite3 database: required for the Clicker Aliaser and RPS (tested with v3.7.3). Most easily installed via Homebrew with the following command:

    brew install sqlite3

- node.js platform: required for RPS (tested with v0.10.12). Download from the node.js website and install.
- grunt tool: required for RPS (tested with v0.1.9). Install via npm (comes with node.js) with the following command:

    npm install -g grunt-cli

Note: The Clicker Server has only been tested on Mac OS X (10.8+), but should work on any system given the proper HID API library for the clicker driver.

Installation

Once all the requirements have been installed, to set up RPS, you simply need to unpackage clicker_package.tar.gz to a convenient location.
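For example, a minimal sketch of unpacking the archive from a terminal (assuming it has been downloaded to the current directory; the destination is up to you):

    tar -xzf clicker_package.tar.gz

This produces the expanded clicker_package directory referred to throughout the rest of this manual.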
You will be accessing the expanded clicker_package directory via the terminal to start and stop the system.

Starting the System

Open a terminal at the expanded clicker_package directory and run the start.sh script found therein.

    ./start.sh

This script does the following in order:

- Starts the Clicker Server at localhost:4444 by default (See Configuration - Clicker Server)
- Starts the Clicker Aliaser at localhost:4445 by default (See Configuration - Clicker Aliaser)
- Starts the Rhombus Participation System ("Clicker Web Games" in this package) at http://localhost:8000 by default (See Configuration - RPS)
- Starts the Clicker Aliaser web interface at http://localhost:8008 by default

Typical Output Example

pbeshai: ~/Workspace/research/clicker_package $ ./start.sh
> Starting Clicker Server ...
13:23:07.820 [main] [INFO] ClickerServer - Starting Clicker Server...
13:23:07.830 [main] [INFO] ClickerServer - Instructor ID: 371BA68A
13:23:07.831 [main] [INFO] ClickerServer - Channel1: B
13:23:07.831 [main] [INFO] ClickerServer - Channel2: B
13:23:07.832 [main] [INFO] ClickerServer - Port: 4444
13:23:07.836 [main] [INFO] BaseClickerApp - Initializing for Mac OS X 64-bit...
13:23:07.859 [main] [WARN] BaseClickerApp - Failed to start base station.
13:23:07.872 [main] [INFO] BaseIOServer - Loading filters...
13:23:07.882 [main] [INFO] MultipleInstructorFilter - Enabled with 2 instructors: 371BA68A, 0D93B52B
13:23:07.883 [main] [INFO] BaseIOServer -   OK MultipleInstructorFilter
13:23:07.892 [main] [INFO] BaseIOServer - Successfully listening on port 4444
> Clicker Server now listening on port 4444...
> Starting Clicker Aliaser ...
13:23:11.131 [main] [INFO] ClickerAliaser - Initializing with port 4445, clicker server localhost:4444
13:23:11.153 [main] [INFO] BaseIOServer - Loading filters...
13:23:11.169 [main] [INFO] ClickerClient - Client 1 connected.
13:23:11.242 [main] [INFO] BaseIOServer - Successfully listening on port 4445
> Clicker Aliaser now listening on port 4445...
> Starting Clicker Web Games ...
> Starting Clicker Aliaser Web ...
Running "socket-server:dev" (socket-server) task
13:23:12 - info: app webInit
13:23:12 - info: initializing api_handler
Running "socket-server:dev" (socket-server) task
13:23:12 - info: initializing api_handler
13:23:12 - info: api initialized, using dbConfig file=../aliaser.db, create=../sql/create.sql
Listening on http://127.0.0.1:8008
   info  - socket.io started
13:23:12 - info: app webSocketInit
13:23:12 - info: initializing websockets
13:23:12 - info: websockets: initializing file=../aliaser.db, create=../sql/create.sql, port=4444
13:23:12 - info: app webInit
13:23:12 - info: initializing app_results
13:23:12 - info: initializing api_handler
13:23:12 - info: api initialized, using dbConfig file=app.db, create=/Users/pbeshai/Workspace/research/clicker_package/clicker_web_games/framework/server/api/../sql/create.sql, filename=app.db
Listening on http://127.0.0.1:8000
   info  - socket.io started
13:23:12 - info: app webSocketInit
13:23:12 - info: initializing websockets
13:23:12 - info: websockets: initializing filename=app.db, port=4445

Stopping the System

To shutdown RPS, in the terminal window that is running the start.sh script, kill this process by pressing CTRL+C at the command line.

Configuration

All of the servers support a small level of configuration to change basic things such as the ports they run on and databases they use.

Clicker Server

To configure the port the Clicker Server runs on as well as the i>clicker base station properties, edit clicker_package/clicker_server/config.properties.
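For reference, a sketch of what config.properties might contain when using the default values listed below (an illustration only; the file shipped in the package may contain additional entries, and instructorId should be set to the ID printed on your own instructor remote):

    port=4444
    instructorId=371BA68A,0D93B52B
    channel1=A
    channel2=A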
The following keys are supported:

- port (default: 4444) - The port the Clicker Server runs at. If this changes, update the Clicker Aliaser accordingly.
- instructorId (default: 371BA68A,0D93B52B) - The clicker ID of the instructor's i>clicker. Multiple instructors are supported by using a comma separated string. The instructor's remote can be used to toggle accepting choices and to move between states. It can be disabled via the Instructor Controller button in the RPS admin.
- channel1 (default: A) - The first letter in the frequency channel for the clicker base station.
- channel2 (default: A) - The second letter in the frequency channel for the clicker base station.

Clicker Aliaser

To configure the port the Clicker Aliaser runs on as well as which Clicker Server it connects to, edit clicker_package/clicker_aliaser/config.properties. The following keys are supported:

- port (default: 4445) - The port the Clicker Aliaser runs at. If this changes, update RPS' configuration as well.
- clickerServerHost (default: localhost) - The host where the Clicker Server is running.
- clickerServerPort (default: 4444) - The port the Clicker Server is using.

Rhombus Participation System

To change the port at which RPS connects to its participation server (in this case, the Clicker Aliaser), update clicker_package/clicker_web_games/fwconfig.json. Change the participationServer.port key accordingly.

To change the port where RPS runs (default 8000), open clicker_package/clicker_web_games/Gruntfile.js and change the key socket-server.dev.options.port.

Registering Aliases

A simple way of managing aliases is to go to the Clicker Aliaser Web Register Page at http://localhost:8008/register (henceforth referred to as the Register Page). Note that the system has to be running for this to be accessible.

To register a single alias, go to the Register Page and under Manual Registration enter the clicker ID in the Participant ID field and the corresponding alias in the Alias field, then click Register. If successful, the new alias will appear at the top of the table on the right side of the page.

The simplest way to register a bulk amount of aliases is to use the companion Excel file (ClickerAliases.xlsm) to generate SQL (output filename from the Excel macro is aliases.sql). With the SQL handy, go to the clicker_package/clicker_aliaser directory and run it on the aliaser database as follows:

    pbeshai: ~/Workspace/research/clicker_package/clicker_aliaser $ sqlite3 aliaser.db < /path/to/aliases.sql

You can verify the aliases that are currently registered by going to the Register Page and reviewing the table on the right side.
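For the bulk workflow just described, aliases.sql is simply a list of SQL statements piped into sqlite3. A purely hypothetical sketch is shown below; the real table and column names are defined by the create.sql script shipped with the Clicker Aliaser, so rely on that file (and the Excel macro's output) rather than this example:

    -- hypothetical table/column names and sample values, for illustration only
    INSERT INTO aliases (participant_id, alias) VALUES ('00AA11BB', 'Bond');
    INSERT INTO aliases (participant_id, alias) VALUES ('11BB22CC', 'Barb');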
Toggle through states by clicking the Next State and Previous State buttons. You can alsodo this on the instructor’s remote by pressing C for next state and D for previous. Be sureto reach the final state to get the final output log generated.Controller InterfaceController InterfaceThe Controller Interface is the main interface the instructor/admin uses while running241applications/games. In this interface you can do the following:select the app you would like to runreview the list of open Viewersopen new Viewerstoggle on and off accepting choicestoggle on and off the instructor’s controller. If disabled, the controller functions as a normalparticipant’s controller. If enabled, it uses the following mapping:A Toggles on and off accepting choicesB NothingC Goes to the next state of the applicationD Goes to the previous state of the applicationE Nothingadd or remove a countdown timerconfigure a selected application. When updating a configuration, whatever is entered in themessage field will be saved to the logs of the game for future reference. It exists so theinstructor/admin can leave a note for why they changed the configuration.review which participants are actively playingadd in latecomers through the queue. Note that not all states support adding latecomers.Play states will add in latecomers automatically when they are loaded, but if somebodyjoins in the middle of a play state, they will either have to wait until the next play statebefore they are added, or they can be added in manually via the Add New Participantsbutton that shows up on demand.242Controller Interface (annotated, no app selected)243Controller Interface (annotated, Prisoner’s Dilemma (teams) selected)244Controller Interface, Configuration (Prisoner’s Dilemma (teams) selected)Viewer Interface245Viewer Interface: Prisoner’s Dilemma (teams) Results screenViewers can most easily be opened up from the Controller interface by clicking the Open NewMain Viewer button. If there is not enough screen space to show the instructions along with themain view, you can open an Instructions only viewer by clicking the Open New InstructionsViewer button. This is handy when you have multiple displays at your disposal. Otherwise, youmay have to resort to sharing the instructions over different media (e.g., whiteboards,handouts).246Viewer Interface (annotated)The Viewer Interface will generally have three components: the Status Bar, the ApplicationPane and the Instructions Pane.The Status Bar will always be there regardless of the application being played. It displays thestatus of choices (whether participants are able to make choices or not) as well as a zoomcontrol. The zoom control is slightly different from the built-in web browser zoom in that it onlymagnifies the Application and Instructions panes in the current window. Built-in browser zoomwill magnify the status bar as well as all other browser windows that are open at the same host(e.g., the Controller interface will be affected too).The Application Pane is where all the application views show up. Typically this involves a visualrepresentation of each participant as well as what choice they’ve made and what current statethey have (e.g., their score). Note that in games where participants are paired and the choicesof both players are shown on the results screen with their score (e.g., “AB 5”), the choice on theleft (e.g., “A”) belongs to the player whose box it is in and the choice on the right (e.g., “B”)belongs to the partner. 
The score (e.g., 5) is for the player whose box it is in.The Instructions Pane shows the instructions for the current state of the application. This paneisn’t always static as you may have different instructions for different states (e.g., playing vs247results, or playing as role A then role B).Game ResultsAll logs and results from game play can be found in the directoryclicker_package/clicker_web_games/log . The contents of that directory are described below:Filename Descriptioncoin-matching/ Directory containing the results of the Coin Matching Gamelogger.js Configuration file for RPS logging (should not be modified)pd/ Directory containing the results of Prisoner’s Dilemmapdm/ Directory containing the results of Prisoner’s Dilemma (multiround)pdn/ Directory containing the results of Prisoner’s Dilemma (N-person)pdteam/ Directory containing the results of Prisoner’s Dilemma (teams)q/ Directory containing the results of the Question appstag-hunt/ Directory containing the results of Stag Huntultimatum/ Directory containing the results of the Ultimatum GameNote that intermediate logs are saved after each round in the rounds  subdirectory. Forinstance, to find an intermediate log for the Ultimatum Game, you’d look inclicker_package/clicker_web_games/log/ultimatum/rounds . The final results of the game arestored at the top level (e.g. clicker_package/clicker_web_games/log/ultimatum ).Additional LogsThe system logs every click and every action in the games. To find more verbose logs, look inthese locations:248Path Descriptionclicker_package/clicker_web_games/log/server.logThe verbose output from theWeb Server (RPS). Game statelogs will be shown here. Alsoavailable in JSON form asserver.log.jsonclicker_package/clicker_server/log/clicks.logThe log of all the clicks thatcame into the base station.clicker_package/clicker_server/log/server.logThe log of all the actions theClicker Server has taken.clicker_package/clicker_aliaser/log/clicks.logThe log of all the clicks thatcame from the Clicker Server tothe Clicker Aliaser.clicker_package/clicker_aliaser/log/server.logThe log of all the actions theClicker Aliaser has taken.clicker_package/clicker_aliaser/web/log/server.logThe verbose output from theClicker Aliaser web interface.Also available in JSON form asserver.log.jsonTesting GamesSometimes you’ll want to have a test run through games without having dozens of clickersavailable. To do this, you can access the debug interface at:http://localhost:8000/m1/controller/debugThis page lets you add in Web Clickers that will simulate regular clickers in games.ReferencesSQLite http://www.sqlite.org/Homebrew http://brew.sh/node.js http://nodejs.org/Grunt: The JavaScript Task Runner http://gruntjs.com/249
