UBC Theses and Dissertations
On creating a student model to assess effective exploratory behaviour in an open learning environment. Bunt, Andrea, 2001.

Full Text

On Creating a Student Model to Assess Effective Exploratory Behaviour in an Open Learning Environment

by

Andrea Bunt

B.Sc.H. (Computing and Information Science), Queen's University, 1999

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

The University of British Columbia
August 2001
© Andrea Bunt, 2001

In presenting this thesis/essay in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
2366 Main Mall
Vancouver, BC Canada V6T 1Z4

Abstract

Open learning environments give users a high degree of freedom and control to explore. This freedom and control is beneficial for some students, resulting in a deeper understanding of the material than they would gain through traditional means of instruction. For others, this type of environment is problematic, since for various reasons these students are not able to explore effectively. One way to address this problem is to augment the environments with tailored support. To provide feedback tailored to the student's difficulties, the environment must have some way of monitoring and assessing her exploration. This thesis investigates the creation of a student model that assesses the effectiveness of the student's exploratory behaviour.
Monitoring user behaviour in an open learning environment is difficult since there is typically little information available to the model to make its assessment. The model can view which items the student experiments with, but does not have direct access to the effects of those experiments on her understanding of the domain. As a result, how to model effective exploratory behaviour has not been extensively researched.

The Student Model in this thesis has been implemented and evaluated in the context of the Adaptive Coach for Exploration (ACE). The model monitors the student's exploration of ACE's activities to generate an assessment of how effectively the learner is exploring. Using this assessment, ACE's Coach provides tailored feedback to guide the student's exploration process. To handle the large amount of uncertainty present in the modelling task, the Student Model is based on Bayesian Networks.

The features of ACE's Student Model have been developed and refined using two evaluations of ACE with human subjects. The first evaluation was used to evaluate the effects of including tailored support in an open learning environment, and also provided insight into ways to improve the Student Model's preliminary design. The second evaluation tested those improvements. Results of the evaluations found that both the frequency with which students accessed the tailored feedback and the number of activities that they explored effectively (as determined by the Student Model) were positively correlated with learning.
Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
1.1 Intelligent Tutoring Systems and Open Learning Environments
1.2 Problems with Open Learning Environments
1.3 Student Modelling
1.3.1 Types of Assessment
1.3.2 Bandwidth
1.4 Thesis Approach: Modelling Effective Exploratory Behaviour
1.5 Thesis Goals and Contributions
1.6 Outline

2 Related Work
2.1 Support in Open Learning Environments
2.1.1 Cognitive Tools
2.1.2 Tailored Support
2.2 Student Modelling Using Bayesian Networks
2.2.1 Examples of Bayesian Student Models
2.2.2 Drawbacks of Bayesian Networks

3 ACE
3.1 The GUI
3.1.1 The Machine Unit
3.1.2 The Arrow Unit
3.1.3 The Plot Unit
3.1.4 Tools
3.2 The Student Model
3.3 The Coach
3.3.1 Curriculum
3.3.2 Hints
3.4 The Knowledge Base
3.4.1 Function Attributes
3.4.2 Function I/O Concepts
3.4.3 Relevant Exploration Cases

4 Student Model - Version I
4.1 Uncertainty in the Modelling Task
4.2 Structure of the Student Model's Bayesian Networks
4.2.1 High Level Description of the Model's Bayesian Networks
4.3 Network Design
4.3.1 Static vs. Dynamic
4.3.2 Machine/Arrow Units
4.3.3 Plot Unit
4.4 Evidence
4.5 Conditional Probability Tables
4.6 Implementation

5 ACE Study
5.1 Study Goal
5.2 Participants
5.3 Experiment Design
5.4 Data Collection Techniques
5.5 Results and Discussion
5.5.1 Effect of ACE on Learning
5.5.2 Effect of Student Characteristics on ACE Usage
5.5.3 Subjects' Perception of ACE
5.6 Student Model Accuracy and Limitations
5.6.1 Knowledge Levels
5.6.2 Exploration of Relevant Exploration Cases
5.7 Proof-of-Concept
5.7.1 Case 1: Passive/Timid Learner
5.7.2 Case 2: Failing to Uncover Concepts
5.7.3 Case 3: Succeeding in Uncovering Concepts

6 Student Model - Version II
6.1 Changes to the Network Structure
6.1.1 Knowledge Nodes
6.1.2 Additional General Exploration Concepts
6.1.3 Similarity of Exercises
6.1.4 Time Nodes
6.2 Evaluation of Changes
6.2.1 Study Goal
6.2.2 Study Participants
6.2.3 Experiment Design
6.2.4 Results

7 Conclusions and Future Work
7.1 Satisfaction of Thesis Goals
7.1.1 Model for Exploration
7.1.2 Evaluation of the Model and Intelligent Support
7.2 Generalizability
7.3 Limitations
7.4 Future Work
7.4.1 Exploration of Relevant Exploration Cases
7.4.2 Extending the Model's Diagnostic Abilities
7.4.3 Formal Evaluation
7.4.4 Automated Testing
7.5 Conclusion

Bibliography
Appendix A Programming Credits
Appendix B Curriculum File
Appendix C Observer Sheet
Appendix D Pre-Test
Appendix E Post-Test

List of Tables

3.1 Function Types and their Attributes
3.2 Function I/O Concepts
3.3 Symbols for the Exploration Cases
4.1 CPT for an Example Function Node fn
4.2 CPT for the Ability to Correctly Generate the Output for a Simple Function
4.3 CPT for Understanding the Slope Concept
4.4 CPT for Understanding the Concepts in the Arrow Unit Given Exploration and Direct Evidence
4.5 CPT for Understanding Positive Intercepts Given Exploration and Direct Evidence
5.1 Experiment Schedule
5.2 Information Recorded in the Log Files
6.1 An Example CPT for an Exploration Node in the Second Version of the Student Model
6.2 Unnecessary Navigation Hints
6.3 Premature Passes

List of Figures

3.1 ACE's Modules
3.2 The ACE Interface
3.3 The Machine Unit
3.4 The Arrow Unit
3.5 The Plot Unit
3.6 The Tools Icons
3.7 The Lesson Browser
3.8 The Exploration Assistant
3.9 An Example Hint Window
3.10 A Navigation Hint
3.11 Rules for the Relevant Exploration Cases
3.12 The Knowledge Base's Representation of a Linear Function in the Arrow Unit
4.1 A High-Level Description of the Bayesian Networks
4.2 Two Different Possibilities for the Direction of the Arcs in a Network with Dimensional Variables
4.3 The Static Portion of the Bayesian Network for the Machine and Arrow Units
4.4 Part of the Dynamic Portion of the Bayesian Network for the Arrow and Machine Units that Contains Exploration Nodes
4.5 Part of the Dynamic Portion of the Bayesian Network for the Arrow and Machine Units that Contains Knowledge Nodes
4.6 The Static Portion of the Bayesian Network for a Linear Function in the Plot Unit
4.7 An Example Dynamic Portion of the Bayesian Network for the Plot Unit
6.1 A High-Level Description of the Second Version of the Student Model
6.2 A Simplified Example of the New Bayesian Network for the Machine/Arrow Units
B.1 A Sample Curriculum File

Acknowledgements

First, I would like to thank my supervisor, Cristina Conati, for all of her guidance. In addition, her financial generosity allowed me to attend a number of conferences, giving me the opportunity to share my work and ideas with others in the field.

I also feel very fortunate to have as a surrogate supervisor, Gord McCalla, a professor at the University of Saskatchewan. Gord got me started in this field several years ago when he hired me to work in the AIRES lab for the summer. His continued active interest in my research career means a great deal to me.

Collaborating with Kasia Muldner and Michael Huggett on the ACE environment was a great experience and I would not have been able to build something as substantial as ACE without them. Alan Mackworth graciously agreed to be my second reader, providing me with a number of very helpful comments. I also greatly appreciate the input I received from my proof-readers, Stephane Durocher and my dad, Rick Bunt. My dad provided me with extremely helpful pieces of advice on how to organize the thesis and highlight the contributions. Steph also deserves a big thanks for being amazingly patient during my stress episodes and for being so encouraging and caring.

Finally, I would like to thank my parents, Rick and Gail Bunt, who have both been incredibly loving and encouraging my whole life. Without their sound advice and calming words I might not have stuck through a turbulent first year in grad school. They have been there for me always, without question or hesitation.
Andrea Bunt
The University of British Columbia
August 2001

To my mom and dad

Chapter 1

Introduction

As software packages and the World Wide Web continue to grow, users are often faced with applications that are rich with features and information, but that provide little guidance on how to use the features or find the information that is relevant to them. Consider a user who is trying to find information on a topic of interest on the Internet. When the user opens a Web browser, there are no direct instructions on how to find this information. Rather, the user must begin to experiment with various search queries and explore different web pages until she is able to find what she is looking for. This type of environment, which requires its users to explore and experiment without providing much explicit instruction, is known as an open environment.

The problem with open environments is that, for various reasons, users are not always able to explore effectively. A potential solution to this problem would be to augment the environments with a model of the user that could assess the effectiveness of the user's exploratory behaviour. This would allow the environments to sense when their users are not exploring effectively and adapt accordingly.

This thesis investigates issues involved in creating a user model that can assess a user's exploratory behaviour in open learning environments (open environments that are used for educational purposes). If properly designed, this model would allow the environment to sense when the student is experiencing difficulty with the exploration process and provide tailored assistance, thereby improving the experience of learning through open environments for students with a wide range of learning styles.
1.1 Intelligent Tutoring Systems and Open Learning Environments

Intelligent Tutoring Systems (ITSs) are computer-based educational programs that present material in a flexible and personalized way. According to Shute [40], an ITS is a system that is able to: "a) accurately diagnose a student's knowledge structures, skills, and/or style using principles, rather than pre-programmed responses, to decide what to do next and b) adapt instruction accordingly". This kind of diagnosis and adaptation, which is usually accomplished using Artificial Intelligence techniques, is what distinguishes ITSs from Computer-Assisted Instruction.

The ultimate goal of any ITS is to achieve the same results as a human tutor, addressing what is known as the "2 sigma problem" [6], the name for the experimental result that students who were tutored on an individual basis achieved test scores that were two standard deviations better than students who were exposed only to the typical classroom experience. Since resource constraints rarely allow for students to benefit from individual human tutoring, ITS research has focused on ways to simulate this type of personalized instruction. Some ITSs have concentrated on replicating the guided instruction provided by one-to-one tutorial interactions, while others have focused on different educational paradigms, such as open learning environments.

Advocates of systems of the first type believe that students learn best when presented with a set of focused activities, on which they work under the supervision of a computerized tutor or coach [4] [40] [46]. Although systems of this type differ in the strategies they use to provide this supervision, such as the nature and timing of feedback they provide on the students' performance, all follow the principle that the tutor should be controlling the interaction.
Advocates of open learning environments (also known as discovery worlds and exploratory learning environments) place less emphasis on learning through explicit instruction and more emphasis on providing the learner with the opportunity to explore an instructional environment, acquiring knowledge of concepts in the learning domain in the process. Students are allowed to explore these environments by experimenting with different items in the environment's interface. The experiments often involve a simulation where students are able to manipulate different aspects and parameters in order to observe the effects these changes have on outcomes of the simulation. The idea is that through performing a meaningful set of experiments, the students should come to understand relationships between concepts in the domain by generalizing the results of their experiments. Apart from the emphasis on experimentation, open learning environments differ significantly from tutor-controlled ITSs in that they place the onus on the student to initiate these experiments.

For instance, in Smithtown [39], whose target domain is microeconomics, students are provided with a set of variables and given the opportunity to hypothesize on the relationships between these variables, which the students can verify by manipulating the values of those variables. An example of a different approach would be SCI-WISE [49], which focuses less on domain-specific experimentation and more on helping students understand the stages of effective scientific inquiry as they undertake a number of different research projects.

Benefits derived from open learning environments are twofold. First, through active involvement in the learning process students can, in theory, acquire a deeper, more structured understanding of concepts in the domain [39] [43].
Second, these environments provide students with the opportunity to develop meta-cognitive skills associated with effective exploration [31]. These meta-cognitive skills (i.e., skills pertaining to how to learn that are independent of the underlying domain) include hypothesis formation, the ability to construct meaningful experiments and self-monitoring (the ability to monitor one's progress and understanding). In addition, although not directly mentioned in the literature, self-explanation would seem to be another meta-cognitive skill relevant to learning in an open environment. Self-explanation [10] refers to a student's tendency and ability to spontaneously generate explanations to themselves as they are exposed to material in a learning context. While the term "material" has typically been associated with written text, self-explanation also applies in an open learning environment since the students will be exposed to instructional material that is not accompanied by tutor-generated explanations. Thus, to successfully learn from the material, students must be able to generate their own explanations.

1.2 Problems with Open Learning Environments

While there has been increasing evidence of the effectiveness of tutor-controlled environments (e.g., [2], [14], [13], [25], [26]), empirical evaluations of open learning environments have yielded mixed results. The most significant predictor of success in open learning environments appears to be the students' activity levels, since learners who are more active explorers have been shown to benefit more from environments that provide less structure [31] [39] [38]. An important finding from the point of view of this research is that learning in these environments depends on a number of user-specific traits, such as meta-cognitive skills and knowledge level.
Whether or not students possess the necessary meta-cognitive skills influences how much they benefit from these environments [42] [35] [39]. Students interacting with open learning environments have been found to have problems with the scientific inquiry process, including difficulty formulating hypotheses, performing experiments and drawing conclusions based on the results of their experiments [18]. Other common problems uncovered by empirical evaluations include not knowing how to generalize results of their exploration, and not being able to self-monitor [42]. The inability that many students have to self-explain [36] [10] would also result in students having difficulty generating the necessary explanations about the phenomena they observe to be able to draw conclusions and to generalize results.

Another user-dependent feature that influences students' ability to learn in open learning environments is their domain knowledge. Lower-achieving students tend to have more difficulty in open environments, while indications of the opposite have been found in more restricted environments [34], [35]. For instance, domain knowledge has been found to affect a student's ability to employ certain exploration strategies in a given context [23] [27]. Two common exploration strategies are top-down, where students start with a hypothesis and then perform experiments to verify that hypothesis, and bottom-up, where students perform a series of experiments and gather the data to draw conclusions. The top-down strategy does not work for students without enough domain knowledge to begin formulating a hypothesis, while the bottom-up strategy is ineffective for students who do not have a broad enough understanding of the domain to perform a range of different experiments [27]. Finally, complete coverage of the exploration space can also be problematic without any guidance.
In environments with large exploration spaces, students often fail to uncover all the important concepts in the domain [35]. This could be related to a number of factors, including motivation, self-monitoring and domain knowledge.

Systems whose evaluations have yielded fairly positive results include Sherlock 2 [24], a simulation environment for trouble-shooting avionics, MACROSIM [20], a discovery environment targeted at economics, and SCI-WISE. Both Sherlock 2 and MACROSIM, however, are more restricted than pure open learning environments. Sherlock 2 provides scaffolding, restricting the user's interactions within the environment by providing a very structured set of activities. In a study of the MACROSIM environment, students were given an explicit list of tasks to complete. Furthermore, the MACROSIM evaluation tested learning differences between a group who received normal classroom instruction as well as the opportunity to use the system, and a group that was exposed only to the classroom instruction. Finally, a study of SCI-WISE found positive learning outcomes for all students, with a much larger gain for low-achieving students, but the evaluation did not have a control group [49].

Although these evaluations provide some hope that open learning environments can be successful, their results do not refute any of the findings discussed in this section. Both Sherlock 2 and MACROSIM removed a key feature of open learning environments, which is the freedom students have to initiate the experiments.
Also, since the SCI-WISE and MACROSIM evaluations did not include control groups, it is not possible to determine whether the positive learning outcomes should be attributed to the environments or to the additional time that the students were exposed to the educational material outside of the normal classroom instruction.

1.3 Student Modelling

The problems uncovered in empirical evaluations of open learning environments indicate that additional support is needed to make the environments beneficial for all users. One way of providing this support is to supply each student with feedback on the exploration process throughout the interaction. Since the sense of control and freedom that open learning environments provide to the user has the potential to be very beneficial for learning, it is important to interrupt to provide this feedback only when warranted. Augmenting these environments with a student model is fundamental to determining when and how sensible feedback should be provided.

User modelling is the process of building data structures and inference mechanisms that allow an application to assess certain properties of its user and tailor the interaction accordingly. Examples of relevant properties include i) behaviour patterns, ii) cognitive states such as knowledge, preferences and goals, iii) non-cognitive states such as emotions and personality traits. Equipped with this information, the application can adapt its behaviour to meet the needs of the user in an informed manner. Student modelling is sometimes thought of as a sub-problem of the user modelling problem [22], where the target application is an ITS. In a way, however, student modelling is a more difficult problem since no assumptions can be made concerning the user's knowledge level, which is constantly changing.
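As a concrete (and entirely hypothetical) illustration of the "data structures and inference mechanisms" just described, the sketch below pairs simple stores for behaviour patterns and cognitive state with an update rule and an adaptation hook. None of the names or numbers come from this thesis; the linear update rule in particular is invented purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    # Cognitive state: estimated probability that each concept is known.
    knowledge: dict = field(default_factory=dict)   # concept -> P(known)
    # Behaviour patterns: counts of observed interface actions.
    behaviour: dict = field(default_factory=dict)   # action -> count

    def observe(self, action: str, concept: str, strength: float) -> None:
        """Update the model from one interaction event (invented rule:
        move the estimate toward 1.0 in proportion to evidence strength)."""
        self.behaviour[action] = self.behaviour.get(action, 0) + 1
        prior = self.knowledge.get(concept, 0.5)
        self.knowledge[concept] = prior + strength * (1.0 - prior)

    def needs_support(self, concept: str, threshold: float = 0.4) -> bool:
        """Adaptation hook: flag a concept whose estimate is below threshold."""
        return self.knowledge.get(concept, 0.5) < threshold

model = StudentModel()
model.observe("varied_slider", "slope", 0.3)
print(round(model.knowledge["slope"], 2))   # 0.65
```

A real student model replaces the ad hoc linear update with principled inference; this thesis uses Bayesian Networks for that purpose, as introduced in Section 1.4.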
Student models are used in ITSs for a number of purposes, including to decide when and how to advance the student through the curriculum, offer advice (both solicited and unsolicited), generate problems and activities, and provide tailored explanations [45].

1.3.1 Types of Assessment

So far, the majority of student modelling work has focused on assessment during problem solving. One such assessment is of the student's knowledge of concepts in the domain (e.g., [11]). A second common form of assessment is of the quality of a student's solution steps (e.g., [5]), which can also be used to assess the student's mastery of the rules used to generate the solutions (e.g., [3]). There are many challenges involved with assessing this type of behaviour. One challenge is to determine if a mistake was due to a slip or to incomplete knowledge. On the other hand, when the student demonstrates some correct behaviour there is the possibility that she was just guessing [28]. Other challenges include recognizing the student's solution path (plan recognition) and the uncertainty involved in judging the quality of the solution.

Recently there have been new initiatives in student modelling that have gone beyond knowledge assessment. One of these initiatives has been to model meta-cognitive skills, such as the student model for effective example studying in [14]. Modelling meta-cognitive skills brings a new set of challenges since assessing this type of skill requires access to information that cannot be found in the student's final answer or the steps that lead to that final answer. In addition to knowledge, this type of assessment also requires information on factors such as the student's focus of attention and her reasoning process.
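The slip/guess ambiguity above is typically handled probabilistically. As a minimal sketch (with invented parameter values, and not the model developed in this thesis), a single Bayes update can revise the belief that a student knows a rule after observing one answer, where P(correct | known) = 1 - slip and P(correct | not known) = guess:

```python
def posterior_known(prior: float, correct: bool,
                    slip: float = 0.1, guess: float = 0.2) -> float:
    """One-step Bayes update of P(knows the rule | observed answer)."""
    if correct:
        like_known, like_unknown = 1.0 - slip, guess   # correct, or lucky guess
    else:
        like_known, like_unknown = slip, 1.0 - guess   # slip, or genuine error
    num = like_known * prior
    return num / (num + like_unknown * (1.0 - prior))

p = posterior_known(0.5, correct=True)    # correct answer: belief rises
print(round(p, 3))   # 0.818
p = posterior_known(p, correct=False)     # wrong answer: might be a slip
print(round(p, 3))   # 0.36
```

A correct answer raises the belief only as far as the guess parameter allows, and a subsequent error lowers it without collapsing it to zero, which is exactly the hedging that the slip/guess challenge calls for.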
1.3.2 Bandwidth

As illustrated by the examples mentioned above, each type of student model assessment requires different kinds of information about the student. The less of this pertinent information that the model is able to obtain directly through the student's interaction with the system, the more uncertainty there is in the modelling process. The issue of the amount and quality of information about the student that is available to the model is referred to as the bandwidth issue [45]. When trying to assess a student's knowledge during problem solving, a high bandwidth situation would mean that the model would be able to view not only the steps of the student's solution, but also the mental processes that she used to generate these steps. This has the lowest degree of uncertainty since the model would be able to directly use the correctness of the student's solutions, along with the domain principles she used to generate her solutions, to determine which set of concepts and rules she understands. In a low bandwidth situation, the model might only be able to view the student's final answer, requiring the model to infer the student's understanding of the concepts involved in the problem from very limited information.

High bandwidth is more difficult to come by when modelling a skill such as self-explanation. Rarely would the model be provided with direct information on factors such as the amount of time the student spends reasoning about different parts of the material and the quality of this reasoning process. Thus, the bandwidth is often increased by either designing a restricted interface or by asking the student enough questions. For instance, in Conati and VanLehn's [15] assessment of self-explanation, they provided a restricted interface to obtain information on which portion of the interface the student was focusing her attention.
This data was obtained by masking regions of interest and forcing the students to click on the masks to read the underlying material.

The bandwidth issue creates special challenges for student modelling in open learning environments. As is the case with effective self-explanation, assessing effective exploratory behaviour requires modelling factors that are not easily observable. With open learning environments, however, designing a restricted interface or disrupting the user in any way would remove some of the freedom and control that has the potential to be very beneficial.

1.4 Thesis Approach: Modelling Effective Exploratory Behaviour

To address the concern that open learning environments are not beneficial to all students, this research concentrates on the development of a student model aimed at assessing the effectiveness of the student's exploratory behaviour. Having a model that can sense when the student is experiencing difficulty with the exploration process will allow the environment to alter the interaction to target the causes of difficulty.

Modelling students' exploration has not been extensively researched. Open learning environments are problematic from a student modelling point of view because of two factors. The first factor is the uncertainty involved in assessing the student's behaviour in a low bandwidth situation through an unrestricted interface that gives the model little information about the student's mental processes and intentions. The second challenge is dealing with the unpredictability of the student's exploration path, since there is no pre-defined sequence of activities or tasks for the student to complete. To cope with the uncertainty, the model proposed is based on Bayesian Networks [33].
The unpredictability is handled by dynamically constructing portions of the Bayesian Networks depending on the particular exploration path that the student takes. The Student Model for exploration has been implemented in the context of the Adaptive Coach for Exploration (ACE) [7], an intelligent exploratory learning environment for the domain of mathematical functions. ACE uses the assessment of the Student Model to provide tailored feedback, in the form of hints, targeted at guiding and improving the students' exploration of the material provided throughout the course of the interaction.

1.5 Thesis Goals and Contributions

The primary objective of this thesis is to investigate how to incorporate student modelling into open learning environments as a means of making the environments beneficial for all types of learners through the addition of intelligent support. To meet this objective, the thesis has the following goals:

1. To create a student model that is suitable for providing tailored feedback on the exploration process. This entails:

(a) Investigating what features are needed in a student model to be able to assess the effectiveness of a student's exploratory behaviour.

(b) Creating a student model that can accurately assess these features during the
In addition, student modelling in problem-solving domains, such as algebra and physics, has the advantage that solutions are often broken down into a number of distinct steps, which widens the bandwidth in a non-disruptive manner since it is natural to force students to articulate these steps. Forcing students to articulate the steps of their exploration, by contrast, clashes with the unrestricted nature of open learning environments.

The satisfaction of the goals mentioned above represents an interesting challenge. The low bandwidth, which results from both the type of skill being modelled and the inherently unrestricted nature of open learning environments, introduces a great deal of uncertainty into the assessment task. The originality of this work lies in the fact that there has been very little work done on how to monitor exploration and how to provide tailored feedback in an open learning environment, despite the mounting evidence that such support is needed. The thesis also contributes to research on the use of Bayesian Networks in student modelling. Although Bayesian Networks are well suited for diagnosis involving large degrees of uncertainty, they have yet to be employed in open learning environments. Another contribution is to extend work being done in student modelling to build Bayesian Networks dynamically at run-time.

1.6 Outline

Chapter 2 reviews previous work related to supporting exploration in open learning environments and student modelling using Bayesian Networks. Chapter 3 discusses the ACE environment, concentrating mainly on the Graphical User Interface and the Coach. Chapter 4 presents the Student Model that assesses the effectiveness of the user's exploratory behaviour within the ACE environment. Chapter 5 presents the results of a user study.
Changes made to the Student Model based on findings from this study are discussed in chapter 6. Chapter 7 presents conclusions and plans for future extensions.

Chapter 2

Related Work

This chapter reviews previous work related to creating a student model that assesses effective exploration in an open learning environment. Relevant work includes other approaches to supporting exploration in open learning environments, presented as a contrast to the approach taken in this thesis. In addition, this chapter presents work related to student modelling using Bayesian Networks.

2.1 Support in Open Learning Environments

Section 1.2 described several difficulties that students have in open learning environments. Research aimed at helping students overcome these difficulties has followed two main approaches. The first approach has been to augment the environments with cognitive tools. The second approach has been to provide the students with more explicit, tailored support.

2.1.1 Cognitive Tools

Cognitive tools are designed to help scaffold the application of cognitive skills relevant to open learning environments. They include tools that help students with individual parts of the scientific inquiry process, such as hypothesis formation, and tools that help students apply other relevant meta-cognitive skills, such as self-monitoring and reflection. Example cognitive tools include the Hypothesis Scratchpad in 4SEE [44], which is designed to help students structure their hypotheses, and the Fill-in Forms in [31], designed to help students monitor their progress as they explore. Other examples of cognitive tools are those designed to support reflection, including the graphical trace tool in [37] and the Reflection Assessment Phase in SCI-WISE [49], where peers (and sometimes teachers) evaluate each other's work.
Not all designers have formally evaluated the effectiveness of their cognitive tools. Those who have (e.g., [44] and [31]), however, have found that even carefully designed tools can sometimes interfere with the learning process. This is especially true when all learners are required to use the tools, even those who already possess and apply the targeted skills.

2.1.2 Tailored Support

This approach, which is the one taken in this thesis, supports students in open learning environments by providing explicit, tailored feedback. Because of the difficulty of monitoring student behaviour in such unrestricted environments, few other systems have followed this approach.

Veermans and Van Joolingen [48] claim that full learner modelling in open learning environments is impossible, and that the modelling should focus only on allowing the system to help the learner search the hypothesis and experiment spaces. Their approach focuses on evaluating and supporting the students' abilities to confirm or reject hypotheses by assessing the steps of their experimentation. Unlike ACE, this approach assumes that students will be active enough to generate hypotheses and perform the experimentation necessary to verify them. In addition, since only the correctness of the students' decisions to confirm or reject is judged, the system cannot support students who are having difficulty covering the exploration space by making suggestions as to which hypotheses would give them a more complete understanding of the domain.

In [47], Veermans et al. evaluated the effectiveness of their intelligent help against a control group that received canned, untailored feedback. Although they did not find a significant difference between the post-test scores of the two groups, they did find differences in the way the two groups used the environment.
They found that students in the experimental group spent more time working on a task, performed a greater variety of experiments, and focused more of their experiments on the specific task that they were working on. They also found that the cognitive strengths the students brought to the interaction affected their ability to benefit from the tailored feedback. The pre-tests included questions designed to measure students' intuitive knowledge about the relationships between variables. Students in the experimental condition who scored higher on these questions tended to score higher on the post-tests. Since the same result was not obtained for the control group, this indicates that intuitive knowledge enabled students to better understand how to make use of the tailored support. Both the usage differences between the experimental and control groups and the individual differences in being able to learn successfully from tailored support further indicate the potential benefits of having a model of exploration that could diagnose individual difficulties with the exploration process.

In Smithtown [39], where students are able to perform experiments involving variables in the domain of microeconomics, a distinction is made between the "exploration phase" and the "experiment phase". In the exploration phase, the student gathers information and makes observations about variables in the particular example economy. In the experiment phase, the student forms hypotheses, manipulates variables and draws conclusions pertaining to the laws of supply and demand. In the latter phase, the system supports the students' discovery by guiding them through a fixed sequence of steps that involves generating hypotheses followed by a set of experiments to verify the correctness of those hypotheses.
During this process, the system supplies intelligent feedback on the correctness of the students' steps. Smithtown is equipped with a large knowledge base of rules related to effective inquiry behaviour that helps the system provide this intelligent feedback. Unlike ACE, however, Smithtown does not assist the student in the exploration phase and does not address the needs of students who are unable to perform experiments in the first place.

With Belvedere [32], students construct diagrams of their scientific inquiry behaviour, an activity that allows them to understand the relationships among the various components of the inquiry process. Components in the diagram represent different parts of the inquiry process, such as the hypothesis, the data, and the theory. Components can be connected by arcs with labels such as "supports", "explains" and "conflicts" that explain the relationships between the components. The environment's intelligent support provides advice on the syntactic correctness of the diagrams. For example, a student may have indicated in her diagram that some of the data explain hypotheses. In this case, the system would analyze the diagram and inform the student that data support hypotheses, but do not explain them. The system also provides advice related to the completeness of the diagrams, such as suggesting that the diagram include a hypothesis. Belvedere does not, however, attempt to monitor or understand the student's exploration process.

Finally, Hypadapter [21] has a user model to support exploratory learning in a hypertext system. Information in the model includes user characteristics, such as curiosity level and material presentation preferences, which are obtained from a questionnaire, and knowledge level, which is determined based on the links that the user has followed.
The model is used by the system to restrict the amount of information and the links available for the user to explore. The model does not, however, try to capture how effectively the user is exploring the material presented.

The work of this thesis differs significantly from what has been done previously in two ways. First, the model permits the system to target the needs of less active students. Second, once the model detects that a student is experiencing difficulty with the exploration process, the system can guide this process throughout the interaction, not just when the student submits work to be evaluated. The solution proposed here, however, solves only half of the problem. The model can detect when the student is not exploring effectively, but is not rich enough in features to diagnose the causes of poor exploration, such as lack of motivation or meta-cognitive skills.

2.2 Student Modelling Using Bayesian Networks

Bayesian Networks [33] are directed acyclic graphs whose nodes represent random variables and whose arcs represent direct probabilistic dependencies among those variables. Each random variable has a set of values that it can take on, such as True or False for binary random variables. Each node in the network with predecessors has an associated Conditional Probability Table (CPT), while nodes without predecessors have prior probabilities. The arcs in the network define the dependencies among variables, rendering nodes either dependent or independent given evidence (see [9] for a good explanation of independence in Bayesian Networks). The network can be queried at any time to obtain the belief that a given node has a specified value. New evidence can be introduced into the network by setting the values of one or more of the variables in the network to a specific observed value.
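As a concrete illustration of querying a network after entering evidence, the following sketch applies Bayes' rule to the smallest possible case: a hidden node with a prior, and one observable child with a CPT. The network, node names and probability values here are invented for illustration only; they are not the networks or parameters used in ACE's Student Model.

```python
# Two-node Bayesian network: hidden "knows_concept" -> observable "correct_action".
# All numbers are invented for illustration, not taken from ACE.

prior_knows = 0.5                      # P(knows_concept = True)
cpt_correct = {True: 0.9, False: 0.2}  # P(correct_action = True | knows_concept)

def posterior_knows(observed_correct: bool) -> float:
    """P(knows_concept = True | correct_action = observed_correct), by Bayes' rule."""
    like_true = cpt_correct[True] if observed_correct else 1 - cpt_correct[True]
    like_false = cpt_correct[False] if observed_correct else 1 - cpt_correct[False]
    numerator = like_true * prior_knows
    denominator = numerator + like_false * (1 - prior_knows)
    return numerator / denominator

# Observing a correct action raises the belief in the hidden node,
# an incorrect one lowers it.
print(posterior_knows(True))   # 0.45 / 0.55, roughly 0.818
print(posterior_knows(False))  # 0.05 / 0.45, roughly 0.111
```

In a full network the same computation is carried out by a general inference algorithm rather than a hand-written formula, but the principle of combining priors and CPTs with observed evidence is the same.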
In any low-bandwidth situation, interpreting user characteristics such as domain knowledge and meta-cognitive skills based on limited observations of student behaviour involves a great deal of uncertainty. Bayesian Networks provide a sound way of modelling and processing this uncertainty, making them very suitable for student modelling. Apart from the ability to formalize the uncertainty in the modelling task, the inferences provided by Bayesian Networks are well suited for student model assessment, since the value of every node can be computed given whatever evidence is available. This includes being able to predict effects given causes (i.e., predict students' actions given information on the students' features) and vice versa.

2.2.1 Examples of Bayesian Student Models

A number of other ITSs have used Bayesian student models, including [12], [14], [28], [29], [30] and [50]. Much of their use so far, however, has been for knowledge assessment, as in [29], [28] and [30]. The student model in HYDRIVE [30], an environment in which students learn how to trouble-shoot an aircraft hydraulics system, assesses the students' knowledge of aircraft components and the strategies they use to fix these components. Mayo and Mitrovic [29] use Bayesian Networks to assess students' knowledge of SQL, information that is used to tailor the tutor's curriculum. Finally, the student model in OLAE [28] assesses the student's knowledge of the laws and rules of physics. The student model in Andes [12], the successor to OLAE, also performs plan recognition, since it can determine which solution path the student is following. Recently, there have been some interesting applications of Bayesian Networks that extend beyond knowledge assessment.
These applications include extending the student model in Andes to assess how effectively the student is self-explaining [14] and the work done in the I-Help project, which employs inspectable Bayesian Networks in a distributed setting [50]. The Student Model in ACE provides another form of innovative assessment in that it assesses the effectiveness of the student's exploration.

2.2.2 Drawbacks of Bayesian Networks

One problem with using Bayesian Networks is that if the networks are large, their specification is a time-consuming process. For this reason, student model research has also investigated ways to modify and construct Bayesian Networks at run-time. The student model in Andes [12] constructs its Bayesian Networks directly from problem solution graphs generated by the problem solver. SModel [50], the distributed student modelling server for the I-Help environment, has a facility to construct Bayesian Networks for its student models when provided with an XML description of the networks' nodes, links and prior probabilities. The Bayesian Networks in ACE are partially specified by the model designer and then extended during the interaction according to the curriculum and the student's exploration of the environment.

Another concern with using Bayesian Networks in real-time environments is that they can be computationally expensive [22]. The fact that they have been used successfully in several environments indicates that, as long as the networks are not too large, this computational complexity is manageable. The Bayesian Networks in ACE's Student Model are kept to a manageable size using two techniques: the first is to divide the model into two smaller networks, and the second is to extend the networks dynamically according to the specific material that the student is exploring.
Both of these techniques are discussed in greater detail in chapter 4.

The final major concern with using Bayesian Networks in a modelling application is the issue of how the model designers arrive at the values for the CPTs. One way to deal with this problem is to hand-design the CPTs using informed estimates and to refine these values through empirical evaluations. Another technique involves using machine learning to learn the CPTs from data. ACE follows the first approach.

Chapter 3

ACE

The Adaptive Coach for Exploration (ACE) [7] is an intelligent open learning environment for the domain of mathematical functions. The primary goal behind the development of ACE was to create an environment to evaluate the benefits of providing tailored guidance to support exploration by adding intelligence (in the form of a user model and coach) to the system. A secondary and complementary goal was to develop an environment that allows students to explore the phenomena associated with mathematical functions in a highly graphical manner. The style and content of the material presented in ACE is based loosely on Stewart's precalculus textbook [41].

Figure 3.1 illustrates ACE's four modules: the GUI, the Coach, the Student Model and the Knowledge Base. Arrows in the diagram represent the flow of information between the modules. The GUI dispenses information to the Coach and Student Model related to the learner's actions in the environment and receives directives from the Coach pertaining to what material to present. The communication between the Student Model and the Coach is unidirectional: the Student Model provides the Coach with the information it needs about the student so that the Coach can make decisions regarding the provision of tailored feedback.
Currently, the Coach does not inform the Student Model of the kind of feedback it provides to the student or of how the student responds to it. This is potentially a limitation to be addressed in future versions of the system. Since much of the Coach's feedback is in the form of suggestions, some of which are quite vague, it is first necessary to observe how students tend to respond to this type of feedback before it can be incorporated into the Student Model. Finally, all modules access the Knowledge Base for function-related knowledge.

Figure 3.1: ACE's Modules

This chapter provides a brief introduction to ACE's components. The Student Model, the primary focus of this thesis, is discussed in detail in chapter 4. Since the development of ACE was joint work, appendix A gives the greatly deserved credits.

3.1 The GUI

The GUI is designed to allow students to explore numerous aspects of functions. These aspects include the relationship between the input and output of a function, as well as the relationship between function graphs and their equations. Figure 3.2 is a screen shot of the complete interface for one of ACE's units.

Figure 3.2: The ACE Interface

The top-left window is the central place of interaction, in which the learner works on the exercises that ACE provides to explore various function concepts. At the bottom-right of this panel, there is a "Next Exercise" button that the student presses to indicate that she is finished exploring the current exercise. The right panel is a set of hypertext help pages that contain instructions on how to use ACE and function-related definitions. The bottom panel displays messages from the Coach.

Figure 3.3: The Machine Unit
In addition, there is a series of icons in the top-left corner, whose functionalities are discussed in section 3.1.4.

The material that the GUI presents is divided into units and exercises. Units are collections of exercises whose material is presented with a common theme and mode of interaction. Currently, ACE has three units: the Machine Unit, the Arrow Unit and the Plot Unit. Exercises within the units differ in function type and equation.

3.1.1 The Machine Unit

The Machine Unit (fig. 3.3) provides the student with the opportunity to explore the relationship between an input and the output that a given function generates. The student can explore this relationship by dragging any number of the inputs displayed at the top of the screen to the tail of the function "machine" (the arrow shown in fig. 3.3). The machine computes the output and spits it out the other end of the arrow by encasing the output in an animated pink ball. If there are multiple steps involved in the computation (e.g., substitution and algebraic operations), the student must click the "step" button (found directly below the machine) to view the equation being resolved.

3.1.2 The Arrow Unit

The Arrow Unit (fig. 3.4) allows the student to explore how a range of inputs is mapped onto a range of outputs. This unit requires more active thought on the part of the students, as they must both select which input to experiment with and connect that input to the correct output. This is the only activity within ACE that allows students to demonstrate their understanding directly.

Figure 3.4: The Arrow Unit

Each input has a "dragball" which can be connected to any of the outputs above.
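Both the Machine and Arrow Units revolve around computing a function's output for a chosen input, broken into a substitution step and an arithmetic step. The sketch below illustrates this for the linear function f(x) = 4x - 4 shown in figure 3.4; the function and structure are illustrative, not ACE's actual code.

```python
# Illustrative sketch of a "machine" evaluation for a linear function
# f(x) = a*x + b: a substitution step (what the "step" button reveals)
# followed by the arithmetic that produces the output.

def machine_steps(a: float, b: float, x: float):
    substitution = f"f({x}) = {a}*{x} + {b}"  # substitution step, as a string
    output = a * x + b                        # arithmetic step
    return substitution, output

# The linear function from figure 3.4, f(x) = 4x - 4, evaluated at x = 3.
step, out = machine_steps(4, -4, 3)
print(step)  # f(3) = 4*3 + -4
print(out)   # 8
```

In the Machine Unit this computation is performed for the student; in the Arrow Unit the student must predict the output herself by connecting an input to it.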
As the student drags an input up to one of the outputs, a connecting line is formed. If the student succeeds in choosing the correct output, the line turns green; otherwise the line turns red. The student can reconnect an input to a different output at any time.

3.1.3 The Plot Unit

The Plot Unit is ACE's most interesting unit in terms of the range of exploration that it permits. The goal of this unit is to have the student gain an understanding of the relationship between the graph of a function and its equation. In doing so, the student should also become familiar with different properties of graphs, including slopes and intercepts. The student can manipulate the graph either by dragging it around the screen (using the mouse) or by editing the equation box (shown in the bottom-left hand corner of fig. 3.5). Changes in the position of the graph are immediately visible in the function's equation (also shown in a non-editable form in the top-right hand corner of the screen), and changes in the equation immediately update the graph. The student can zoom in and out using the magnifying glass icons to view the graph from a variety of different perspectives.

Figure 3.5: The Plot Unit

3.1.4 Tools

At the top of the main interaction window, there is a series of icons, which are magnified in figure 3.6. The first, second and fourth icons (from left to right) support navigation through the curriculum, the third icon represents the Exploration Assistant and the last icon is a calculator tool.

Figure 3.6: The Tools Icons

Figure 3.7: The Lesson Browser
small-grange -• zero '* * large-range + large-range -Figure 3.8: T h e E x p l o r a t i o n Assis tant C u r r i c u l u m N a v i g a t i o n W h e n the student clicks on the "Next Exerc ise" bu t ton , the system presents the cur r i cu lum i n a sequential order. However, to remain consistent w i t h the design principles of open learning environments and al low the student as much freedom as possible, there are three addi t iona l ways to navigate other than using the "Next Exerc ise" bu t ton . T h e right and left arrows in figure 3.6 allow the student to skip forward to the next exercise and return to the previous exercise, respectively. T h e fourth icon (a scroll) opens the Lesson Browser (see fig. 3.7), which lets the student j u m p direct ly to any exercise in the cu r r i cu lum by c l ick ing on i t . T h e E x p l o r a t i o n A s s i s t a n t T h e E x p l o r a t i o n Assistant is a cognitive too l that helps students moni tor and organize their explora t ion. It displays and categorizes students ' exploratory actions w i t h i n an exercise in terms of relevant explorat ion cases (see sec. 3.4.3 for a descript ion of relevant explorat ion cases). F igure 3.8 shows the too l open for an exercise i n the Mach ine U n i t after the student 26 has explored a smal l posit ive input and a large negative input . 3.2 The Student Model T h e Student M o d e l monitors the student 's exploratory actions i n the environment to pro-duce a probabi l i s t ic assessment of the effectiveness of the student 's explora tory behaviour. T h e assessment technique, which is based on Bayesian Networks , is a p r imary focus of this thesis and w i l l be discussed in detai l i n chapter 4. 3.3 The Coach A C E ' s coaching component has two m a i n duties. 
The first is to build the curriculum and the second is to use the assessment of the Student Model to provide tailored feedback in the form of hints geared towards helping the student explore more effectively.

3.3.1 Curriculum

The Coach builds the curriculum at run-time from a text file that specifies the unit and function type for each exercise. The coefficients and exponents of the function equations can either be specified manually in the text file or generated randomly from within a specified range (also included in the text file). Appendix B shows a sample curriculum file. Currently, the content of the curriculum remains fixed once it is built. A long-term goal is to be able to adapt the curriculum dynamically throughout the course of the interaction as the Student Model provides an assessment of the student's weaknesses.

3.3.2 Hints

The Coach provides two types of hints: exploration hints and navigation hints. Exploration hints provide suggestions to the student on how to improve her exploration and can be obtained on demand, as the student explores an exercise, by clicking on the "Get Hint" button (located at the bottom of fig. 3.2). Hints are supplied to the student at increasing levels of detail. The lowest detail level is a generic suggestion to explore the current exercise further. The most detailed hints provide suggestions on exactly what things to try. Figure 3.9 shows an example of three levels of hints for the Machine Unit.

Figure 3.9: An Example Hint Window
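The escalation from a generic suggestion to a specific one can be pictured as stepping through an ordered list of hints for a concept. The sketch below uses the three hint texts shown in figure 3.9; the class and its structure are illustrative, not ACE's actual implementation.

```python
# Illustrative sketch of incremental hint levels, using the example hint
# texts from figure 3.9 for the Machine Unit. Structure is invented.
HINTS = [
    "Why not try some more inputs?",                         # generic
    "Why don't you try a large negative number as the input?",
    "Why don't you try plugging in some unexplored input, like x = -49.0",
]

class HintSequence:
    """Serves hints at increasing levels of detail, clamping at the most specific."""
    def __init__(self, hints):
        self.hints = hints
        self.level = 0

    def next_hint(self) -> str:
        hint = self.hints[min(self.level, len(self.hints) - 1)]
        self.level += 1
        return hint

seq = HintSequence(HINTS)
print(seq.next_hint())  # first request: the generic suggestion
print(seq.next_hint())  # second request: a more specific one
```

Repeated requests beyond the last level simply return the most detailed hint again.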
The Student Model determines the concept that is the focus of a hint.

Figure 3.10: A Navigation Hint

If the student tries to move on to a new exercise (either by clicking on the "Next Exercise" button or by using one of the navigation tools) before the Coach feels that the student is ready, the Coach will generate a navigation hint. In these situations the student receives the message displayed in figure 3.10. This message contains a warning along with a suggestion to stay in the exercise and obtain an exploration hint. Since the system is designed to give the student as much control as possible, the student can choose either to follow the Coach's advice or to move on. As with exploration hints, the Coach decides whether or not to generate a navigation hint based on information obtained from the Student Model.

Table 3.1: Function Types and their Attributes

Function Type | Equation Form                           | Attributes
Constant      | f(x) = a                                | y-intercept
Linear        | f(x) = ax + b                           | y-intercept, x-intercept, slope
Power         | f(x) = ax^c                             | scaling factor, exponent
Polynomial    | f(x) = a_n x^n + a_(n-1) x^(n-1) + ...  | set of {coefficient, exponent}

Table 3.2: Function I/O Concepts

Function Type | Equation                                | Substitution | Arithmetic
Constant      | f(x) = a                                | simple       | simple
Linear        | f(x) = ax + b                           | simple       | simple
Power         | f(x) = ax^c                             | simple       | complex
Polynomial    | f(x) = a_n x^n + a_(n-1) x^(n-1) + ...  | complex      | complex

3.4 The Knowledge Base

For the purposes of this system, the function domain is limited enough that it does not require an extensive knowledge-engineering effort. Other math domains, such as first-order calculus, require the student to already possess extensive background knowledge, including mastery of mathematical functions.
This is problematic because not only would this background knowledge have to be modelled in the knowledge base, but students also often lack it, making it difficult for the Student Model to determine whether the student is having difficulty understanding the concepts being presented or is missing the prerequisite knowledge. Since ACE was designed to test the effects of adding intelligent support, it is not desirable to have this confounding variable.

This section highlights some of the key features of the knowledge base. The system currently fully supports four different types of functions: constant functions, linear functions, power functions and polynomial functions. Each function type has an associated set of attributes, concepts and exploration cases.

Table 3.3: Symbols for the Exploration Cases

Symbol  | Meaning
S+      | Small positive number
S-      | Small negative number
Z       | Zero
L+      | Large positive number
L-      | Large negative number
Pos_Neg | Positive and negative version of the same number

3.4.1 Function Attributes

Function attributes, described briefly in table 3.1, are basic properties of a function's equation and graph. The definition of each function in the Knowledge Base contains methods to compute or access these attributes.

3.4.2 Function I/O Concepts

Function I/O concepts are basic operations that a student must be able to perform to generate the correct output for a particular function. These concepts include being able to perform arithmetic operations and substitution, which are further divided into simple and complex arithmetic and substitution, based on the complexity of the function. The I/O concepts for each function type are summarized in table 3.2.
3.4.3 Relevant Exploration Cases

Relevant exploration cases are the salient concepts that should be explored in each exercise in order to gain a thorough understanding of the target material. In the Plot Unit with a constant function, for example, the student should explore how the graph looks with both positive and negative intercepts.

The rules given in figure 3.11 define which exploration cases are relevant for each function type within each unit. Table 3.3 describes the abbreviations found in the rules. For example, the first rule states that, in the Machine Unit, if the function is a constant function, then the student should explore how the function behaves with small positive numbers (S+), small negative numbers (S-), zero (Z), large positive numbers (L+) and large negative numbers (L-).

Machine Unit
  ConstantFunction:   S+, S-, Z, L+, L-
  LinearFunction:     S+, S-, Z, L+, L-
  PowerFunction:      S+, S-, Z, Pos_Neg
  PolynomialFunction: S+, S-, Z, Pos_Neg

Arrow Unit
  ConstantFunction:   S+, S-, Z, L+, L-
  LinearFunction:     S+, S-, Z, L+, L-
  PowerFunction:      S+, S-, Z, Pos_Neg
  PolynomialFunction: S+, S-, Z, Pos_Neg

Plot Unit
  ConstantFunction: positive intercept, negative intercept
  LinearFunction:   positive slope, negative slope, zero slope
  PowerFunction:    even exponent, odd exponent, positive shift, negative shift,
                    large positive scaling, small positive scaling,
                    large negative scaling, small negative scaling, zero scaling

Figure 3.11: Rules for the Relevant Exploration Cases

Figure 3.12: The Knowledge Base's Representation of a Linear Function in the Arrow Unit
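The rules in figure 3.11 amount to a lookup from a (unit, function type) pair to a set of relevant exploration cases. A minimal sketch of that lookup is shown below; the identifiers are illustrative and do not reflect ACE's actual implementation.

```python
# The relevant-exploration-case rules of figure 3.11 encoded as a table.
IO_FULL_RANGE = ("S+", "S-", "Z", "L+", "L-")
IO_POS_NEG = ("S+", "S-", "Z", "Pos_Neg")

RELEVANT_CASES = {
    ("machine", "constant"): IO_FULL_RANGE,
    ("machine", "linear"): IO_FULL_RANGE,
    ("machine", "power"): IO_POS_NEG,
    ("machine", "polynomial"): IO_POS_NEG,
    ("arrow", "constant"): IO_FULL_RANGE,
    ("arrow", "linear"): IO_FULL_RANGE,
    ("arrow", "power"): IO_POS_NEG,
    ("arrow", "polynomial"): IO_POS_NEG,
    ("plot", "constant"): ("positive intercept", "negative intercept"),
    ("plot", "linear"): ("positive slope", "negative slope", "zero slope"),
    ("plot", "power"): (
        "even exponent", "odd exponent", "positive shift", "negative shift",
        "large positive scaling", "small positive scaling",
        "large negative scaling", "small negative scaling", "zero scaling",
    ),
}

def relevant_cases(unit: str, function_type: str):
    """Cases the student should cover for this exercise type."""
    return RELEVANT_CASES[(unit, function_type)]

print(relevant_cases("machine", "constant"))
```

A representation in this spirit lets the Exploration Assistant and the Student Model check a student's actions within an exercise off against the cases the exercise is meant to cover.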
The rule is the same for a linear function, while with power and polynomial functions the student should see how the function behaves with a positive and negative version of the same number (Pos_Neg in fig. 3.11), since a function equation with even exponents will generate the same output in both cases. Large numbers are not included as relevant exploration cases for either of these functions since they could cause some students to get fixated on complicated arithmetic unnecessarily. The Arrow Unit has the same set of rules since it also deals with the relationship between input and output.

In the Plot Unit, the rules specify which function manipulations the students should explore. For a constant function, they should view the graph with both positive and negative intercepts. The rule for linear functions builds on this, adding positive and negative slopes along with the zero slope, which turns the function back into a constant one. The relevant exploration cases for a power function include shifting (which results in the student viewing the graph at a variety of intercepts), scaling (which changes the width and orientation of the graph) and odd and even exponents (which change the shape of the graph). Polynomial functions were not implemented in the Plot Unit at the time this research was done, but are planned for future versions of ACE.

Figure 3.12 displays the representation of a linear function for the Arrow Unit in the Knowledge Base. Although it may not be ideal, this method of knowledge representation is sufficiently detailed to allow ACE's other modules (the Coach, GUI and Student Model) to carry out their responsibilities in an efficient manner.
In particular, the representation of the relevant exploration cases is an important part of the Student Model's assessment of effective exploration behaviour, the focus of the next chapter.

Chapter 4

Student Model - Version I

The first version of ACE's Student Model was built keeping in mind two primary objectives. The first was for the Student Model to generate an assessment of the student's exploration of the ACE environment that would allow the Coach to provide tailored feedback aimed at guiding and improving this exploration when necessary. The second goal was to attempt to use the student's exploratory actions as a means of assessing the student's knowledge of concepts in the domain, since effective exploration should enable the student to gain a better understanding of the explored concepts. This chapter examines the techniques used by the first version of the Student Model to generate its assessment. The second version of the Student Model is discussed in chapter 6.

4.1 Uncertainty in the Modelling Task

Modelling exploratory behaviour in an open environment is not an easy task. Maintaining a natural, unrestricted interaction style gives the student sufficient freedom to explore, but providing this type of freedom lowers the bandwidth of information available to the Student Model. The Student Model has access to low-level information such as mouse clicks and keystrokes, but not to higher-level information such as the student's intentions and cognitive states. Without such information, there is considerable uncertainty involved in the assessment process.

Figure 4.1: A High-Level Description of the Bayesian Networks (nodes: Correct Behaviour, Relevant Exploration Cases, Exploration of Concepts, Exploration of Exercises, Exploration of Units)

To manage this uncertainty, the Student Model uses Bayesian Networks.
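As a toy illustration of the kind of update such a network performs, revising the belief that a concept has been explored after observing one piece of interface evidence, consider Bayes' rule for a single binary hypothesis. All probabilities here are invented for the example and are not ACE's actual parameters:

```python
# Minimal sketch of a Bayesian update over a binary hypothesis ("the student
# has explored this concept") given one observed evidence item (an interface
# action associated with it). Numbers are illustrative only.
def posterior(prior, p_obs_if_true, p_obs_if_false):
    """P(hypothesis | evidence) via Bayes' rule for a binary hypothesis."""
    joint_true = p_obs_if_true * prior
    marginal = joint_true + p_obs_if_false * (1.0 - prior)
    return joint_true / marginal

# Belief before: 0.5. An exploratory action is much likelier if the concept
# really was explored (0.9) than if it was not (0.3).
belief = posterior(0.5, 0.9, 0.3)   # 0.45 / 0.60 = 0.75
```

A full network chains many such updates over the hierarchy of exploration and knowledge nodes, which is what makes an inference engine necessary.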
As described in section 2.2, Bayesian Networks provide a computational framework that allows the uncertainty involved in a modelling task to be formulated and processed in a principled way.

Modelling exploratory behaviour in a situation with limited bandwidth is particularly filled with uncertainty. The Student Model can see which items in the interface the student is experimenting with. This information, however, is not always sufficient to allow the Student Model to determine what the student was trying to accomplish with these interface manipulations. Furthermore, this information does not always give any indication of what the student understands, making it harder for the Student Model to determine directly if the student's actions are contributing to exploration that is effective enough to help her learn the material. The model will sometimes have access to more information than at other times; using Bayesian Networks allows the model to dynamically leverage any available information.

4.2 Structure of the Student Model's Bayesian Networks

The Student Model is separated into two Bayesian Networks: one for the input/output units (Machine and Arrow) and another for the graph manipulation unit (Plot). Separating the model into two networks helped to both divide the initial design task into two more manageable components and address some performance concerns. Updating the probabilities in Bayesian Networks is an NP-hard problem [17], where the running time increases with the number of nodes and arcs. The advantages of having one large network are that all information about the student is available to the coaching component throughout the session and there is no overhead involved in switching between networks.
Having multiple smaller networks, on the other hand, reduces the running time associated with updating (or querying) a node during the session because of the smaller subset of arcs and nodes involved in the computation. In ACE, the skills and behaviour being modelled in the Machine Unit and the Arrow Unit have virtually no overlap with those modelled in the Plot Unit. Thus, the Coach does not require access to both sets of information at one time. Dividing the model into these two separate networks allows the system's performance to remain reasonable throughout the session, despite the fact that the networks grow with each new exercise (a feature that is discussed in subsequent sections). The two networks have a common high-level design, which this section describes.

4.2.1 High-Level Description of the Model's Bayesian Networks

Each network in ACE's Student Model consists of two classes of nodes: exploration nodes and knowledge nodes. Exploration nodes represent the effectiveness of the student's exploratory behaviour. These nodes are present in the network at several different levels of granularity: the exploration of individual exploration cases within an exercise, of individual exercises, of groups of related exercises (units), of groups of concepts that appear across multiple exercises, and of overall exploratory behaviour. Each exploration node can take on the value of either True or False. A True value means that the student has sufficiently explored the item associated with the node (i.e., the function, concept, unit or relevant exploration case). Figure 4.1 provides a high-level description of the interactions among the different classes of nodes.

Assessment at different levels of granularity allows the coaching component to provide a wide range of tailored feedback, some of which is already in ACE.
The assessment of individual exploration cases and groups of concepts can be used to select the content of hints within an exercise. The assessment of how well the student has explored groups of concepts and related exercises can be used to adapt the curriculum and to provide suggestions as to which exercises to explore. Finally, the overall assessment can be used to provide general, high-level feedback on the student's exploration strategies (or lack thereof). Currently, ACE's Coach does not fully support the two latter types of feedback but, because the relevant assessments are already provided by the Student Model, it could easily be extended to do so.

Knowledge nodes represent the student's level of understanding of the relevant domain concepts that she demonstrates during the interaction. Exploration nodes are used in conjunction with evidence of correct behaviour to produce an assessment of the student's understanding (see fig. 4.1). Like exploration nodes, knowledge nodes are binary variables, where a True value represents the probability that the student understands the concept. Evidence of good exploratory behaviour carries less weight than does evidence of correct skill application, as is reflected in the Student Model's Conditional Probability Tables. In ACE's initial design, however, evidence of correct skill application comes only from the Arrow Unit. There was a plan to include additional units that would provide some explicit testing of students' knowledge of all concepts in the domain, but time did not permit all of the elements of the initial proposal to be implemented. Furthermore, any unit that involved a lot of testing would take away some of the feeling of freedom and control that ACE presently permits.
Part of the future development of ACE will include adding more activities that test correct skill application while still maintaining the exploratory nature of the interaction.

Figure 4.2: Two Different Possibilities for the Direction of the Arcs in a Network with Dimensional Variables

ACE's Bayesian Networks are mainly composed of dimensional variables, meaning the nodes form hierarchies, with skills represented at different levels of specificity. According to Jameson [22], there are two possibilities for the direction of the arcs when using variables of this type. The first option is to have the arcs point from the sub-skills to the more general skills (fig. 4.2A). With this option, the sub-skills are independent unless direct evidence of the general skill can be gathered, and having evidence of one sub-skill does not increase the probability that the student possesses another. The other option has the arcs pointing from the more general skills to the sub-skills (fig. 4.2B). With this option, evidence of the student having one sub-skill influences the probabilities that the student also possesses the other sub-skills (unless the higher-level skill is observed, in which case the sub-skills become independent). HYDRIVE [30] is an example of a system that uses option A; OLAE [28] is a system that uses option B. Option A was selected for ACE's Student Model: having explored one sub-concept well does not increase the probability of having effectively explored another sub-concept. Option B makes the most sense in situations where the sub-skills are related; within ACE, however, the sub-skills are not considered to be related, and the model expects the student to explore all aspects thoroughly. The assumption of independence of sub-skills was also made for the knowledge nodes.
Furthermore, when necessary, the effect of related sub-skills can still be achieved by adding arcs between them (with a slight performance cost).

4.3 Network Design

4.3.1 Static vs. Dynamic

Each Bayesian Network has a static portion and a dynamic portion. The static portion is completely specified ahead of time by the model designer, meaning it remains identical for every student and session. In addition to this static component, each network has a dynamic portion that is constructed throughout the interaction with the student. The reason for having part of the network be dynamic is that there are aspects of the modelling process that vary from session to session. First, the curriculum is generated automatically at run-time from a text file and, thus, there could be a different curriculum for each session. Adding nodes dynamically to the network removes any curriculum-dependent information from the static portion, avoiding situations where the model designer has to build a separate network for every conceivable curriculum. In addition to the structure of the curriculum, the exact nature of each exercise is unknown ahead of time since the coefficients and exponents of the function equations may be randomly generated. Finally, using ACE's navigation tools, each student may take a different path through the curriculum, choosing to explore some of the exercises and ignore others. Dynamic additions to the network would be even more essential should the curriculum itself become dynamic, since virtually no details of a session other than the general concepts presented would be known a priori.

4.3.2 Machine/Arrow Units

Static Portion

Figure 4.3 shows the pre-defined portion of the network for the Machine and Arrow Units.
This network contains nodes that represent the student's understanding of concepts that are involved with the ability to generate the input and output of a function, such as substitution and arithmetic (see nodes "simpleArithmetic", "simpleSubstitution", "complexArithmetic" and "complexSubstitution" in fig. 4.3). In addition, the network contains nodes that represent higher-level effective exploratory behaviour, such as the exploration of individual units (e.g., "arrowExploration" in fig. 4.3) and overall exploratory behaviour ("overallExploration" in fig. 4.3).

Figure 4.3: The Static Portion of the Bayesian Network for the Machine and Arrow Units

In all subsequent figures, the nodes labeled with "exploration", in conjunction with those labeled "f_i" and "f_iCase_j", refer to exploration nodes, while all others refer to knowledge nodes.

Dynamic Portion

The portion of the network that is added dynamically (see fig. 4.4) contains nodes representing the exploration of individual exercises, the exploration of relevant exploration cases, and the exploration of categories of concepts. The effectiveness of a student's exploration of an individual exercise is represented by the nodes labeled "f_i" (e.g., "f_1" in fig. 4.4). These nodes are added to the network every time a student visits a new exercise. In this example, the student has visited three functions: f_1, f_2 and f_3. As function nodes are added to the network, they are linked to their corresponding unit nodes. As illustrated by figure 4.4, "f_1" and "f_2" represent exercises in the Machine Unit and "f_3" is an exercise in the Arrow Unit.
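The dynamic growth just described can be sketched structurally: each visited exercise contributes an exploration node linked upward to its unit node and fed by case and time nodes. This is an illustrative data-structure sketch only; ACE's actual networks are built with JavaBayes, and the node names here follow the figures' conventions loosely:

```python
# Structural sketch (not ACE's implementation) of dynamically growing the
# exploration part of the network: parents[n] lists the nodes feeding into n.
class ExplorationGraph:
    def __init__(self):
        self.parents = {"machineExploration": [], "arrowExploration": []}
        self.exercise_count = 0

    def add_exercise(self, unit, num_cases):
        """Add f_i with its case and time parents; link it to its unit node."""
        self.exercise_count += 1
        f = "f%d" % self.exercise_count
        case_nodes = ["%sCase%d" % (f, j) for j in range(1, num_cases + 1)]
        time_node = f + "Time"
        for node in case_nodes + [time_node]:
            self.parents[node] = []          # evidence nodes have no parents
        self.parents[f] = case_nodes + [time_node]
        self.parents[unit + "Exploration"].append(f)
        return f

g = ExplorationGraph()
g.add_exercise("machine", 5)   # f1: a Machine Unit exercise with 5 cases
g.add_exercise("arrow", 4)     # f2: an Arrow Unit exercise with 4 cases
```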
Figure 4.4: Part of the Dynamic Portion of the Bayesian Network for the Arrow and Machine Units that Contains Exploration Nodes

The degree to which the student is considered to have explored an individual exercise is influenced by whether or not she has explored the salient concepts within that exercise, in addition to how long she spent exploring the exercise. This relationship is accomplished in the network by adding nodes for each of the relevant exploration cases associated with the function presented in the exercise (information that can be obtained from the knowledge base) and a node representing exploration time. The relevant exploration case nodes are labeled "f_iCase_j" (e.g., "f_1Case_1" in fig. 4.4) and the time node is labeled "f_iTime" (e.g., "f_1Time" in fig. 4.4). The time node represents a measure of how long the student spent exploring the exercise as a whole, not each individual exploration case. A measure of how long the student spent exploring each case would be more informative, but would also be more difficult to determine. How to measure time spent exploring a particular case will be investigated as part of future research.

Since the relevant exploration case nodes are instances of concepts that can appear across multiple exercises, nodes are added to the network representing the more general effectiveness of the student's exploration of these concepts (e.g., "exploredSmallPos" in fig. 4.4). If an exploration concept is not in the network already, it is added dynamically at the same time as the corresponding relevant exploration case.

The final part of the Arrow/Machine network that is added dynamically (shown in fig. 4.5) deals with the Arrow Unit, the only unit that gives the student the opportunity to directly demonstrate her knowledge.
In this unit, nodes representing proficiency in generating correct input and output of a function are added to the network (e.g., "f_3IO" in fig. 4.5). These "f_iIO" nodes are attached to nodes representing the type of substitution and arithmetic needed to be able to generate the correct output for the function in question. When a student connects an input to an output, a new node labeled "f_iIOCase_j" (e.g., "f_3IOCase_1" in fig. 4.5) is attached to the "f_iIO" node and set to True if the answer is correct and to False otherwise.

Figure 4.5: Part of the Dynamic Portion of the Bayesian Network for the Arrow and Machine Units that Contains Knowledge Nodes

4.3.3 Plot Unit

The structure of the network for the Plot Unit is similar to that for the Arrow and Machine Units. The difference is that there are more detailed knowledge and exploration hierarchies, since this unit involves a wider range of concepts and, accordingly, a greater potential for exploration.

Static Portion

The static portion of the network contains a detailed knowledge hierarchy and a mirrored exploration hierarchy for the concepts associated with the graph of a function. Feeding into each node of the knowledge hierarchy is its corresponding exploration node and a node representing some form of explicit evidence of this skill. The latter nodes have the prefix label "test" (e.g., "testPosIntercept" in fig. 4.6). Unfortunately, this type of direct evidence cannot be gathered with ACE's current set of activities, and so these nodes remained unobserved. Figure 4.6 shows a portion of the network for a linear function. Concepts involved with the graph of a linear function include intercepts (positive and negative) and slopes (positive, negative and zero).
There are analogous portions of the network for each of the different types of functions: the more complicated the function, the more complicated the associated exploration hierarchy. For example, the hierarchy for a constant function consists only of intercept exploration, while the hierarchy for a power function contains concepts such as shifting, scaling and exponents.

Dynamic Portion

As with the Machine/Arrow network, every time a student visits a new exercise, additional nodes are added dynamically. Figure 4.7 shows an example portion of the dynamic network for the Plot Unit, where the student has visited a constant function ("f_1") and a linear function ("f_2"). The first node to be added dynamically is the function node representing how well the student has explored that particular exercise (e.g., "f_1" in fig. 4.7). A subtle difference in this network is that rather than having the unit exploration node influenced by the function exploration nodes, it is instead influenced by the exploration nodes for each of the different types of functions (see fig. 4.6), since there will only be one exercise associated with each function type. As in the Arrow/Machine network, the relevant exploration case nodes (e.g., "f_1Case_1" in fig. 4.7) and a time node (e.g., "f_1Time" in fig. 4.7) feed into each function node. The exploration case nodes represent specific instances of concepts at the bottom level of the exploration hierarchy, such as the exploration of positive and negative intercepts ("exploredPosIntercept" and "exploredNegIntercept" in figs. 4.7 and 4.6).

Figure 4.7: An Example Dynamic Portion of the Bayesian Network for the Plot Unit

4.4 Evidence

The nodes that are observed as evidence are the exploration case nodes (e.g., "f_1Case_1" in fig. 4.4), time nodes (e.g., "f_1Time" in fig.
4.4) and, in the case of the Arrow Unit, nodes for the generation of input and output (e.g., "f_3IOCase_3" in fig. 4.5). Time nodes are observed as True when the time that the student has spent exploring an exercise passes a pre-specified threshold. This threshold is unit-dependent and was determined by observing people using the system. The exploration case nodes are all set to False when they are initially added to the network and are not set to True until the student performs interface actions that the system considers to be an indication of the student having explored the associated concepts.

Each unit has a different interpretation of which interface actions provide such evidence. In the Machine Unit, dragging an input that is an instance of a relevant exploration case to the Machine is considered evidence of the student having explored that case. In the Arrow Unit, this evidence is introduced when the student drags a relevant input to an output. Finally, the student is considered to have explored a relevant exploration case in the Plot Unit when she either drags and drops the graph to an interesting position (as defined by the relevant exploration cases) or edits the function equation to change the graph in a desired way.

4.5 Conditional Probability Tables

The Conditional Probability Tables (CPTs) in the Bayesian Networks were constructed using estimates developed as part of this research. As is the case with all human-designed CPTs, such estimates need to be refined further through empirical evaluations. Rather than fully describing the tedious details of the CPTs, this section will highlight some of the important features.

As illustrated in table 4.1, the CPTs for the function exploration nodes (e.g., "f_1" in fig.
4.4) are set so that the more relevant exploration cases the student explores, the higher the probability that she has effectively explored that function. The same technique is used to update the CPTs for the unit and concept exploration nodes.

Table 4.1: CPT for an Example Function Node f_n

f_nCase1  f_nCase2  f_nCase3  f_nTime   P(f_n = T)
T         T         T         T         0.97
T         T         T         F         0.75
T         T         F         T         0.75
T         T         F         F         0.50
T         F         T         T         0.75
T         F         T         F         0.50
T         F         F         T         0.50
T         F         F         F         0.25
F         T         T         T         0.75
F         T         T         F         0.50
F         T         F         T         0.50
F         T         F         F         0.25
F         F         T         T         0.50
F         F         T         F         0.25
F         F         F         T         0.25
F         F         F         F         0.03

For the nodes representing knowledge of concepts in the domain, the CPTs were constructed using estimates of the importance of mastering the associated sub-skills in order to fully understand the higher-level skills. Table 4.2 shows an example for the ability to generate the correct output ("rightSimpleArrowOutput" in fig. 4.3) for functions involving simple arithmetic and simple substitution. In this case, both simple arithmetic and simple substitution are seen as having equal importance in mastering the higher-level concept.

Table 4.2: CPT for the Ability to Correctly Generate the Output for a Simple Function

simpleArithmetic  simpleSubstitution  P(simpleArrowOutput = T)
T                 T                   0.97
T                 F                   0.45
F                 T                   0.45
F                 F                   0.03

Table 4.3 shows an alternative example for the slope concept ("slope" in fig. 4.6), where the zero slope is of lesser importance. Finally, tables 4.4 and 4.5 show examples of two CPTs for nodes that depend on

Table 4.3: CPT for Understanding the Slope Concept
posSlope  negSlope  zeroSlope  P(slope = T)
T         T         T          0.97
T         T         F          0.8
T         F         T          0.7
T         F         F          0.6
F         T         T          0.7
F         T         F          0.6
F         F         T          0.3
F         F         F          0.03

Table 4.4: CPT for Understanding the Concepts in the Arrow Unit Given Exploration and Direct Evidence

simpleArrowOutput  complexArrowOutput  arrowExploration  P(arrow = T)
T                  T                   T                 0.97
T                  T                   F                 0.8
T                  F                   T                 0.7
T                  F                   F                 0.2
F                  T                   T                 0.8
F                  T                   F                 0.4
F                  F                   T                 0.6
F                  F                   F                 0.03

Table 4.5: CPT for Understanding Positive Intercepts Given Exploration and Direct Evidence

exploredPosIntercept  testPosIntercept  P(posIntercept = T)
T                     T                 0.97
T                     F                 0.6
F                     T                 0.8
F                     F                 0.03

some explicit evidence of the student understanding the corresponding concept and the exploration of that concept. In both cases, if the student has only explored the concept well, the network is less confident that she understands that concept, but the network still uses good exploration as some indication of understanding. An alternative would be to use any explicit negative evidence as an indication that the student definitely does not understand the concept. However, it is possible that she may be deliberately exhibiting incorrect behaviour as part of her experimentation.

4.6 Implementation

The Student Model is written in Java and the Bayesian Networks were built using the JavaBayes tool [16]. The JavaBayes tool comes with a GUI that allows the user to create networks and to perform operations on nodes in the networks, such as observing and querying. In addition, the tool can store networks created with the GUI in a text file. Minor modifications were made to the original JavaBayes code to permit a different application, like ACE, to load a network from one of these text files and use the inference engine at run-time, as well as to allow the application to alter the structure of the network.

Chapter 5

ACE Study

This chapter presents the results of a user study conducted with ACE in October and November of 2000.
The study, although limited in a number of ways, provides some initial support for the effectiveness of the ACE learning environment in promoting learning. In addition, the study provides some evidence of the benefits of this particular approach to supporting exploratory behaviour using tailored feedback, with the help of the Student Model's assessment described in chapter 4.

5.1 Study Goal

The goal of the study was to evaluate the effectiveness of ACE, including the accuracy of the Student Model, to verify the hypothesis that intelligent support in an open learning environment can help to make the experience more beneficial for all types of learners. In addition, analyzing the students' behaviour as they used the system would give insight into ways to improve the interface, the Coach interventions and the assessment of the Student Model.

5.2 Participants

Initially the plan was to conduct the study with ACE's target population, which is high school students. Because of unforeseen difficulties in coordinating with a local high school, the study was run instead using university students taking a computer literacy course. Although not the target population, these subjects were still considered suitable since the teaching assistants for the course felt that their students had very limited math knowledge. The study participants were volunteers from a first-year computer literacy course offered by the Computer Science Department at the University of British Columbia. This course introduces students to computers and application programs and does not serve as an introduction to computer science. The participants were screened further so that those selected had not taken a university math course at the 200 level or above and had not taken any math within the past year.
The total number of subjects was 14: 10 females and 4 males. All subjects signed a consent form and were paid a $20 stipend for their participation.

5.3 Experiment Design

Initially the plan was to conduct a 2-group experiment. One group would use the complete ACE environment; the other would act as a control group, using the ACE environment with the tailored support disabled. This type of design would be a convincing means of evaluating the effectiveness of adding intelligent support to an open learning environment, since learning and usage differences could be compared with and without intelligent support. This type of design would also provide support for including a student model to assess exploratory behaviour, since the intelligent support is based on this assessment.

Ultimately, the decision was made to abandon the 2-group design. Several students who volunteered for the study did not show up for their sessions, leaving only 14 subjects. Dividing such a small subject pool into two groups was unlikely to give any reliable information on ACE's effectiveness. As a result, all the subjects were pooled into one group that used the full ACE environment. More useful information could likely be obtained by analyzing how usage of ACE relates to learning in one large group than could be obtained from a two-group design with two small groups. This decision was also affected by the fact that several subjects turned out to have better knowledge than expected, thus causing a ceiling effect on the pre-test.

Table 5.1: Experiment Schedule

Activity         Duration
Pre-test phase   20 mins
ACE session      45 mins
Post-test phase  15 mins

The study took place in a research lab in the Computer Science Department at the University of British Columbia. At most three subjects took part in the study at one time.
Subjects were instructed not to communicate with each other, and dividers were placed between the computers so that they could not see each other. There was always an observer present in the room, who remained in the background. As illustrated in table 5.1, the duration of each session was at most 80 minutes and consisted of a pre-test phase, a session with ACE and a post-test phase.

The pre-test phase included a paper-and-pencil test designed to gauge students' knowledge of the topics targeted by ACE. The test (found in appendix D) consists of 39 questions, divided equally into questions on function output recognition and generation, graph property recognition, equation property recognition and graph-equation correspondence. The pre-test phase also included 13 qualitative questions designed to gain information on the students' tendency to explore, their prior computer experience and previous math courses. In addition, the subjects wrote a "Need for Cognition" test [8]. This is a well-respected test of an individual's tendency to engage in and enjoy cognitive activities, and was used in the study to see if there was any relation between this measure and a student's tendency to explore. The post-test phase consisted of an ACE topic test constructed to be similar but not identical to the one given to subjects in the pre-test phase, and a nine-item questionnaire targeted at the students' subjective view of their ACE experience (see appendix E for a copy of the post-test).

Table 5.2: Information Recorded in the Log Files

Number of exercises visited: The number of exercises the student chose to explore.

Number of exercises passed: An exercise is passed if ACE let the student leave an exercise without a navigation hint.
Number of stay events: A stay event occurs when a student follows a navigation hint and remains in the current exercise.
Number of leave events: A leave event occurs when the student does not follow a navigation hint and chooses to move on.
Number of exploration hints: The total number of exploration hints accessed by the student.
Average hint level: The average hint level accessed by the student. The hint levels range from 0-2, with level 0 being the most general hint and level 2 being the most specific hint.
Number of exploratory actions: Actions performed by the student that the Student Model uses as evidence of the student having explored a relevant exploration case.
Lesson Browser usage: The number of times the student jumped to an exercise using the Lesson Browser.
Exploration Assistant usage: The number of times the student opened the Exploration Assistant.

5.4 Data Collection Techniques

Each session was observed by one researcher, who recorded informal observations on an observation sheet (see appendix C). Several sessions were also captured on videotape. In addition, ACE was instrumented to produce log files that capture the sessions at a finer level of detail. Table 5.2 summarizes the key interaction events captured in the logs.

5.5 Results and Discussion

Since it was not possible to run a 2-group study, linear regression analysis was used as a data analysis technique to verify whether there is any correlation between different aspects of ACE usage and student learning. The following is a summary of the analysis performed on the event counts summarized in table 5.2.

5.5.1 Effect of ACE on Learning

To verify that ACE triggers any learning at all, the pre-test and post-test means were compared, and a significant difference was found between the two (p = 0.013). Regression analyses were also performed with a number of event counts.
Each analysis involved the given event count and the pre-test score as independent variables and the post-test score as the dependent variable. Using the students' improvement scores as the dependent variable and the event count as the independent variable would not have been as reliable a method of evaluating how each event count affects learning. There are a number of problems associated with using this method in educational research, which are summarized in [19]. One of these problems relates to ceiling effects, which is particularly relevant to this analysis since 8 of the 14 subjects showed a near ceiling effect, scoring 90% or higher on the pre-test. Ceiling effects restrict the distribution of potential improvement scores for different levels of initial knowledge, since higher ability students will not be able to improve as much. Another problem is that not all improvement intervals have the same meaning. A small improvement for a student with high pre-test scores is often much more meaningful than the same improvement for a student with low pre-test scores, since improvement in this interval can mean that the student has mastered the more difficult domain concepts. Instead, [19] suggests using pre-test scores and the experimental treatment as predictors in the regression model and post-test scores as the dependent variable. Having pre-test scores as the first predictor in the model explains the portion of the variance in post-test scores that is due to prior ability, leaving the event count to explain what is left of that variance. Because of the small sample size, no more than two independent variables could be added to the regression model while still obtaining results that were statistically significant. Pre-test score is always a significant predictor of post-test score.
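The regression scheme described above (post-test score regressed on pre-test score plus one event count, so that the event count only explains variance left over after prior ability) can be sketched as follows. This is an illustration with made-up data, not the thesis's actual analysis; the variable names and simulated scores are purely hypothetical.

```python
import numpy as np

# Simulate a study of the same size as the subject pool (n = 14).
rng = np.random.default_rng(0)
n = 14
pre = rng.uniform(40, 100, n)       # hypothetical pre-test scores
hints = rng.integers(0, 10, n)      # hypothetical "exploration hints accessed" count
post = 0.8 * pre + 2.0 * hints + rng.normal(0, 3, n)

# Ordinary least squares with an intercept, pre-test first, event count second.
X = np.column_stack([np.ones(n), pre, hints])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

# R^2 of the full model; with an intercept the residuals have mean ~0.
resid = post - X @ beta
r2 = 1 - resid.var() / post.var()
print(beta, r2)
```

The key design choice, following [19], is that pre-test score enters the model alongside the event count rather than being subtracted out as an improvement score, which sidesteps the ceiling-effect problem described above.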
The following event counts were found to be significant positive predictors of post-test score (after controlling for pre-test score):

1. Total number of exploration hints accessed [p = 0.0406, R² = 84.6%]
2. Number of exercises passed [p = 0.0093, R² = 87.9%]

The first result provides an initial indication that ACE's support of the exploration process, in terms of exploration and navigation hints generated using the Student Model's assessment, improves learning. The second result provides initial evidence that the Student Model's assessment reflects students' learning in the environment, since the Coach determines that a student has passed an exercise by querying the Student Model for relevant probabilities.

The above results could also be attributed to additional factors that were not controlled for, including the student's general academic ability and conscientiousness. The fact that there is not a significant correlation between the two event counts supports the supplied interpretation, but a study that controls for these factors would be required to rule out this possibility.

The total number of exploratory actions was not found to be a significant predictor of learning. This may be because of how the exploratory actions were recorded in the log. Every interface action that indicated exploration of a relevant exploration case was counted. The Student Model, however, uses only one such action to set a relevant exploration case node to True. Exploratory actions after this point are considered by the Student Model to be redundant. In fact, a few cases of students performing redundant explorations were observed. This inability to self-monitor is consistent with one of the problems of open learning environments [42].
Some students may not have the self-monitoring skills necessary to understand when they have explored a concept sufficiently, and as a result begin to over-explore that concept rather than move on.

5.5.2 Effect of Student Characteristics on ACE Usage

Two sources of information were used to try to capture a student's tendency to explore: the "Need for Cognition" (NFC) test and the qualitative portion of the pre-test. Results of past evaluations of open learning environments suggest that students who are more active explorers benefit more from this style of learning (e.g., [39]). Whether or not students are active explorers might be influenced by certain personality traits, such as whether they are exploratory by nature. If so, the Student Model should be able to identify learners who are exploratory by nature to help the Coach adapt the instruction accordingly.

There were no significant results for the qualitative pre-test questions, which is hard to interpret since these questions were hand-crafted and may not be suitable for capturing the desired information. For the NFC, there was one significant result. The number of exploratory actions was found to be a negative predictor of the NFC score [p = 0.0285, R² = 34.0%]. The interpretation of this result is that the larger the number of exploratory actions a student performed, the lower was their "Need for Cognition". This result seems somewhat surprising at first, but it may just be an indication of the over-exploration problem mentioned earlier and of the fact that students with higher NFC scores exhibit more controlled exploration. Or, the NFC might not be a good measure of exploratory tendency.

5.5.3 Subjects' Perception of ACE

These statistics were obtained using the qualitative portion of the post-tests. Two statistically significant results came from this analysis:

1.
How helpful the students found the hints to be, when controlling for pre-test scores, was a positive predictor of the post-test scores [p = 0.0339, R² = 85%]. This indicates that when students expressed that they liked the hints supplied by ACE, this was not simply a product of a "desire to please".

2. How helpful the student found the hints was also a positive predictor of the average level of hint they accessed [p = 0.0267, R² = 34.7%]. This is not surprising since the higher level hints contain more specific suggestions. However, even a generic hint, a suggestion to explore further, was enough to elicit further exploratory behaviour in many students.

5.6 Student Model Accuracy and Limitations

The log files captured several of the Bayesian Network probabilities after a student visited an exercise. These probabilities included those for function exploration, unit exploration, exploration of any relevant concepts, and for any pertinent knowledge nodes. None of these probabilities related significantly to the post-test scores, the NFC or the subjective questions on the pre-test. The only significant statistic concerning the Student Model, discussed in section 5.5.1, was that the number of exercises passed was positively correlated with post-test scores (controlling for pre-test scores). This result is encouraging since the Coach uses the Student Model's assessment of the effectiveness of the student's exploration of that exercise and its associated concepts to determine if a student has passed an exercise. It is disappointing, however, that none of the probabilities in the Bayesian Networks nor choosing to follow or disregard the Coach's navigation hints (stay or move on events) were significant predictors of post-test scores.
Observing students during the sessions and going through the log files manually afterward uncovered some limitations of the first version of the Student Model that could potentially explain some of the lack of results. The more significant limitations include the Student Model's treatment of knowledge levels and how it assesses whether or not a student has effectively explored a relevant exploration case.

5.6.1 Knowledge Levels

Although the study participants were not the target population, having several highly knowledgeable students use the system uncovered one significant flaw in the model. Students who already understand certain concepts should not be expected to explore these concepts as thoroughly as someone with lower knowledge levels. In fact, some of the subjects who appeared to be able to self-monitor would often explore only concepts they did not completely understand. The first version of the Student Model did not take this issue into account; the Machine and Plot units do not test a student's knowledge of relevant concepts directly. Instead, the Student Model interpreted signs of inactivity in certain exercises as poor exploratory behaviour, causing the Coach to intervene when the students tried to leave. This would lead to unnecessary stay events for high ability students, making it hard to correlate this event count with the post-test scores.

5.6.2 Exploration of Relevant Exploration Cases

Currently, the Student Model looks only at interface actions to determine if the student has explored a particular exploration case. This limitation was especially apparent in the Plot Unit. There were a few students who performed several of the desired interface actions in this unit, but did not end up learning the underlying concepts, while others learned these concepts after minimal exploratory actions.
For example, a couple of students experimented with several negative and positive slopes, but did not learn the difference between the two, while others caught on after experimenting with only one of each. These observations suggest that effective exploration is more than just performing the appropriate interface manipulations; the students must also pay careful attention to the results of those actions. For example, in the Plot Unit it is possible that students who did not learn from their actions were moving the function graph around, but did not look at the function equation. Lacking the skills or motivation to self-explain could potentially impede learning in these situations.

5.7 Proof-of-Concept

Part of the hypothesis of this thesis is that it is beneficial to include a student model that can assess exploratory behaviour in an open learning environment because of the individualized form of tailored support that it permits. There are several issues to be resolved in terms of how best to assess exploration and what form this tailored support should take. This study may not provide a concrete test of the hypothesis, but anecdotal evidence of students' behaviour using ACE does highlight the importance of developing a model to assess exploration and provides evidence of what kinds of features such a model should include. To illustrate this point, the following subsections summarize a few key examples of students using the system. The first case shows the need to elicit exploratory behaviour from passive learners. The next two cases involve students failing to uncover all of the concepts available for exploration.

5.7.1 Case 1: Passive/Timid Learner

On the pre-test, this student exhibited a lack of understanding of constant functions. Her exploratory behaviour was adequate in the Machine Unit, but very poor in the Arrow Unit.
In this unit, when she was presented with a constant function, she tried one input, got it wrong, stared at the screen for a length of time and then tried to leave. When the Coach issued a warning she remained in the exercise, stared at the screen for another length of time and then chose to move on despite the Coach's second suggestion to stay. This example illustrates that some learners are passive and/or timid in an open learning environment, and as a result, fail to improve. Providing hints upon request may not be the solution for these learners (she did not access a hint despite the Coach's suggestions that she do so) since students do not always access intelligent support when needed [1]. Thus, it is even more important to have a student model that can identify timid learners and allow the system to provide more proactive, unsolicited guidance.

5.7.2 Case 2: Failing to Uncover Concepts

This learner was very active in the Plot Unit, but failed to improve on post-test questions targeted at the zero slope for linear functions. He experimented with all the relevant exploration cases in the Plot Unit except this one. The Coach generated navigation hints when he tried to leave the exercise. He chose to stay in the exercise, but did not request a hint from the Coach. Although he remained an active explorer, he did not end up experimenting with a zero slope. This example shows the potential benefit of having a model that can assess the exploration of individual concepts. This assessment influenced the Coach's decision to provide a navigation hint when this student tried to leave the exercise, and to provide hints at increasing levels of detail targeted at the zero slope, had the student requested them.

User-initiated hints might not always be an adequate means of eliciting more thorough exploration.
Furthermore, determining the appropriate hint level should be informed by the Student Model, since a generic suggestion to explore further may not be helpful to students who have explored all but the more subtle concepts. This is also consistent with the finding that students do not always benefit from high-level hints [1]. There were three other participants whose behaviour fits this description.

5.7.3 Case 3: Succeeding in Uncovering Concepts

This student scored poorly on the pre-test questions involving negative slopes. He did not experiment with negative slopes in the Plot Unit until he received a navigation hint from the Coach. At this point he asked for more hints and obtained one targeted at negative slopes. As a consequence, he performed the appropriate explorations and improved on the post-test. The benefit of tailored hints was also very noticeable in two other participants.

Despite the study's limitations, it provided some insight into how to improve the Student Model to make it more accurate. These changes are discussed in the next chapter.

Chapter 6

Student Model - Version II

As discussed in section 5.6, ACE's pilot study uncovered some limitations of the first version of the Student Model. This chapter discusses the changes made to the model to address some of these problems; others are left as future work. Results of an evaluation of the updated Student Model, conducted using a small group of subjects similar to the participants in the original study, are also presented.

6.1 Changes to the Network Structure

Figure 6.1 presents a high-level description of the second version of the Student Model, while figure 6.2 shows a simplified example Bayesian Network that would be generated during an ACE session.
In this simplified example, the student is working on exercises in the Machine and Arrow Units and has visited three exercises, as indicated by the f1, f2, and f3 nodes. To simplify the diagram, there are only two types of functions (constant and linear), and the details of the relevant exploration case nodes (e.g., "f1 Case1" in fig. 4.4) and the specific exploration concept nodes (e.g., "smallPosInput" in fig. 4.4) have been omitted. The major changes to the network structure (shown in fig. 6.1) include the incorporation of knowledge into the assessment of effective exploration and the inclusion of another category of exploration nodes that assess how effectively the student is exploring the more general concepts targeted by ACE, such as how effectively the student is exploring the relationship between the input and output for each of the different types of functions (constant, linear, power and polynomial). In addition, as illustrated in the example Bayesian Network, the similarity of exercises is now taken into account and the time nodes have been removed.

Figure 6.1: A High-Level Description of the Second Version of the Student Model

6.1.1 Knowledge Nodes

As discussed in the previous chapter, the first version of the Student Model did not take the student's knowledge of the material covered by ACE into consideration, assuming that all students should explore the material to the same extent. Thus, even students who already had high knowledge often received warnings from the Coach as they navigated through the curriculum. These unnecessary warnings do not follow ACE's design principle that tailored feedback should be provided only to students who are experiencing difficulty. The proposed solution to this problem uses the student's knowledge as part of the assessment of how well that student is exploring.
If the student has high knowledge of a concept, she should not be expected to explore that concept as thoroughly. If the student chooses not to focus on a concept that she already understands, the Student Model should still consider this to be effective exploration, since it demonstrates the student's ability to reflect on her own understanding. On the other hand, the Student Model still expects students with low knowledge to explore thoroughly.

Figure 6.2: A Simplified Example of the New Bayesian Network for the Machine/Arrow Units

In the second version of the Student Model, components of the Bayesian Networks that represent exploration nodes are now influenced by the student's knowledge (see fig. 6.1). More specifically, as shown in figure 6.2, every exploration node is now influenced by one or more knowledge nodes. These knowledge nodes have a different meaning than those in the first Student Model. In the first version of the Student Model, knowledge nodes were updated throughout the interaction using explicit evidence of the student's knowledge of concepts in addition to the student's exploration of those concepts. Because there is very little explicit evidence of the students' knowledge as they use ACE (only in the Arrow Unit), the majority of these nodes could essentially be used only as a means of determining how effectively the student was exploring. In the second version of the Student Model, the knowledge nodes represent either the students' knowledge of concepts prior to exploration, or knowledge assessed through correct behaviour in ACE's less exploratory activities. Explicit evidence on these nodes can currently be obtained only in the Arrow Unit.
Since exploration nodes are now influenced by more than just the student's interface actions, their meaning has also changed since the first version of the model. A True value for an exploration node represents the probability that the student has effectively explored that item, using a broader definition of what effective exploration means. The Student Model believes the student to have explored a concept effectively if she would not benefit from further exploration of that concept. For a student not to benefit from further exploration of a concept means that the combination of the student's exploration actions and her knowledge level has caused her to understand that concept. A low probability means that the student should explore that concept more thoroughly to increase her understanding.

In figure 6.2, the nodes "constantFuncIOEx" and "f1" are two examples of exploration nodes influenced by knowledge nodes. The node "constantFuncIOEx" is influenced by the corresponding knowledge node, "constantFuncIO", while "f1" is influenced by "simpleArithmetic", "simpleSubstitution" and "constantFuncIO" because the exercise involves all of these concepts. Table 6.1 shows a typical CPT for an exploration node. The exploration node, f1, has a high probability value (currently set to 0.8) if all of the associated knowledge nodes are True. If one or more of the associated knowledge nodes are False, the probability that the student would benefit from further exploration of that exercise is once again determined by the student's activity level. The more exploration cases the student explores, the higher is the probability for the function exploration node. If all of the knowledge nodes are True and the student continues to explore, the probability increases slightly with the added exploration.
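The CPT pattern just described can be sketched as a plain function. This is only an illustration of the table's structure, not the model's actual implementation: the real Student Model uses Bayesian Network CPTs, and the three knowledge parents are collapsed here into a single flag for brevity.

```python
def p_explored(case1: bool, case2: bool, all_knowledge_true: bool) -> float:
    """P(f1 = True) given the two exploration-case parents and the
    knowledge parents, following the pattern of Table 6.1."""
    if case1 and case2:
        # Both relevant exploration cases explored: effective regardless
        # of prior knowledge.
        return 0.97
    if all_knowledge_true:
        # High prior knowledge: thorough exploration matters less.
        return 0.9 if (case1 or case2) else 0.8
    # Low prior knowledge: the probability tracks the activity level.
    return 0.5 if (case1 or case2) else 0.03
```

For example, a student with all prerequisite knowledge who explores nothing still gets probability 0.8, while a low-knowledge student who explores nothing gets 0.03.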
6.1.2 Additional General Exploration Concepts

The first version of the Student Model provided an assessment of students' exploratory behaviour within particular exercises with the nodes in the network that represent the relevant exploration cases and the individual functions. The Student Model also assessed the student's behaviour over a longer period of time with the nodes representing unit exploration, related exploration cases appearing in multiple exercises, and overall exploratory behaviour. The pilot study also uncovered the need to maintain a long-term assessment of concepts that span a number of exercises and/or units but do not correspond to specific exploration cases. For example, a number of subjects thoroughly explored most of the exercises in the Arrow and Machine units with the exception of constant functions. This type of information, however, could not be obtained from the original network since constant function exploration was not explicitly represented as one of the relevant exploration cases.

The model now has additional nodes that represent the student's exploration of these more general exploration concepts; these include the exploration of the input/output relationship for each type of function and how effectively the student is exploring the concepts of arithmetic and substitution. These exploration concepts are not associated with particular exploration cases, but with exercises or units.
Depending on the generality of these concepts, the associated exploration nodes are influenced either by how well the student explores related exercises (e.g., "constantFuncIOEx" in fig. 6.2) or how well she explores related units (e.g., "substitutionEx" in fig. 6.2).

Table 6.1: An Example CPT for an Exploration Node in the Second Version of the Student Model

When constantFuncIO, simpleSubstitution and simpleArithmetic are all True:

f1 Case1   f1 Case2   P(f1 = True)
T          T          0.97
T          F          0.9
F          T          0.9
F          F          0.8

When one or more of constantFuncIO, simpleSubstitution and simpleArithmetic is False (all such combinations share the same values):

f1 Case1   f1 Case2   P(f1 = True)
T          T          0.97
T          F          0.5
F          T          0.5
F          F          0.03

6.1.3 Similarity of Exercises

The first version of the Student Model did not take the similarity of exercises into account in its assessment of effective exercise exploration. If a student explores one exercise thoroughly and then chooses to explore a similar exercise, she should not be expected to explore this second exercise as thoroughly. To address this concern, arcs were added between function nodes (see "f1", "f2", and "f3" in fig. 6.2). The strengths of the arcs are determined by similarity scores, which are computed based on the unit and function types.

6.1.4 Time Nodes

The final change consisted of removing the time nodes. How well a student explored an exercise did not seem to depend on the time spent in the exercise, but on how long they spent reasoning about their actions and how long they spent exploring each individual exploration case. Thus, for the time being, the time nodes have been removed from the network and will be added again as part of the future research which incorporates a more sophisticated view of what it means to explore a relevant exploration case effectively.
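Section 6.1.3 states that the arc strengths between function nodes come from similarity scores computed from the unit and function types, but does not give the formula. The sketch below is therefore purely hypothetical: an equal weighting of the two features, just to make the idea concrete.

```python
def similarity(unit_a: str, func_a: str, unit_b: str, func_b: str) -> float:
    """Hypothetical similarity score between two exercises in [0, 1],
    based only on the unit and function type as section 6.1.3 describes.
    The 0.5/0.5 weighting is an assumption, not the thesis's formula."""
    score = 0.0
    if unit_a == unit_b:
        score += 0.5   # same unit (e.g., both in the Plot Unit)
    if func_a == func_b:
        score += 0.5   # same function type (e.g., both linear)
    return score
```

A higher score would translate into a stronger arc between the two function nodes, so thorough exploration of one exercise raises the model's belief about a similar one.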
6.2 Evaluation of Changes

6.2.1 Study Goal

The new Student Model was evaluated with a small group of subjects. A larger participant group was sought but was not obtained because of a lack of interested participants and time constraints. The goal of the study was to investigate how the changes in the model influenced the subjects' interactions with the system. Ideally, the assessment of the new Student Model should result in the Coach intervening less frequently with the high ability students without causing the Coach to ignore the students who are experiencing difficulty. Also, the model should be able to detect situations in which students are systematically failing to explore general concepts, such as the input and output of specific function types.

6.2.2 Study Participants

Since this study was designed to address some of the limitations uncovered by the pilot study, participants with similar levels of math ability were sought. No computer literacy course was being offered at the time of the study, but the recruited subjects were similar to the previous ones since they had also graduated from high school and had not taken any university math courses. A total of five subjects participated in the study.

6.2.3 Experiment Design

The second study used essentially the same experimental design as the pilot study (see sec. 5.3). The only difference was that the second version of the Student Model requires some initial indication of the student's knowledge. This information was obtained from the pre-tests. After the students wrote the pre-test and before they used ACE, the pre-tests were marked and used to set the values of the knowledge nodes. If it was obvious that the student understood a particular concept, the corresponding knowledge node was set to True. If they obviously did not understand the concept, the node was set to False.
If there was insufficient evidence or conflicting evidence, the node remained unobserved, with the prior probabilities set to P(True) = 0.5. An alternative to setting the nodes to either True or False would be to adjust the prior probabilities. This approach will be investigated in the future.

6.2.4 Results

The log files of the students' interaction with ACE were analyzed to see how the changes made to the model influenced the number of navigation hints generated by the Coach. The changes were intended to reduce unnecessary interruptions without the model becoming too lenient. Thus, to evaluate these changes, the old and new logs were analyzed by hand for two event counts: the number of unnecessary navigation hints and the number of premature passes.

Table 6.2: Unnecessary Navigation Hints

                                               Version 1   Version 2
# Subjects                                     14          5
# Unnecessary navigation hints                 62          2
Total # navigation hints                       163         42
% of navigation hints that were unnecessary    38%         5%
Average # of unnecessary navigation hints
per person                                     4.4         0.4

Table 6.3: Premature Passes

                                               Version 1   Version 2
# Subjects                                     14          5
# Premature passes                             6           5
Total # exercises to be assessed               154         55
% of premature passes                          4%          9%
Average # of premature passes per person       0.4         1

Unnecessary navigation hints were considered to occur when the Coach generated a navigation hint despite it being clear from the pre-test that the student understood the concepts associated with that exercise. A premature pass occurred when the Student Model determined that the student passed the exercise, but the student did not appear to understand the associated concepts on the post-test. The modifications to the model do not address the problem of the model overestimating the students' exercise exploration, so the count of premature passes was not expected to change significantly from the previous experiment. Tables 6.2 and 6.3 show the results of the analysis.
As desired, the new Student Model resulted in a dramatic reduction in the percentage of navigation hints that were deemed to be unnecessary (from 38% with the old model to 5% with the new model). This is an important reduction since, with the previous version, students were receiving an average of 4.4 unnecessary interruptions per session, while they received an average of fewer than one with the new model. The number of premature passes did rise slightly (from 4% to 9%). This indicates that the second model still overestimates the students' exploratory behaviour: when students perform a large number of exploratory interface actions, the model assesses good exploratory behaviour even though some of these students do not learn from their exploration. The results of this study provide further evidence that, to accurately assess effective exploration, the model needs to place more weight on the amount of time the student spends reflecting on each of her exploratory actions and on the student's ability to self-explain the phenomena that she observes in the environment. The elimination of the time nodes and the incorporation of exercise similarity also could have caused part of the increase in premature passes, since both changes resulted in the model becoming more lenient.

The study did not provide an evaluation of the general exploration concept nodes. None of the participants in this study chose to ignore any particular high-level concepts, as was the case in the first study. The coaching component also needs to be extended before this change can be fully evaluated. Currently, the Coach makes suggestions within an exercise and warns the student as she leaves an exercise. The Coach does not, however, supply exercise navigation hints that try to steer the student towards exploring exercises that target the higher-level concepts that she has not effectively explored.
Chapter 7

Conclusions and Future Work

This thesis presented research on creating a student model that can assess the effectiveness of a student's exploratory behaviour in an open learning environment. This work addresses a substantial limitation of open learning environments: previous empirical evaluations have found that students' ability to learn in these environments depends on certain user-specific features that influence their ability to explore effectively. The Student Model introduced here permits the provision of tailored feedback on a student's exploration process in an attempt to make the environment beneficial for all learning styles. It does so by monitoring the students' actions in the environment unobtrusively, to maintain the unrestricted nature of open learning environments.

7.1 Satisfaction of Thesis Goals

7.1.1 Model for Exploration

The primary goal of the thesis was to create a model suitable for the provision of tailored feedback on the exploration process. This goal involved two parts: i) to determine what type of features the model should include, and ii) to build a model that is capable of assessing these features during students' interaction with the ACE environment. The model's features were determined through an iterative design process. The model was first constructed based on an initial hypothesis of how to assess effective exploration. This hypothesis was tested in an evaluation with human subjects. The results of this evaluation were then used to design a revised model, which was once again evaluated. The model was created using Bayesian Networks to handle the large amount of uncertainty involved in monitoring student behaviour in an open environment.
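As a minimal illustration of the kind of probabilistic update such a network performs, and of the difference between clamping a node to True/False and merely adjusting its prior (the alternative raised in section 6.2.3), consider one binary knowledge node with a single child exploration node. This is a hand-rolled sketch with made-up probabilities, not ACE's actual networks:

```python
def child_marginal(p_parent_true, cpt):
    """P(child=True) given P(parent=True) and
    cpt = (P(child=T | parent=T), P(child=T | parent=F))."""
    p_given_t, p_given_f = cpt
    return p_parent_true * p_given_t + (1 - p_parent_true) * p_given_f

# Parent: "student knows the concept"; child: "exploration will be effective".
# The CPT values are invented for illustration.
cpt = (0.9, 0.3)

# Node left unobserved: uniform prior P(True) = 0.5, as in section 6.2.3.
print(round(child_marginal(0.5, cpt), 2))  # 0.6

# Current approach: clamp the node to True on clear pre-test evidence.
print(round(child_marginal(1.0, cpt), 2))  # 0.9

# Alternative left for future work: soften the prior instead of clamping.
print(round(child_marginal(0.8, cpt), 2))  # 0.78
```

The soft-prior option keeps the node uncertain, so later evidence can still shift it, whereas a clamped node is fixed for the rest of the interaction.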
The type of uncertainty present in the assessment process stems from the freedom the environment gives the student and from the lack of information available concerning the reasons behind the students' actions and how these actions contribute to effective exploration. The model performs its assessment by monitoring the student's exploration of the relevant exploration cases present within each exercise. Nodes corresponding to these relevant exploration cases form the basis of the rest of the assessment, including the effectiveness of the student's exploration of exercises, of units, of concepts, and of the student's overall exploratory behaviour. Exploration nodes are influenced by a set of relevant knowledge nodes, incorporating the student's existing knowledge into the assessment of effective exploration. The model also had to deal with the fact that ACE's curriculum is built at run-time and that each student can choose to take a different path through the curriculum. To address this problem, the model constructed portions of the Bayesian Networks dynamically over the course of the interaction. These portions involved the function nodes and nodes for each of their associated relevant exploration cases. The Student Model is suitable for the provision of tailored feedback because it allows the environment to sense when and what the student is not exploring effectively. When this occurs, the Coach can guide the exploration process using the Student Model's assessment of which concepts and exercises have yet to be sufficiently explored.

7.1.2 Evaluation of the Model and Intelligent Support

The second goal of the thesis was to evaluate the effectiveness of providing intelligent feedback that supports the exploration process. Two user evaluations were performed.
The first was designed to provide a sense of how ACE's support affects how students use and learn from the environment. The second evaluated the changes made to the model after the first study to improve its accuracy. A number of key results and observations from these evaluations validate the Student Model's design and provide indications that the type of feedback ACE supplies is beneficial to students. In particular:

• The more hints the students accessed, the more they improved.

• The number of exercises that the Student Model felt the students had effectively explored was positively correlated with the students' improvements.

• Some students are inactive despite low knowledge.

• Some students are unable to discover all important domain concepts without guidance.

The first result provides support for the addition of Student Model-tailored feedback, since the content of the hints was tailored to concepts that the Student Model felt were insufficiently explored. The second result is an initial indication that the Student Model is able to accurately assess effective exploration. The final two points are observations from the studies that illustrate why it is important to have a student model that can identify students who are not actively exploring and that can generate a type of assessment that allows the Coach to give them specific suggestions targeted at improving their exploration. These studies also uncovered additional factors to include in future versions of the model to improve its assessment and diagnostic capabilities. These factors are described in section 7.3.

7.2 Generalizability

The Student Model was tailored specifically to the ACE environment, but its high-level design is generalizable to any open learning environment with a set of activities and exploration concepts of interest within those activities.
Using the model's framework would require defining the static portion of the network (both the structure and the CPTs) and building a knowledge base that could identify the exploration cases and higher-level concepts that are relevant to each activity. In addition, the environment would have to decide what behaviour patterns signify effective exploration of the relevant exploration cases.

7.3 Limitations

Despite being subjected to an iterative design process, the proposed Student Model still has two main limitations. First, it has a limited view of what it considers to be effective exploration of a relevant exploration case. As demonstrated by the user evaluations, the current interface actions alone may not be sufficient to determine if the student has effectively explored a concept. Other factors that seem to be important include whether or not the student is attending to the effects of her exploratory actions, and whether or not the student is reflecting on and generating self-explanations about the material she is exploring. The second main limitation in the design of the Student Model is that, although it assesses the effectiveness of the student's exploratory behaviour, it is not sufficiently rich in features to diagnose the causes of poor exploration. Results of many empirical evaluations of open learning environments show that effective exploration depends on a number of factors, including meta-cognitive skills, exploration strategies and, potentially, motivation. Currently the model can inform the Coach if the student is not exploring effectively, and it allows the Coach to provide specific suggestions as to what elements need further exploration. The model does not, however, allow the Coach's feedback to target the underlying causes of the poor exploration.
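The querying just described — asking the concept-level exploration nodes which concepts are unlikely to have been explored effectively, so the Coach can target its suggestions — might look like the following sketch. The node names, threshold, and probabilities are hypothetical, not taken from ACE:

```python
def hint_targets(exploration_beliefs, threshold=0.5):
    """Concepts whose probability of having been explored effectively falls
    below the threshold, worst first: candidate targets for Coach hints."""
    weak = [(p, c) for c, p in exploration_beliefs.items() if p < threshold]
    return [c for p, c in sorted(weak)]

# Hypothetical marginals read off the concept-level exploration nodes.
beliefs = {"slope": 0.82, "intercept": 0.35, "scaling": 0.48, "exponent": 0.91}
print(hint_targets(beliefs))  # ['intercept', 'scaling']
```

Sorting worst-first lets the Coach address the least-explored concept before the borderline ones.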
Finally, the evaluations of ACE provide indications of the accuracy of the Student Model and of the benefits of providing tailored feedback. Since the evaluations did not have control groups, however, it is not possible to determine exactly what features of ACE contributed to these positive learning outcomes. Untailored Coach instructions or the GUI by themselves could potentially be responsible for the obtained outcomes.

7.4 Future Work

7.4.1 Exploration of Relevant Exploration Cases

To fully assess effective exploration, the model needs to perform a more sophisticated assessment of what it means for a student to explore a relevant exploration case effectively. Currently, the model bases this assessment on only the student's interface actions. The findings of past evaluations of open learning environments and the work done in this thesis suggest, however, that the exploration process depends on a number of factors, including knowledge of exploration strategies, domain knowledge, motivation, and meta-cognitive skills such as self-explanation, self-monitoring and reflection. In addition, factors such as the student's emotional state and personality traits could also be relevant, especially to modelling motivation. To assess effective exploration properly, these additional factors should be considered by the Student Model. Modelling each additional factor, along with the probabilistic dependencies among factors, would allow the model to perform a richer assessment of effective exploratory behaviour, as long as the model has evidence of some of these factors. ACE's current interface can provide very little information about these additional factors. Tracking self-explanation, which seems to be one of the most relevant factors, requires, among other things, information on the student's focus of attention.
Knowing where the user is focusing her attention would greatly increase the model's ability to interpret the user's actions as evidence of good exploratory behaviour. It is possible for a student's interface actions to indicate good exploration while she is not attending to the results of that exploration. For example, in ACE's graph manipulation activity, the student may be actively manipulating the graph but not examining the changes this causes in the function equation. In this case, the student would fail to gain an understanding of the relationship between a function graph and its equation. Additional variables modelling the user's attention could be added to the networks and updated using eye-tracking. Even with the additional information from the eye-tracker, assessing self-explanation and other meta-cognitive skills relevant to effective exploration requires more information on the student's thought processes than is currently provided by ACE's interface. The challenge will be to re-design the interface to provide more information without taking away the sense of freedom and control that ACE currently permits.

7.4.2 Extending the Model's Diagnostic Abilities

Having the Bayesian Networks include variables on the additional factors discussed in section 7.4.1 would also extend the Student Model's ability to diagnose causes of poor exploration. As is the case with the current Student Model, querying the exploration nodes would allow the environment to sense when the student is experiencing difficulty with the exploration process. When the student isn't exploring effectively, the environment could query variables related to factors influencing this assessment, such as self-explanation and domain knowledge, to obtain a probabilistic estimate of the specific causes of the student's difficulty.
This would permit the environment to tailor its feedback to directly address the most likely causes of difficulty.

7.4.3 Formal Evaluation

Additional formal evaluations are needed to evaluate both the effects of providing tailored feedback and the Student Model's role in this process. The first planned evaluation involves comparing learning and usage differences between an experimental group that would use the full ACE environment and a control group that would use only the ACE interface, with the Coach and Student Model disabled. To evaluate the role of the Student Model, a similar study could be conducted with both groups receiving the Coach's feedback. The experimental group, however, would receive feedback tailored to the assessment of the Student Model, while the control group would receive feedback either on demand or based on an untailored strategy.

7.4.4 Automated Testing

Eventually, this research aims to automate the administration of the pre-tests and post-tests. Currently these tests are used both to evaluate ACE and to allow the Student Model to obtain an estimate of the students' knowledge of function concepts prior to using the system. This process is not only time-consuming for the marker, but also requires that the students wait for the pre-tests to be marked and for the values of the knowledge nodes to be set. Having ACE administer the tests would allow the Student Model to use the results immediately to set the values without the need for human assistance.

7.5 Conclusion

Apart from open learning environments, there are numerous other computer applications that require their users to explore autonomously. The fact that some users are able to explore these environments while others are not creates a problem. If the environments are too open, some users will not be able to explore effectively.
On the other hand, if they are too restricted, other users will get upset at the lack of freedom and control. The Student Model proposed in this thesis is a first step towards a solution to this problem. It can help the environment distinguish between these two classes of users, and it helps to facilitate the provision of adapted support.
[50] J . D . Zapa ta -Rive ra and J . E . Greer . S M o d e l server: Student model l ing in d is t r ibuted multi-agent tu tor ing systems. In J . D . M o o r e , C L . Redfield, and W . L . Johnson, editors, Proceedings of AIED 2001, 10th International Conference on Artificial Intelligence in Education, pages 446-455, San A n t o n i o , T X , 2001. 83 A p p e n d i x A Programming Credits A C E was developed jo in t ly w i t h K a s i a M u l d n e r and M i c h a e l Hugget t . K a s i a M u l d n e r was p r imar i l y responsible for designing and implement ing the Coach , whi le M i c h a e l Huggett bu i l t most of the G U I , w i t h K a s i a p rogramming the Browser T o o l and the E x p l o r a t i o n Assis tant . In addi t ion , bo th K a s i a and M i c h a e l were heavily involved i n the first A C E user study. 84 A p p e n d i x B Curriculum File Figure B . l is the cu r r i cu lum that was used for bo th A C E user evaluations. 85 B E G I N _ S E T : set 1 BEGIN_UNIT:machine BEGIN_STEP:ConstantFunc <S+> E N D _ S T E P B E G I N _ S T E P : LinearFunc <S+;S+> E N D _ S T E P BEGIN_STEP:PowerFunc <2,-3> E N D S T E P BEGIN_STEP:PolyFunc <2;1;5> E N D _ S T E P BEGIN_UNIT:arrow BEGIN_STEP:ConstantFunc <S-> E N D _ S T E P BEGIN_STEP:LinearFunc <S-;S-> E N D _ S T E P BEGIN_STEP:PowerFunc <2,2> E N D _ S T E P BEGIN_STEP:PolyFunc <-l;4;2> E N D _ S T E P B E G I N _ S E T : set2 BEGIN_UNIT:p lo t BEGIN_STEP:ConstantFunc <-3> E N D _ S T E P BEGIN_STEP:LinearFunc <3;-2> E N D _ S T E P BEGIN_STEP:PowerFunc <2,-3> E N D _ S T E P F igure B . 
l : A Sample C u r r i c u l u m F i l e 86 A p p e n d i x C Observer Sheet Participant: Date: Time: Comments: (H=Help, M=Mood, E=Exploration, S=System, 0=Other) Category Time Comment H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O H / M / E / S / O 87 A p p e n d i x D Pre-Test Part 1: Questionnaire 1. A package gets dropped off at your doorstep, containing a brand new scooter. The only problem is that assembly is required. You (a) read the manual carefully, and then commence assembly (b) skim the manual quickly, and start to put the thing together (c) ask an expert to help you do it (d) trash the manual and start right away - you will figure it out as you go 2. When working with others, you (a) are very vocal about your ideas - you often have a good idea of how to do things (b) contribute some ideas, but don't like to be the main person in charge (c) are mainly quiet - you prefer for others to take charge 3. When learning how to use a new computer application for an assignment, you (a) only figure out use those things that are totally necessary to get the assignment done (b) figure out how to use it to get the assignment done, plus learn a few extra things that caught your attention (c) fully explore the application - you are very curious about the various things you may discover in there 4. When it comes time to decide on what and how to do a school project, you (a) prefer to have the teacher tell you 88 (b) prefer to make one up yourself (c) a mix of a and b: the teacher initially helps, but you have the final say 5. 
When learning something, you like to
(a) get an overall picture of the thing you are learning, and then figure out the details
(b) figure out the details first, and then get an overall picture
(c) a mix of a and b
(d) other:

6. You are in a strange new city. You
(a) get a map right away and use it frequently to plan your routes
(b) get a map but use it only when absolutely necessary
(c) don't bother with a map - if lost, you'll ask someone

7. You play computer games
(a) almost every day
(b) once or twice a week
(c) less often than once a week

8. You use the Web to find information that you need (like a phone number, shopping, etc.)
(a) almost every day
(b) once or twice a week
(c) less often than once a week

9. On a scale of 1-5, where 1 means very much, you like to surf the web
1) Very much 2) so-so 3) Neutral 4) not very much 5) not at all

Part 2: Recognizing the output of a function

For the following function equations, specify the output of the function with the given inputs. Please show your work in the spaces provided.

1) f(x) = 3
What is the output of f(0)?
a) 1 b) 0 c) 3 d) -3
What is the output of f(4)?
a) 4 b) 3 c) -4 d) 0

2) f(x) = 3x + 2
What is the output of f(4)?
a) 4 b) 5 c) 2 d) 14
What is the output of f(-4)?
a) -4 b) 5 c) -10 d) 2

3) f(x) = 5x^2
What is the output of f(-1)?
a) -1 b) -5 c) 5 d) 1
What is the output of f(1)?
a) -1 b) -5 c) 5 d) 1

4) f(x) = 3x^3 + 2x + 1
What is the output of f(0)?
a) 0 b) 1 c) -1 d) 6
What is the output of f(2)?
a) 2 b) 29 c) 23 d) 6

Part 3: Function Output

For each question, choose three inputs and connect them to the correct outputs. Choose the three inputs that will best help you understand the function.
1) f(x) = -2
Outputs: -5 -4 14 -2 2 0 14 15 29
Inputs: -45 -32 -8 -2 0 2 6 25 34

2) f(x) = -2x - 5
Outputs: -5 -45 87 -65 -9 9 -3 -15 33
Inputs: -46 -19 -7 -1 0 2 5 20 30

3) f(x) = -3x^2
Outputs: 0 -27 -12 -147 12
Inputs: -3 -2 0 2 7

4) f(x) = 2x^2 - x + 1
Outputs: 29 1 16 79 22
Inputs: -6 -3 0 3 4

Part 4: Graph Properties

For each question, circle the appropriate response. (Each question refers to a pictured graph, omitted here.)

1) The y-intercept of the graph in this picture is: Positive/Negative/Zero
2) The y-intercept of the graph in this picture is: Positive/Negative/Zero
3) The slope of the graph in this picture is: Positive/Negative/Zero
4) The slope of the graph in this picture is: Positive/Negative/Zero
5) The slope of the graph in this picture is: Positive/Negative/Zero
6) The exponent in the function equation graphed in this picture is: Even/Odd
7) The graph has been scaled by a: Positive Number / Negative Number
8) Which graph has been scaled by a larger number?: Graph A / Graph B

Part 5: Graph/Equation Properties

(Each question offers three pictured graphs, omitted here, plus a "none of these" option.)

The function f(x) = lx may be best described by the graph:
a) [graph] b) [graph] c) [graph] d) None of these graphs

The function f(x) = -2x + 5 may be best described by the graph:
a) [graph] b) [graph] c) [graph] d) None of these graphs

The function f(x) = 4x - 5 may be best described by the graph:
a) [graph] b) [graph] c) [graph] d) None of these graphs

The function f(x) = lx may be best described by the graph:
a) [graph] b) [graph] c) [graph] d) None of these graphs
None of these graphs

Part 6: Equation Properties

The function f(x) = 3 has
a) a positive slope  b) a negative slope  c) a zero slope

The function f(x) = 3x + 7 has
a) a positive slope  b) a negative slope  c) a zero slope

The function f(x) = -1x + 10 has
a) a positive slope  b) a negative slope  c) a zero slope

The function f(x) = -2x + 5 has
a) a positive y-intercept  b) a negative y-intercept  c) a zero y-intercept

The function f(x) = 3x - 6 has
a) a positive y-intercept  b) a negative y-intercept  c) a zero y-intercept

The function f(x) = 5 has
a) a positive y-intercept  b) a negative y-intercept  c) a zero y-intercept

The function f(x) = -7 has
a) a positive y-intercept  b) a negative y-intercept  c) a zero y-intercept

Appendix E: Post-Test

Part 1: Questionnaire

Comments can be added in the spaces provided.

I found the information in the hints helpful (please circle one)
a) Strongly agree  b) Agree  c) Neutral  d) Disagree  e) Strongly disagree

I understood why the computer suggested I stay in an exercise (please circle one)
a) Strongly agree  b) Agree  c) Neutral  d) Disagree  e) Strongly disagree

I found it helpful when the computer told me to explore an exercise more than I had (please circle one)
a) Strongly agree  b) Agree  c) Neutral  d) Disagree  e) Strongly disagree

The amount of guidance provided by the computer was (please circle one)
a) Not enough  b) OK  c) Just right  d) A bit too much  e) Way too much

The computer gave me enough freedom to move around between activities (please circle one)
a) Strongly agree  b) Agree  c) Neutral  d) Disagree  e) Strongly disagree

I found the information in the help pages useful (please circle one)
a) Strongly agree  b) Agree  c) Neutral  d) Disagree  e) Strongly disagree

I prefer to learn this way over using a textbook (please circle one)
a) Strongly agree  b) Agree  c) Neutral  d) Disagree  e) Strongly disagree

My favorite unit was:
Machine (unit 1)
Switchboard (unit 2)
Plot (unit 3)

I feel that I learned a lot using ACE (please circle one)
a) Strongly agree  b) Agree  c) Neutral  d) Disagree  e) Strongly disagree

The thing I liked the most about ACE:

The thing I liked the least about ACE:

What I would like to see added to ACE:

Part 2: Recognizing the output of a function

For the following function equations, specify the output of the function with the given inputs. Please show your work in the spaces provided.

1) f(x) = 8
What is the output of f(0)?
a) 5  b) 0  c) 8  d) -8
What is the output of f(2)?
a) 2  b) 8  c) -2  d) 0

2) f(x) = 3x + 3
What is the output of f(5)?
a) 18  b) 5  c) 6  d) 14
What is the output of f(-5)?
a) -5  b) 3  c) -12  d) -15

3) f(x) = 2x^2
What is the output of f(-2)?
a) -1  b) -4  c) 8  d)
What is the output of f(2)?
a) 2  b) 4  c) 1  d)

4) f(x) = 2x^3 + 3x - 1
What is the output of f(0)?
a) 0  b) 1  c) -1  d) 4
What is the output of f(1)?
a) 1  b) 4  c) 5  d) 6

Part 3: Function Output

For each question, choose three inputs and connect them to the correct outputs. Choose the three inputs that will best help you understand the function.

1) f(x) = 5
Outputs: -5  -6  11  -2  2  0  5  15  22
Inputs:  -35  -27  -7  -3  0  3  5  23  39

2) f(x) = 2x - 4
Outputs: 4  -40  60  -88  -2  -16  -4  -8  38
Inputs:  -42  -18  -6  -2  0  1  4  21  32

3) f(x) = -4x^2
Outputs: 4  -0  -36  16  -4
Inputs:  -2  -1  0  1  3

4) f(x) = 2x^2 - 3x + 2
Outputs: 4  16  67  56  2
Inputs:  -5  -2  0  2  6

Part 4: Graph Properties

For each question, circle the appropriate response. [The graphs themselves are not reproduced here.]
1) The y-intercept of the graph in this picture is: Positive / Negative / Zero
2) The y-intercept of the graph in this picture is: Positive / Negative / Zero
3) The slope of the graph in this picture is: Positive / Negative / Zero
4) The slope of the graph in this picture is: Positive / Negative / Zero
5) The slope of the graph in this picture is: Positive / Negative / Zero
6) The exponent in the function equation graphed in this picture is: Even / Odd
7) The graph has been scaled by a: Positive Number / Negative Number
8) Which graph has been scaled by a larger number?: Graph A / Graph B

Part 5: Graph/Equation Properties

[Each item offers four candidate graphs, a) through d), which are not reproduced here.]

The function f(x) = 5x may be best described by the graph:
a)  b)  c)  d)  None of these graphs

The function f(x) = -2x - 6 may be best described by the graph:
a)  b)  c)  d)  None of these graphs

The function f(x) = 3x + 5 may be best described by the graph:
a)  b)  c)  d)  None of these graphs

The function f(x) = -1x may be best described by the graph:
a)  b)  c)  d)  None of these graphs

Part 6: Equation Properties

The function f(x) = -4 has
a) a positive slope  b) a negative slope  c) a zero slope

The function f(x) = 5x + 2 has
a) a positive slope  b) a negative slope  c) a zero slope

The function f(x) = 4x + 6 has
a) a positive slope  b) a negative slope  c) a zero slope

The function f(x) = -4x + 5 has
a) a positive y-intercept  b) a negative y-intercept  c) a zero y-intercept

The function f(x) = 3x - 4 has
a) a positive y-intercept  b) a negative y-intercept  c) a zero y-intercept

The function f(x) = -7 has
a) a positive y-intercept  b) a negative y-intercept  c) a zero y-intercept

The function f(x) = -3 has
a) a positive y-intercept  b) a negative y-intercept  c) a zero y-intercept
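The numeric items in Parts 2 and 3 of the two tests can be checked mechanically. The sketch below is not part of the original instrument: the function definitions are my transcription of the items above, and the expected values are computed from those definitions rather than taken from an official answer key.

```python
# Answer-key sketch for the Part 2 and Part 3 items (hypothetical helper,
# not part of the thesis). Each Part 2 entry pairs a transcribed function
# with {input: expected output}.

pre_part2 = [
    (lambda x: 3,                {0: 3, 4: 3}),
    (lambda x: 3*x + 2,          {4: 14, -4: -10}),
    (lambda x: 5*x**2,           {-1: 5, 1: 5}),
    (lambda x: 3*x**3 + 2*x + 1, {0: 1, 2: 29}),
]

post_part2 = [
    (lambda x: 8,                {0: 8, 2: 8}),
    (lambda x: 3*x + 3,          {5: 18, -5: -12}),
    (lambda x: 2*x**2,           {-2: 8, 2: 8}),
    (lambda x: 2*x**3 + 3*x - 1, {0: -1, 1: 4}),
]

for part in (pre_part2, post_part2):
    for f, cases in part:
        for x, y in cases.items():
            assert f(x) == y, (f, x, y)

# Part 3 matching items: in the linear items, every listed input maps to
# one of the listed outputs (item 2 of each test, as transcribed above).
f = lambda x: -2*x - 5                       # pre-test item 2
assert all(f(x) in {-5, -45, 87, -65, -9, 9, -3, -15, 33}
           for x in [-46, -19, -7, -1, 0, 2, 5, 20, 30])

g = lambda x: 2*x - 4                        # post-test item 2
assert all(g(x) in {4, -40, 60, -88, -2, -16, -4, -8, 38}
           for x in [-42, -18, -6, -2, 0, 1, 4, 21, 32])

print("all transcribed answer keys check out")
```

Running the sketch raises no assertion errors, which is how the transcriptions above were cross-checked against the listed output sets.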
