FIRST-YEAR MEDICAL STUDENTS' PERCEPTIONS OF OBJECTIVE STRUCTURED CLINICAL EXAMINATIONS' CONTRIBUTIONS TO LEARNING

by

Cynthia Shu Min

BHSc. (Honours), McMaster University, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Cross-Faculty Inquiry in Education)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2020

© Cynthia Shu Min, 2020

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled:

First-year medical students' perceptions of objective structured clinical examinations' contributions to learning

submitted by Cynthia Shu Min in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY in Cross-Faculty Inquiry in Education.

Examining Committee:
Dr. Kevin Eva, Department of Medicine, University of British Columbia (Co-supervisor)
Dr. Daniel Pratt, Department of Educational Studies, University of British Columbia (Co-supervisor)
Dr. Deborah Butler, Department of Educational and Counselling Psychology, and Special Education, University of British Columbia (Supervisory Committee Member)
Dr. Michelle Stack, Department of Educational Studies, University of British Columbia (University Examiner)
Dr. Gary Poole, School of Population and Public Health, University of British Columbia (University Examiner)

Additional Supervisory Committee Members:
Dr. Joanna Bates, Department of Family Practice, University of British Columbia (Supervisory Committee Member)
Dr. Rose Hatala, Department of Medicine, University of British Columbia (Supervisory Committee Member)

Abstract

There is increasing interest in medical education in the use of assessments to enhance student learning. However, there remains limited understanding of the range of factors that influence student learning before, during, and after assessments. Current research has mainly focused on learning after an assessment and on providing feedback as a form of support. A Self-Regulated Learning perspective prompts consideration of a wider range of potential factors and suggests that students' personal objectives drive learning in educational tasks, with students navigating the learning environment in ways they think will most help them work towards their objectives.

Using principles of Constructivist Grounded Theory, first-year medical students attending a Canadian university were interviewed about their experience learning through Objective Structured Clinical Examinations (OSCEs) that were presented by their training program as needing to be completed for formative (21 students) and summative (15 students) purposes. Each OSCE consisted of a series of brief simulated patient encounters. This research demonstrated that students' learning through OSCEs was a complex process influenced by interactions between factors related to the student, the assessment design, and the broader environment.

Lay Summary

To support medical students' learning, educators often rely on the use of assessments. However, it is unclear how educators should act on or implement these opportunities to optimize the learning that takes place. Through interviews with students about their experiences, we found that learning is driven by students' personal interests and that a complex set of factors interact to influence the actions that are taken before, during, and after an assessment.
Preface

The research in this dissertation was proposed and executed by Cynthia Shu Min with the guidance of her committee. The interview protocol was designed and piloted specifically for the research in this dissertation. The methodology and tools for the constructivist grounded theory undertaken here were based upon Dr. Kathy Charmaz's scholarship. The research in this dissertation received approval from the Research Ethics Board at the University of British Columbia, approval number H15-03403.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Chapter 1: Introduction
1.1 Defining summative and formative assessment functions
1.2 Issues in existing research on factors influencing learning through assessments
1.2.1 Provision of feedback is a limited approach to encouraging learning through assessments
1.2.2 Encouraging learning after an assessment is a narrow conceptualization of learning through an assessment
1.3 Using self-regulation as a lens to expand understanding of potential factors influencing students' learning through assessments
1.3.1 Students' interpretation of the relevance of the assessment to their personal objectives influencing learning
1.3.2 Emotions influencing learning
1.3.3 Environmental factors influencing learning
1.4 Contexts of the studies contained in this dissertation and the research objective
Chapter 2: Methodology
2.1 Research Paradigm
2.1.1 Methodology
2.1.1.1 Development of grounded theory
2.1.2 Constructivist grounded theory
2.1.2.1 The role of the researcher
2.1.2.2 The role of previous knowledge or sensitizing concepts
2.1.2.3 Timing of data collection and analysis
2.1.2.4 Products of research
2.1.3 Summary
2.2 Data collection
2.2.1 The research context
2.2.1.1 Characteristics of the OSCEs
2.2.1.1.1 Characteristics of the Formative OSCE
2.2.1.1.2 Characteristics of the Summative OSCE
2.2.2 Participants
2.2.2.1 Participant recruitment
2.2.2.2 Sample size
2.2.3 Timeframe for data collection
2.2.4 Method
2.2.4.1 Interview procedure
2.2.4.2 Data management
2.2.5 Ethics
2.3 Data analysis
2.4 Rigour of the research process
2.4.1 Researcher position and reflexivity
2.4.2 Triangulation
2.4.3 Theoretical saturation and sufficiency
Chapter 3: FOSCE study results
3.1 Students' task interpretation and personal objectives for the FOSCE
3.1.1 Types of personal objectives
3.1.1.1 Personal objectives oriented towards improving performance or working towards achieving a desired level of performance
3.1.1.2 Personal objectives oriented towards having a positive emotional state or improving one's emotional state
3.1.2 Students setting personal objectives of relevance to a variety of contexts
3.1.2.1 Variation in students' understanding of the clinical work context
3.1.2.2 Variation in students' understanding of OSCE-style assessments
3.1.3 Section summary
3.2 Planning and enacting strategies to work towards personal objectives
3.2.1 Working toward personal objectives by preparing for the FOSCE (pre-assessment effects)
3.2.1.1 Factors influencing students' preparation for the FOSCE
3.2.1.2 Section summary
3.2.2 Working toward personal objectives through practicing skills during the FOSCE (pure assessment effects)
3.2.2.1 Factors influencing skills practice
3.2.3 Working toward personal objectives by monitoring performance and adjusting one's approach to learning or one's understanding (post-assessment effects)
3.2.3.1 Variation in sources of information students used to monitor performance and to adjust learning or understanding
3.2.3.2 Variation in the types of information and opportunities students used to monitor performance and adjust learning or understanding
3.2.3.3 Factors influencing students' access to information from reflection in and on action
3.2.3.4 Factors influencing access to information from external support opportunities
3.2.3.5 Factors influencing access to practice opportunities for implementation of learning adjustments post-FOSCE
3.2.3.6 Section summary
3.3 Chapter summary
Chapter 4: SOSCE study results
4.1 Personal objectives for learning in the context of the SOSCE
4.2 Planning and enacting strategies to work towards personal objectives
4.2.1 Working toward personal objectives by preparing for the SOSCE
4.2.1.1 Factors influencing preparation for the SOSCE
4.2.1.2 Section summary
4.2.2 Working toward personal objectives through skills practice during the SOSCE and factors influencing strategy use
4.2.3 Working toward personal objectives through monitoring performance and adjusting learning or understanding after the SOSCE
4.2.3.1 Factors related to students' understanding of the assessment context
4.2.3.2 Factors influencing students' access to information to address performance objectives when prioritizing emotional state objectives
4.2.3.3 Factors influencing students' access to information or practice opportunities when prioritizing performance objectives
4.2.3.4 Section summary
4.3 Chapter summary
Chapter 5: Discussion
5.1 Personal objectives
5.1.1 Interpretation of the relevance of an assessment for addressing personal objectives
5.2 Factors influencing progress towards personal objectives
5.2.1 Students' emotional state
5.2.2 Information students require to guide learning
5.2.3 Assessment scheduling and the broader environment
5.3 Limitations
5.4 Conclusion
References
Appendices
Appendix A: Emails sent students containing information about the FOSCE and the SOSCE
Appendix B: Sample interview guides

List of Tables

Table 1: Characteristics of the FOSCE and SOSCE

List of Figures

Figure 1: Butler and Schnellert's Integrative Model of Self-Regulation (reprinted with permission from Butler and Schnellert)
Figure 2: Timeframe for data collection for the FOSCE
Figure 3: Timeframe for data collection for the SOSCE
Figure 4: Factors influencing first-year medical students' regulation of learning in a FOSCE
Figure 5: Students' task interpretation and personal objectives
Figure 6: Strategies students planned and enacted to work towards personal objectives
Figure 7: Factors influencing enactment of preparing
Figure 8: FOSCE as an opportunity for skills practice
Figure 9: Working toward personal objectives by monitoring performance and adjusting one's approach to learning and understanding of OSCEs
Figure 10: Perceived credibility of information from reflection in and on action and external support opportunities
Figure 11: Factors influencing access to information from reflection in and on action
Figure 12: Factors influencing access to information from external support opportunities
Figure 13: Factors influencing access to practice opportunities for implementation of learning adjustments post-FOSCE
Figure 14: Factors influencing first-year medical students' regulation of learning in OSCEs
Figure 15: Factors influencing students' preparation for the SOSCE
Figure 16: Factors influencing skills practice in the SOSCE
Figure 17: Factors related to students' understanding of the assessment context influencing adjustment of learning or understanding
Figure 18: Factors influencing students' access to information to address performance objectives when prioritizing emotional state objectives
Figure 19: Factors related to students' access to information or practice opportunities when prioritizing performance objectives

Acknowledgements

Completing this Ph.D. has truly been a collaborative journey that would not have been possible without the support of many people.

To my supervisors Dr. Kevin Eva and Dr. Dan Pratt, you have been a constant source of encouragement and guidance throughout this process. Thank you for being so patient with me.

To my advisory committee Dr. Deb Butler, Dr. Joanna Bates and Dr. Rose Hatala, your diverse perspectives have illuminated my "blindspots" and challenged me to think critically about my work.

To the M.D./Ph.D. program, especially Dr. Lynn Raymond, thank you for providing the resources and infrastructure necessary for me to complete the program.

To my parents and brother, thank you for believing in me and letting me find my path in life.

Finally, to my partner Lorne, thank you for patiently listening to me talk about my work and always telling me that I can do it even when I wanted to give up.

Chapter 1: Introduction

One of the central goals of the medical profession is to provide safe and competent care.1 Given the "conventional assertion that assessment drives learning,"2 the provision of external feedback after assessments3 has been one strategy used by medical educators to help students work towards this goal.
Similar to the emphasis in the general education literature,4 the importance placed on the role of external feedback in the development of competent health professionals can be seen in the volume of effort aimed at optimizing feedback delivery, uptake, and impact in medical education.5–8

Formative assessment is the term commonly used to indicate assessments designed by the faculty for the intended purpose of helping students accelerate learning (e.g., those aimed at generating feedback to guide the learner regarding how he or she can improve).4,9 Traditionally in medical education, formative assessments have been contrasted with summative assessments,7 which refer to assessments designed by the faculty for the intended purpose of making a judgement about a student's learning.7

More recently, because of increasing awareness of the role of external feedback in the development of competent practitioners, medical educators have shown increasing interest in determining how assessments are "used by students" (in contrast to "intended by faculty") to support their learning.7 Given that the summative and formative labels assigned by educators do not determine how students respond to an assessment (i.e., they might learn from exercises labelled as summative and might feel judged in exercises labelled as formative), the distinction has begun to blur, with educators questioning whether or not it is necessary (or even useful) to draw a firm distinction between two "types" of assessment.7,10 Existing research suggests that transforming assessments to fulfill the goal of supporting student learning is not as simple as declaring the assessment to be for formative purposes or tweaking the protocol. For example, adding feedback may not make an experience "formative" because feedback is not always adopted by students for formative use.8,10,11 After defining the summative and formative distinction more thoroughly, the following sections elaborate on why this is so by outlining some of the factors that can influence students' learning through assessments.

1.1 Defining summative and formative assessment functions

An assessment refers to any appraisal, judgment, or evaluation of an individual's work or performance.12 The terms "summative" and "formative" are commonly used by medical educators and medical education researchers to label assessments in a manner that describes the intended purpose of the assessment as designed.
Generally speaking, assessments labelled as summative are high-stakes13 and focus on assigning grades or a judgement of competence (i.e., pass or fail)14 because they are intended to be "assessment of learning".13 In contrast, assessments labelled as formative tend to be lower-stakes, with a focus on the provision of feedback, because they are intended to be "assessment for learning".13 Wiliam15 suggests that the terms "formative" and "summative" are "grounded in the function that the evidence elicited by the assessment actually serves, and not on the kind of assessment that generates the evidence." In other words, as Black and Wiliam16 state, an assessment functions formatively "to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited."

Treating "summative" and "formative" as descriptions of the actual function of the evidence elicited by an assessment, rather than as properties of the assessment itself, means that assessments labelled as "summative" or "formative" can both function formatively. An assessment that is used to draw inferences about students' current or future levels of achievement serves a summative function, whereas an assessment used to provide guidance regarding next steps in instructional activities or performance alteration serves a formative function.15

It is important to note that because educators and students independently form their own impressions of the function of an assessment, educators assigning a label of "formative" or "summative" to an assessment cannot be assumed to lead students to use the assessment for the same purpose.

1.2 Issues in existing research on factors influencing learning through assessments

There are two main limitations in the existing research on the adaptation of assessments to support learning: the focus on the provision of more feedback and the focus on encouraging learning after an assessment.

1.2.1 Provision of feedback is a limited approach to encouraging learning through assessments

Thus far, the provision of feedback from external support opportunities has been the dominant focus of research into how to support student learning through summative and formative assessments.8,10,11,17 Feedback "is information about performance generated by teachers, peers, or students themselves that is fed back to learners to give them a sense of progress and inform further action and learning."18 Thus, feedback can be received or taken from an external source or be self-generated. In this dissertation, the term feedback encompasses any qualitative or quantitative information about students' performance (outcome or process) delivered or generated for the sake of performance improvement.

The assumption behind providing students with more feedback is that students desire the opportunity (or at least are willing) to learn from assessments and are limited in doing so simply because they do not have access to the information required. However, feedback is not always used by students to fulfill a formative function.8,10,11 Therefore, the availability of external feedback might not be the sole factor influencing learning through assessments.

Factors related to characteristics of the feedback itself might influence students' engagement and use of feedback even when it is available.
For example, the credibility of feedback, influenced by the quality of the observations collected and the skill and interest of the observer, can influence students' engagement.19–23

Existing research also considers the relationship between the feedback provider and the recipient. For example, strong relationships with mutual understanding of the goals the student and educator are striving to achieve (i.e., an educational alliance24,25) help to foster mutual respect and psychological safety.26

The content of feedback might also influence student engagement. According to Hattie and Timperley,27 effective feedback for learning purposes refers to feedback that helps students understand where they are going, how they are doing, and what they should do to improve. Existing research on students' perspectives suggests that students prefer feedback that is timely, personal, criteria-referenced, and usable for further improvement.28 Not meeting students' expectations in these regards may create a sub-optimal affective state (i.e., an emotional reaction to the feedback such as frustration), which has also been suggested to influence feedback use.25,29–32

At the same time, a lack of engagement with any feedback provided does not necessarily mean that students are not learning from the assessment, because they can draw upon other sources of information. For example, feedback external to the assessment, such as from work-based clinical preceptors, may be actively sought out by the individual.19 Students also generate their own feedback19 while engaged in the task, known as reflection-in-action, or after engagement in the task, known as reflection-on-action.33 Reflection is a "metacognitive process that creates a greater understanding of both the self and the situation so that future actions can be informed by this understanding."34 According to Kolb's experiential learning cycle,35 experience causes the student to think about (i.e., reflect on) what occurred in a way that focuses the learner on understanding how their actions or reactions contributed to the experience (for better or worse) and results in the identification of learning needs. Eventually, students apply newly gained knowledge and skills to new experiences and the cycle begins anew.

However, effectively self-generating feedback can be difficult for students because it may be cognitively demanding for them to monitor their progress while engaging in the assessment and because less skilled individuals are poorly positioned to judge the strength of their performances.36 Reflection-in-action may be especially difficult when the task is complex or when the individual is inexperienced or relatively unknowledgeable.37,38

Students' engagement with feedback might also be influenced by their personal objectives or interests. For example, the desire for positive emotions seems to influence students' engagement. Harrison et al.8 found that the summative nature or label of an Objective Structured Clinical Examination (OSCE) led students to be dominated by a fear of failing the assessment, which also led students to feel negative emotions such as anxiety. They speculated that the relief of passing the OSCE deterred students from engaging with more specific feedback, thus negatively impacting learning.
Because the authors did not explore students' experience in a formative assessment, it is unclear whether emotions influence engagement more strongly in summative assessment contexts or are experienced equally by students in formative assessments as well. If negative emotions are associated with both summative and formative assessments, then educators may face similar challenges with student uptake of feedback even when assessment practices are set up with the dominant intention to be formative.

Students' engagement with feedback can also be driven by a desire to improve performance in the clinical work context. Harrison et al.8 similarly reported that the feedback's relevance to clinical work influenced students' decisions regarding whether or not to engage with it, thereby suggesting that students make learning decisions based on the desire to improve in the clinical workplace and not necessarily based on the extent to which the assessment is intended to be summative or formative. This interpretation is further supported by a recent study3 on factors affecting medical students' receptivity to feedback from different assessment systems that contained either mostly formatively-labelled assessments or a mixture of summatively- and formatively-labelled assessments. It found that perceived relevance of the assessment to authentic clinical contexts was a prominent factor that impacted students' receptivity to feedback.

Given that existing research suggests that the provision of feedback alone may not lead students to use feedback to support their learning, there first needs to be a better understanding of the factors that influence students' learning through assessments. Such an understanding would better position educators to design feedback systems or assessments that support students' learning. Existing research hints at factors related to the feedback itself (e.g., its content) and factors related to the student (e.g., their emotions and their perceptions of the credibility and relevance of feedback) as potential influences. Further, because existing research suggests that some factors may be more prominent in assessments with a specific label, such as negative emotions in summative assessments, it is important to explore both summative and formative assessment contexts to form a comprehensive understanding of the factors influencing students' learning through assessments.

1.2.2 Encouraging learning after an assessment is a narrow conceptualization of learning through an assessment

As previously mentioned, existing research on the formative aspects of assessments has focused on encouraging students to use information or feedback about their performance to guide their learning in a manner that will improve future performance. This is known as a post-assessment learning effect because it takes place mainly after the assessment.39,40

However, focusing on encouraging post-assessment effects alone is a limited approach to encouraging learning through assessments. Gielen et al.39 suggest that there are two other types of learning effects related to assessments: pre-assessment effects (i.e., learning that takes place before the assessment) and pure or true assessment effects (i.e., learning that takes place during the assessment). By focusing on understanding how to support the post-assessment effects of assessment, educators may overlook two other ways to support students' learning through assessments.
Awareness of pre-assessment effects first became prominent in the early 1970s when researchers found that what university students studied was influenced by what they believed would be assessed, regardless of the explicitly stated learning objectives.41,42 Researchers within and outside of medical education have subsequently identified multiple reasons why students are motivated to learn in advance of an assessment.41–44 Some of these factors are similar to the personal objectives influencing student uptake of feedback. For example, students prepared to improve performance in different contexts, such as future clinical careers.45,46 In addition, some students prepared to avoid consequences, academic or otherwise, associated with inadequate performance on an assessment.41,42,44,45 Emotions also motivated preparation (e.g., worry,42,44 threat,44,47 or anxiety44,47) in addition to influencing uptake of feedback. This overlap in the personal objectives influencing pre-assessment and post-assessment learning effects suggests that students' personal objectives might be influencing learning through an assessment in multiple ways.

With respect to learning that takes place during the assessment itself,39,48–50 known as pure or true assessment effects, there is again reason to believe similar factors might have an influence during both formative and summative assessments.39 One example of learning during an assessment occurs when individuals form new connections between the assessed content because the assessment requires them to reorganize their knowledge to tackle novel problems.39 Further, according to the test-enhanced learning literature, the act of being tested helps individuals learn by enhancing their memory for the tested material more than if they had just studied the same material.51,52 This is called the testing effect, and it is believed to happen because retrieving information from memory strengthens the memory for that information, thereby making it more likely that it will be recalled in the future.51 Tests that use a short answer or essay format have been shown to enhance retention of information more than a multiple-choice question format.53 The difference may arise because short answer and essay tests require more effortful retrieval, given that individuals have to construct answers rather than simply recognize the correct answers.53

In sum, though educators have focused on encouraging post-assessment learning effects when encouraging learning through assessments, assessments can also have learning effects before and during the assessment. Therefore, to best support students' learning through an assessment, there needs to be a better understanding of students' perceptions of the factors influencing learning before, during, and after the experience of being assessed.

1.3 Using self-regulation as a lens to expand understanding of potential factors influencing students' learning through assessments

One perspective that provides insight into potential factors influencing students' learning through assessments is that of self-regulation, because it positions the person doing the learning (e.g., the student) as the controller of engagement.
Specifically, I will be drawing on Butler and Schnellert's Integrative Model of Self-Regulation (Figure 1).54 Butler and Schnellert's version of the model is inspired by, and is a simplified version of, the model previously developed by Butler and Cartier.55 Butler and Schnellert's model follows the classic definition of self-regulation: "self-generated thoughts, feelings and actions that are planned and cyclically adapted to the attainment of personal goals."56 These goals or personal objectives motivate or drive individuals' willingness to invest and engage in activities.54 When the personal objectives are related to learning, the term "self-regulated learning" is used.

It has been argued that every person self-regulates their behaviour to achieve personal objectives.56 Therefore, it is inaccurate to characterize some individuals as self-regulating and others as not self-regulating.56,57 What differs across individuals is not whether or not they are engaged in self-regulation but whether they do so effectively towards the achievement of their personal objectives.56

Figure 1: Butler and Schnellert's Integrative Model of Self-Regulation (reprinted with permission from Butler and Schnellert58)

According to Zimmerman and Schunk,59 research on cognition and metacognition, motivation, and behavioural and developmental processes informed the initial research on self-regulation. Initial interest in studying self-regulated learning came from research showing that differences in self-regulatory processes were associated with achievement differences in students60,61 and that supports for self-regulated learning could be used to improve achievement.62,63

In the mid-1980s, two symposia on the topic took place, which resulted in an inclusive definition of self-regulation of learning as "the degree to which students are metacognitively, motivationally, and behaviorally active participants in their own learning process."64 Since that time, many models of self-regulation have been developed.65 Butler and Schnellert's model is integrative in that it was developed through an amalgamation of the factors associated with self-regulated learning in the existing literature.66

According to Butler and Schnellert's model, when confronted with an educational task, students draw on information in the environment, their knowledge, and the perceptions that they bring into the learning environment to engage in task interpretation.66 Task interpretation shapes the rest of the cycle of strategic action by shaping the goals or personal objectives students set, the strategies they will use to work towards personal objectives, and the criteria they will use to monitor performance and make adjustments.66 Effective engagement in self-regulation means engagement in cycles of strategic action that unfold in recursive, dynamic, and flexible ways.67

To work towards personal objectives, students plan and enact strategies. Strategies students can use include cognitive strategies,68 which can be implemented when preparing for an assessment or when undertaking the assessment itself. To be maximally successful, students should use self-regulating strategies to monitor the success of their cognitive strategies and make adjustments if they are not satisfied with their progress.66 Such adjustments can lead to post-assessment effects. If students experience difficulty enacting any of these strategies, it could negatively impact their learning.
In the following sections, I will identify factors related to Butler and Schnellert's model of self-regulation that provide additional insight into how assessment might influence learning.

1.3.1 Students' interpretation of the relevance of the assessment to their personal objectives influencing learning

As previously mentioned, students' personal objectives seem to influence uptake of feedback and preparation for the assessment. A self-regulation perspective also suggests that the extent to which students interpret the content assessed as helping them to work towards their objectives should influence learning through an assessment.

The type of knowledge demanded by the assessment may also influence individuals' engagement with the assessment as a learning opportunity. Students might not interpret the assessment as a meaningful learning opportunity if the type of knowledge assessed does not align with the type of knowledge they value. For example, if a medical student's personal objective is to be an effective physician, an assessment they perceive to require memorization of facts may not align with their objective if they believe that being an effective clinician requires an understanding of knowledge rather than memorization of facts.44,69 In this case, students may not want to engage in learning after the assessment because doing so is perceived as not helping them move towards achieving their personal objective of becoming an effective physician.

1.3.2 Emotions influencing learning

Beyond personal objectives, Butler and Schnellert's model also highlights that how students experience (and respond to) emotions in an educational task can influence learning.54 Emotion refers to "individuals' affective responses when presented with or engaged in an activity."54 Self-regulation research also suggests that, in addition to personal objectives related to learning, students can have objectives oriented towards achieving well-being.70 Striving for well-being may be detrimental to students' learning if prioritizing the maintenance of well-being distracts them from learning for performance improvement.71

1.3.3 Environmental factors influencing learning

Butler and Schnellert's model of self-regulation also emphasizes that an educational task like an assessment is not situated in a vacuum but, instead, within a specific context,66 such as the medical curriculum. Factors in the learning environment, such as the need to take on other educational tasks and social aspects of the environment, might play a role in influencing students' learning through an assessment.

Within the curriculum, students often need to attend to multiple tasks. For example, when individuals prepare for assessments, there are often other assessments or other competing learning priorities that require attention.44,69 Therefore, individuals need to balance the time they spend preparing for the assessment against the time they spend on other educational demands, which might influence their learning pre-assessment. Similarly, after an assessment, individuals may have competing tasks to which they must attend, which may impact their learning from the assessment.

Balancing multiple demands might also be detrimental to students' well-being and lead students to focus on restoring their social, physical, or mental health instead of learning.
For example, before an assessment, some students have to neglect other aspects of their lives in order to engage in the mentally challenging task of preparing for an assessment.69 Learning after an assessment may, therefore, take on lower priority if it means individuals continue to neglect other areas of their lives and forgo the opportunity to take a break69 to restore well-being.

Peers and family members are examples of social influences that may impact individuals' engagement in learning after an assessment. Parents can support learning by encouraging or pressuring students into academic engagement.43 Peers can distract individuals from learning if they encourage individuals to participate in social activities (instead of studying).43

1.4 Contexts of the studies contained in this dissertation and the research objective

According to Butler and Schnellert's model,54 how an individual engages with a task depends on the interaction between the individual and the task context.54 Students' self-regulation under this model is, therefore, context-specific and not necessarily stable. It is thus important to investigate the factors influencing learning under a specific set of conditions. The contexts of the studies reported in this dissertation are the OSCEs required to be undertaken by first-year medical students at the University of British Columbia. In the medical education program investigated in this research, a "Formative OSCE" (FOSCE) took place halfway through the school year and is called such because the educators who designed it intended it to serve a formative function and presented it as such to the students. The "Summative OSCE" (SOSCE), in contrast, took place at the end of the school year and is called such because the educators who designed it intended it to serve a "summative" function. These labels will, therefore, be used throughout this dissertation as shorthand to reflect how the OSCEs were described to students by the medical program.

Across two linked studies, this dissertation was conducted to enhance understanding of the factors first-year medical students report as influencing their learning through OSCEs. Sub-questions include: "How do these factors interact?" and "How do students prioritize different goals in relation to the factors of influence?"

The OSCE was chosen because it is one of the most prominent methods for assessing clinical skills throughout medical training.72 It was first described by Harden et al. in 197573 and is now readily used in the assessment of clinical skills throughout all stages of medical training.72 An OSCE is:

An assessment tool based on the principles of objectivity and standardisation, in which the candidates move through a series of time-limited stations in a circuit for the purposes of assessment of professional performance in a simulated environment.
At each station candidates are assessed and marked against standardised scoring rubrics by trained assessors.72

According to Miller's pyramid of assessment,74 the OSCE assesses whether a student can "show how" they would perform in a simulated environment, which contrasts with other assessments that assess what a student "knows" in an exam that does not require performance or what a student "does" in an actual clinical workplace.72 The simulated environment of the OSCE might mean that students do not perform exactly as they would in identical tasks in authentic clinical workplaces.75

Given the broad use of OSCEs as a method to assess clinical skills, there is increasing interest in their value for supporting and influencing students' learning (i.e., the extent to which they can serve a formative function).6–8,10,46,76–80 The OSCE is, thus, an ideal starting point for exploring students' understanding of learning through assessments and the factors that influence that learning.

Understanding students' perceptions of factors influencing learning through OSCEs might be particularly beneficial for first-year medical students because of the explicit and significant focus on developing clinical skills in the formal pre-clerkship curriculum that first-year and second-year medical students are undertaking. The pre-clerkship portion of medical school usually refers to the first half of medical school, most commonly (in North America) the first two years of a four-year program. Pre-clerkship students mainly learn in classroom settings with minor exposure to authentic clinical workplaces. However, these students are often eagerly interested in getting to clinical workplaces and prepare in a dedicated manner to that end despite having limited clinical exposure.81

The pre-clerkship portion of the curriculum is often contrasted with the clerkship portion, usually the second half of medical school, during which medical students rotate through clinical specialties and learning takes place primarily in authentic clinical workplaces.

Given that medical schools tend to have more central control over the design of the pre-clerkship curriculum than the clerkship curriculum, it is commonly easier for educators to attend to factors influencing students' learning through pre-clerkship OSCEs, an important consideration when deciding how to intervene to address barriers to learning associated with assessment.

Chapter 2: Methodology

To understand how first-year medical students conceive of their learning through assessments, and what factors influence their learning, I used a constructivist grounded theory methodological approach. In this chapter, I will describe the research paradigm that informed my research and how constructivist grounded theory fits within it, the data collection and analysis process, and the processes that built rigour into the research.

2.1 Research Paradigm

A paradigm refers to a set of philosophical assumptions that researchers use to guide their thinking.82 Different paradigms are distinguished from each other through their ontological, epistemological, and methodological beliefs and approaches.83 Because this research aimed to construct an understanding of students' perspectives of learning through assessments, a constructivist or interpretivist paradigm was most suitable.
Constructivist paradigms stem from the philosophy of Edmund Husserl's phenomenology and Wilhelm Dilthey's (along with other German philosophers') study of interpretive understanding called hermeneutics.82,84 Hermeneutics is "the study of interpretive understanding or meaning",82 and the term is used by constructivist researchers as a way "to interpret the meaning of something from a certain standpoint or situation."82 According to Wilhelm Dilthey, the goal of social science research is to understand the lived experiences of humans.85 He also believed that because natural reality is different from social reality, different methods are necessary to study human behaviour than those used to study the natural sciences.85

A constructivist paradigm supports a relativist ontology and a subjectivist epistemology.86 Ontology refers to the nature of reality.86 A relativist ontology means that reality is viewed as socially constructed. Therefore, there can be multiple, diverse interpretations of the same phenomenon and there is no ultimate "correct" way of knowing.87 Social constructions of reality can conflict and change.82 The belief that there is no objective reality also means that the results of a study cannot definitively be generalized to a broader population because reality cannot be "captured" and there is no guarantee that individuals in the broader population construct their reality in the same way as the participants in a study.82 However, it is still possible for people to share the same construction of reality within and across different groups.88 Because there is no objective reality to be discovered, the researcher's goal is to understand the participants' constructions of knowledge.82

Epistemology refers to the theory of knowledge and the relationship between the knower and the would-be known.89 It speaks to how one makes claims about what one knows. Constructivist researchers take a subjectivist and transactional approach to their research, meaning that the research findings are created (i.e., data collected) through interaction between the researcher and the participants.82,89 Therefore, the product of research is influenced by the researcher and, again, should not be seen as reflecting a fundamental and stable social truth.

2.1.1 Methodology

Methodology refers to the "aims, concepts, strategies, and methods used to gain knowledge about a particular phenomenon under investigation."88 The methodology selected for this research is constructivist grounded theory (CGT), which was first described by Kathy Charmaz in 2000.90 A constructivist version of grounded theory was chosen over the original version of grounded theory developed by Glaser and Strauss, or the later version developed by Strauss and Corbin, because those versions are grounded in a theoretical stance of post-positivism,91 which does not fit with the assumptions of an interpretivist paradigm. However, I will first briefly discuss the origins of grounded theory prior to describing the constructivist version that was adopted for this dissertation.
2.1.1.1 Development of grounded theory

Grounded theory was first discussed by sociologists Barney Glaser and Anselm Strauss in The Discovery of Grounded Theory in 1967.92 At the time, quantitative experimental research was dominant, and Glaser and Strauss described the grounded theory method to provide a systematic approach to analyzing qualitative data that would satisfy the rigour of the quantitative paradigm.93 In addition, Glaser and Strauss used grounded theory to shift sociological research from testing theories to generating theories "grounded" in the analysis of qualitative or quantitative data.93 Defining components of Glaser and Strauss's version of grounded theory include: concurrent data collection and analysis, constructing analytic codes and categories from data, constant comparison between data, codes, and categories during each step of the analysis, advancing theory development during each step of data collection and analysis, memo-writing, sampling aimed at theory construction (theoretical sampling), and conducting the literature review after data analysis.93–95

Glaser and Strauss eventually went their separate ways. Strauss collaborated with Juliet Corbin and wrote Basics of Qualitative Research: Grounded Theory, Procedures and Techniques in 1990, which focused on providing methodological steps to make the use of grounded theory more concrete.91 Glaser96 criticized Strauss and Corbin's approach for being too prescriptive and forcing data and analysis into preconceived categories,97 such as the use of axial coding,96,98 which requires data to be organized according to causal conditions, context, intervening conditions, action or interaction strategies, and consequences.99

Further, both versions of grounded theory (Glaser and Strauss, and Strauss and Corbin) are set in a post-positivist paradigm,91 which has an objectivist epistemology and a critical realist ontology.100 This means that the goal of research from these perspectives is to discover truth100 or external reality91 while recognizing that research can only progress towards the discovery of the truth, given that researchers have biases that influence what is observed.100

In the 1990s, Kathy Charmaz (as well as other scholars) moved away from a post-positivist paradigm.92 In 2000, she outlined a constructivist version of grounded theory in a book chapter titled Grounded theory, objectivist and constructivist methods.101 Objectivist grounded theory (OGT) was the term Charmaz gave to versions of grounded theory rooted in the post-positivist paradigm. Charmaz believes that some features associated with grounded theory (e.g., memo writing, theoretical sampling, and coding practices) can be used across paradigms.92 However, other features, such as the timing of the literature review, differ between the objectivist and constructivist versions. In the next section, I will identify some key characteristics of CGT and highlight similarities and differences between CGT and OGT.

2.1.2 Constructivist grounded theory

The goal of any grounded theory study is to develop a theory that is "grounded" in data.92 For my research, I was interested in developing an understanding of how students used assessments to learn that was "grounded" in students' experiences.
CGT does not treat the data collected about participants’ experiences as objective information, but instead, data are seen as reconstructions of participants’ experiences.101 The product that is created from a CGT is “a picture that draws from, reassembles, and renders subjects’ lives. The product is more like a painting than a photograph.”102   2.1.2.1 The role of the researcher  The term “constructivist” in CGT refers to the subjectivity of the researcher and his or her role in the construction and interpretation of the data.92 This means that the researcher’s background and assumptions shape their interpretation of data.103 In contrast, OGT characterizes researchers as neutral observers and value-free experts.92 Though OGT acknowledges that researchers come with biases, it is believed that their biases can be neutralized through data analysis by comparing data with data, data with categories, and categories with categories.104   In CGT, the influence of the researcher’s perspective does not claim to be neutralized, but instead, researchers explain how “their standpoints, positions, situations, and interactions have influenced their analytic renderings.”104 CGT recognizes that the researcher (or “viewer”) creates the data and subsequent analysis through interaction with the participants (the “viewed”).101 Therefore, the researcher cannot be seen as separate from the data and analysis because what they see “shapes what he or she will define, measure, and analyze”.101 This perspective means 23  that construction of data is not executed by the researcher, but is constructed in a collaborative manner by both the participant and the researcher through their interactions and relationships.92,103   Because the theory generated from a CGT study depends on the researchers’ views, different researchers can construct different (but equally meaningful) understandings of the phenomenon under study. In contrast, OGT assumes that different researchers will each be led to discover the same external reality. In CGT, it is possible for different researchers to arrive at similar findings if participants have comparable experiences, and researchers bring similar questions, perspectives and concepts to analyze the data, but such alignment is not necessary and does not serve as the goal of research or a marker of credibility.101  2.1.2.2 The role of previous knowledge or sensitizing concepts CGT accepts the use of sensitizing concepts to help researchers develop initial ideas and questions to raise about the research topic.92 Sensitizing concepts refer to “background ideas that inform the overall research problem”,101 such as concepts uncovered through a literature review. Sensitizing concepts can be used at multiple stages of the research from the development of interview questions to the analysis of data.105 In contrast, in OGT, literature review is delayed until after the analysis to limit the influence of previous knowledge on the analysis.97   In CGT, if the sensitizing concepts turn out to be irrelevant to the data, they can be dismissed.92,106 Though sensitizing concepts may be useful for guiding the research process, it is important to stay open to what is in the data92 for the theory that is generated to be reflective of 24  the data in the study. 
In other words, the use of sensitizing concepts aids in theory construction by helping to “stimulate our thinking about properties or dimensions that we can then use to examine the data in front of us”,107 but sensitizing concepts should not constrain the researcher’s thinking.107

Two main sensitizing concepts were used in the research presented in this dissertation: Butler and Schnellert’s Integrative Model of Self-Regulation54 and Gielen et al.’s39 characterization of the consequences of assessment.

Gielen et al.39 characterized three stages at which learning occurs as a consequence of completing an assessment: pre-assessment effects, pure assessment effects, and post-assessment effects. Pre-assessment effects refer to the learning impact of assessment on students before the assessment. Pure assessment effects refer to the learning impact that stems from student participation in an assessment. Post-assessment effects refer to the learning impact of assessments after an assessment (i.e., effects that arise from looking back after the completion of the assessment task). This concept influenced data collection by prompting me to ask students about learning before, during and after the assessment. It also encouraged me to think about the factors that influenced each learning effect.

Butler and Schnellert’s Integrative Model of Self-Regulation54 also influenced my thinking about students’ understanding of learning through assessments. A diagram of the model is seen in Figure 1. In this model, self-regulation refers to “the ability to control thoughts and actions to achieve personal goals and respond to environmental demands.”108 This perspective suggests that learning does not happen just because an assessment (or another educational task) is present. Instead, students control their engagement and learning from an assessment to work towards their goals or personal objectives. This idea cued me to focus on what students’ personal objectives or values are and how students may use an assessment to work on these objectives.

The model also highlights students’ engagement in an educational activity as an interaction between what students bring into the activity and the characteristics of the learning environment. Therefore, how students learn in the context of an assessment might be influenced by both factors related to the student and factors related to the assessment environment. This idea cued me to consider what interactions may be present that could influence students’ learning through assessments that take place during medical school.

The model also outlines processes that students engage in to work toward their personal objectives, known as cycles of strategic action. The first step in the cycle of strategic action is task interpretation and setting personal objectives. Students then plan and enact strategies to work towards objectives. Finally, students monitor their progress toward objectives and make any necessary adjustments. Breaking down the steps that students may engage in when working towards objectives helped me to consider factors that may influence students’ learning at individual steps of the process.
2.1.2.3 Timing of data collection and analysis In both OGT and CGT, data collection and analysis are conducted at the same time.92 This process allows researchers to build theory because interesting insights from the analysis can be 26  explored in subsequent data collection and the data gathered in subsequent data collection can help advance analysis.103  2.1.2.4 Products of research CGT and OGT also differ in the product of the research (i.e., the theory that is produced). For OGT, the theory that is produced is expected to be generalizable and universal97 and aims to explain the phenomenon but also use it for prediction.101,109 In contrast, in CGT, the term theory refers to an “adequate understanding [of a phenomenon] that is bound by specific contexts and purposes”109 and is “partial, conditional, and situated in time, space, positions, action, and interactions”.104 Researchers in other substantive fields can use the hypotheses and concepts in a theory from a CGT study to help address similar research problems, but there is no expectation that the findings will be universal.101  2.1.3 Summary In this section, I outlined the paradigm that guides the research conducted for this dissertation and highlighted features of the methodology used. Further details of CGT methodology related to data collection and analysis will be described in the relevant sections that follow.  2.2 Data collection To address the research objectives, this project was organized around a single focus (to enhance understanding of factors students report as influencing their learning through OSCEs) that was examined in two linked studies separated by the assessment context within which data were 27  collected. In this section, I will describe the two contexts, participant details, method for data collection, and ethical considerations.  2.2.1 The research context CGT advocates for studies to be conducted in participants’ “natural settings”101 to help best capture participants’ constructed realities. The studies reported here were conducted in the context of the year-one curriculum of the Doctor of Medicine Undergraduate Program (MDUP) at the University of British Columbia. The MDUP is a four-year program located in the largest university in Western Canada. It has a class size of 288 students who are spread across four sites (198 of whom have Vancouver as their main site of study). More specifically, the assessment format that was used for the studies was the OSCEs offered to first-year students within this program. This assessment was chosen because of its widespread use throughout all stages of medical training and in the licensing of physicians. Further, it is a performance-based examination that is intended to reflect authentic practice better than written or other knowledge-based exams. The broad use of OSCEs may help enhance the transferability of the research findings.   This context was chosen because the program offers two OSCEs to first-year students (i.e., two similarly structured assessment moments, offered in reasonable proximity to one another, within a single curriculum). One, however, was labelled by the program as being used to fulfill a formative function (offering students mid-year feedback on their progress) whereas the other was labelled by the program as being used primarily to fulfill a summative function (determining at the end of year-one whether or not students were qualified to proceed to their second year of 28  training). 
This distinction was important because existing research suggests learning through assessments might be influenced by whether the assessment is labelled as summative or formative with extreme views indicating that “summative” exams are about measuring learning whereas “formative” exams are about encouraging learning. Thus, across two linked studies, data from both types of experiences were examined to develop a comprehensive understanding of factors influencing students’ learning through OSCEs. Prior to discussing how differences in the function and purpose of each OSCE were implemented, I will first provide an overview of the general OSCE format.  During each OSCE, students rotated through different rooms in which they were asked to perform a clinical task. At each “station”, students were given instructions on a piece of paper on the door outside the room and given two minutes to read the instructions and to think about the station. Students received a notebook and pen to take notes throughout the OSCE. After two minutes, students entered the station and had eight minutes to complete the designated clinical task. While performing the task, students were observed and assessed by a physician examiner. After completion of a station, students left the room and proceeded to the next station. During the OSCEs, there were also rest stations. These stations were the same length as any other station, but the student was not given a clinical task to perform. Students also had Post-encounter Probes (PEPs) for some stations, which involved answering questions about the preceding station.  2.2.1.1 Characteristics of the OSCEs Prior to each OSCE, students were given descriptions of what to expect during each assessment. The information was mainly conveyed to students through emailed documents (Appendix A). 29  Using these documents, I will discuss the stated purpose and unique features of the summatively-labelled and formatively-labelled OSCEs and how the labels relate to intentions for the OSCEs to fulfill a summative or formative function. For the sake of brevity, the “Formative” and “Summative” labels of the OSCEs will be used throughout this dissertation as shorthand for the function intended by the medical program.  2.2.1.1.1 Characteristics of the Formative OSCE The Year-one Formative OSCE (FOSCE) took place on December 12, 2015, at the end of the first semester of medical school (i.e., half way through the first-year). The FOSCE contained five stations based on the content taught in the first semester of the program (content that would be further elaborated on in the second semester). There were two history-taking stations and two physical examination stations and one mini Post-encounter Probe. The stated purposes of the FOSCE were to help students become familiar with the OSCE format and to prepare them for the Year-one Summative OSCE. These purposes were consistent with a formative function because they outlined an intention to help students improve in their capacity to perform well in clinical simulations. The design of the FOSCE also emphasized its intended formative function by ensuring that time for feedback was provided, thereby offering students guidance on how they could improve. Immediately after the end of each station, students were given five minutes of verbal feedback from examiners. This was intended to give students specific and direct information about their performance for their continued improvement. 
Students were told that, during the five minutes, they would have the opportunity to ask questions and seek clarification on the feedback they received. Students also received a written feedback report (including examiners’ comments) within eight weeks of the exam for the sake of more formally and 30  consistently reminding students of ways in which they should guide their further learning through the remainder of year-one.  It should be noted that some features of the written feedback report had the potential to undermine the stated intention of this assessment as a formative exercise in that they were more consistent with a summative function. Specifically, global (summary) judgements of students’ performance were included along with students’ score for each FOSCE station and the class average. While these scores were not used for explicit summative purposes, the assignment of a score and comparison to other students has the potential to imply a summative function despite the formative label.   2.2.1.1.2 Characteristics of the Summative OSCE The Year-One summative OSCE (SOSCE) took place at the end of the year-one curriculum on April 9, 2016, approximately eight months after students started medical school. Students were informed there would be eight stations based on the content of the year-one curriculum including three history-taking stations, four physical examination stations, and one post-encounter probe. The timing of this assessment was more consistent with the stated summative function because it allowed an assessment of content that was expected to have been well-learned prior to taking the exam (as opposed to content that was being learned at the time of the FOSCE). For the summative OSCE, students were informed they needed to achieve greater than or equal to 60 percent to pass the course. If students did not achieve such a mark, they would either fail the course or be required to complete a supplemental OSCE. These consequences (or stakes) associated with the SOSCE are consistent with the stated summative function because they focused the exercise on making summary judgements of students’ performance (i.e., pass or fail). 31  The reduced emphasis on a formative function was also evident in the choice to not provide students with immediate verbal feedback after each station. Instead, students only received individual score reports containing summary judgements such as ratings on each of the assessed competencies, station scores, and the class average. Differences between the FOSCE and the SOSCE are summarized in Table 1.    
Timing
- Formative OSCE: 4 months into year-one
- Summative OSCE: 8 months into year-one

Length
- Formative OSCE: 4 stations
- Summative OSCE: 8 stations

Impact on progression
- Formative OSCE: No impact on whether students pass or fail year one
- Summative OSCE: Must achieve at least 60 percent to pass year one

Feedback (verbal)
- Formative OSCE: 5 minutes of verbal feedback provided by the examiner immediately after each clinical task
- Summative OSCE: None

When written feedback is provided
- Formative OSCE: Advertised as coming six weeks after the FOSCE, but actually received approximately eight weeks after
- Summative OSCE: Three weeks after the SOSCE

Feedback (written)
- Formative OSCE: Station scores (students’ score, standard deviation and class mean); comments about competencies (e.g., listening skills and communication) based on relevant examiner ratings (e.g., borderline satisfactory for logical flow, introduces self); written narrative feedback from examiners
- Summative OSCE: Station scores (students’ score, standard deviation and class mean, pass/fail and safety issues); ratings for competencies (e.g., attitude, questioning skills), rated between unsatisfactory and exceeds expectations, aggregated across stations to generate a student score, class mean and standard deviation

Table 1: Characteristics of the FOSCE and SOSCE

2.2.2 Participants
The research participants were first-year students in the MDUP at the University of British Columbia. All participants completed at least three years of an undergraduate degree prior to entering the MDUP. The sampling strategy was criterion sampling, which means selecting cases that meet a predetermined criterion of importance.110 In the context selected, only year-one medical students who participated in the FOSCE and SOSCE could participate in the studies. This criterion excluded students who were in other years of training in the medical school and any year-one students who did not participate in the formative and summative OSCEs. Participation in both OSCEs was important because students were asked to compare their experiences in the second study. More senior students were thought to be too distant from the FOSCE and SOSCE experiences to be able to speak to their learning through the first-year OSCEs in a manner that reflected their lived experience at the time of the assessment.

2.2.2.1 Participant recruitment
For both studies, participants were recruited through an email invitation sent by the MDUP Office of Student Affairs to a year-one student listserv. Participants were asked to contact the researchers by responding to the recruitment email.

For interviews about the SOSCE, I also contacted participants who were interviewed about the FOSCE experience to invite them to participate in interviews about the SOSCE. Students were informed in each recruitment email that they would receive a gift card in the amount of 10 Canadian Dollars for participating in the study.

2.2.2.2 Sample size
I drew on Malterud et al.’s111 concept of information power when considering the sample size for the studies. Information power is a concept that suggests that the more information a sample holds, the fewer participants will be required. Five factors influence the information power of a sample: the aim of the study, sample specificity, use of established theory, quality of dialogue, and analytic strategy.111 In the following sections, I will explain each one of these factors as it relates to the studies.

Study aim: The sample size is influenced by whether the study has a narrow or broad aim, where a study with a narrower aim requires a smaller number of participants than a study with a broader aim.
The research presented here had a relatively narrow aim because it focused on one level of medical training in one institution and one assessment format.

Established theory: Having some theoretical perspective helps to enhance the information power because it can help explain relationships between different components of the empirical data in coherent ways.111 Sensitizing concepts were used as established theories to inform the research. However, because this research follows the principles of CGT, I remained open-minded and made an effort not to dismiss findings that did not fit the theoretical perspective.92

Sample specificity: This term refers to the specificity of experiences, knowledge or properties of the participants. A less extensive sample is needed when participants hold characteristics that are specific to the study than when they do not.111 The studies had high sample specificity because the selection criteria ensured that all participants experienced the phenomenon under study in a consistent manner.

Quality of dialogue: Background knowledge and experience in the field of study help to strengthen the quality of dialogue between researchers and participants because they allow both parties to directly approach the topic of interest.111 I was the sole interviewer in the study and had background knowledge and experience with similar OSCEs because I am a student in the same program. Being a medical student may also have helped to enhance the quality of data because it may have helped me to build rapport with the participants better than if I had been in a position that participants might have considered authoritative (e.g., an instructor role). A full discussion about the impact of my position relative to the participants is covered in the section on researcher position and reflexivity.

Analysis strategy: The current research was exploratory, with the aim being to develop a model of factors that influence students’ learning through OSCEs. To broaden the range of factors captured, multiple students were interviewed about each assessment experience and two contexts (the FOSCE and the SOSCE) were chosen.

With consideration of Malterud’s criteria, and of the sample sizes required to satisfy them in two studies8,112 with similar aims (e.g., exploratory study), similar study populations (e.g., medical students) and similar contexts (e.g., learning in a specific context), a sample size of 15 to 20 participants was anticipated to be sufficient. This estimate was continuously re-evaluated, however, as the study and analysis progressed.

For the FOSCE, 21 participants were interviewed. For the SOSCE, there were some challenges recruiting participants because that study took place during students’ summer vacation. In total, 15 participants were interviewed, of whom 12 also participated in interviews about the FOSCE (i.e., three were new participants).

For the FOSCE, participants’ ages ranged from 21 to 30 years with a mean of 23.7. There were 9 male and 12 female participants. For the SOSCE, participants’ ages ranged from 21 to 30 years with a mean of 23.6. There were 5 male and 10 female participants.

2.2.3 Timeframe for data collection
The timeframe for data collection was two to four months after each assessment. This timeframe was selected for several reasons.
First, because the scope of the research objective includes how assessments may influence learning after an assessment, conducting interviews immediately after the assessment took place may have limited exploration of this aspect of students’ experience. Second, to authentically represent the assessment experience, it was important to include all components of the assessment experience, which included the written feedback sent to students three to eight weeks after each OSCE. Therefore, the interviews were delayed until students had received all documents related to the assessment. Based on discussions with my supervisors and committee, I closed the data collection window after four months because of concerns about students’ ability to recall their experiences after long delays. In addition, the timeframe for data collection was chosen to accommodate students’ academic schedules. Figures 2 and 3 show the timeframe for data collection for each assessment.

Figure 2: Timeframe for data collection for the FOSCE

Figure 3: Timeframe for data collection for the SOSCE

2.2.4 Method
The term “method” is used “to describe specific data collecting and analysis techniques.”88 The type of data used in the studies was data generated from individual intensive interviewing with research participants. This type of interviewing involves “gently guided one-sided conversation that explores a person’s substantial experience with the research topic.”92 Intensive interviewing fits well with CGT because it allows for open-ended exploration of individual participant perspectives, while at the same time allowing the opportunity for researchers to direct the interview.92 Further, intensive interviewing allows for interactions between the researcher and participants, which is important in CGT for researchers to understand the multiple views of reality that participants may construct.82 Intensive interviewing is also suitable for CGT because it allows for in-depth exploration of an area in which participants have first-hand experience.92

Interviews were scheduled to be one hour in length but ranged from 34 minutes to one hour and 19 minutes. To accommodate demands on students’ schedules, they were given the option to participate in person, via Skype or via telephone. For interviews about the FOSCE, most of the interviews were conducted in person. For interviews about the SOSCE, most of the interviews were conducted via Skype or telephone because the interviews took place during students’ summer vacation period.

For the FOSCE, the first four interviews took place prior to students receiving their written feedback report because of delays in the medical program’s distribution of the report. Therefore, a follow-up interview was conducted with three of these participants (one participant declined further participation) after they received the written feedback report.

2.2.4.1 Interview procedure
Semi-structured interviews were conducted with the support of an interview guide (Appendix B). To give participants control over the direction of the interview, I did not necessarily ask every participant all of the questions in the interview guide, and the questions were not necessarily asked in a sequential order.

The interview began and ended with open-ended questions to invite participants to engage in a discussion of their experiences.92 The middle of the interview guide contained questions about areas of theoretical interest informed by sensitizing concepts and preliminary data analysis.
After the initial draft of the interview guide was developed, I revised the interview guide based on feedback provided by my supervisors. Prior to the first participant interview, I further refined the interview questions through three practice interviews with year-two and year-four medical students from the same school as the participants. Students were asked to draw on their experiences in their most recent OSCE. I also asked the interviewees to provide feedback on my interview skills.  Generally speaking, similar interview guides were used for the interviews about the formative and summative OSCE experiences. For the SOSCE, students were also asked to compare their experiences learning through the SOSCE and the FOSCE. In addition, because data collection for the SOSCE occurred during data analysis for the FOSCE, data collection and analysis for the SOSCE experience was influenced by the FOSCE study.   2.2.4.2 Data management  For CGT, transcription of recorded interviews is recommended to preserve the details of the interview.92 Interviews were audio-recorded by me and then transcribed by a third party transcription service. Transcripts were subsequently reviewed by me for accuracy through comparison with the audio recording. The review process also helped me to become more intimately familiar with the data. Furthermore, I chose transcription of recorded interviews to 39  allow me to focus on listening to the participant during the interview for the sake of identifying potential leads to pursue.   NVivo Pro 11 and 12, Microsoft Word, and the diagramming software Scapple were used to document the analysis. Nvivo Pro allowed me to maintain the connections between codes and categories with the data and facilitated constant comparison between data, codes and categories. Initial memos for categories were created in Nvivo Pro and subsequent sorting, and integration of memos were completed using Microsoft Word and Scapple.  2.2.5 Ethics Institutional ethics approval was obtained through the Behavioural Research Ethics Board at the University of British Columbia. Approval to recruit medical students as participants was obtained from the Research Access Committee in the Faculty of Medicine at the university. Participant consent was obtained prior to study participation. Because I was a member of the Research Access Committee (but was not involved in the review of my research), I was sensitive to a few unique areas of ethical consideration when requesting medical student participation. First, to ensure that students did not interpret participation in this study as part of their curriculum, participants were informed that I did not work for the MDUP, their participation did not influence their marks, and that they could withdraw from the study at any time without consequences to their academic performance. Second, to avoid conflicting with students’ academic demands, I built flexibility into the interview scheduling process by offering students evening and weekend interview slots and offering both in-person and remote options for the interviews.  40   2.3 Data analysis  During the analytic process, the sensitizing concepts were explicitly used to scaffold and organize the findings of the current research when they aided in theory development. 
However, to limit sensitizing concepts from constraining my thinking, line-by-line coding (defined below) was completed for all the interview transcripts and components of the sensitizing concepts were not used if they were not reflective of participants’ narratives.   In the following section, I will discuss the steps involved in data analysis in CGT. Note that these processes do not necessarily occur linearly and can move back and forth between steps.92   Prior to discussing the analytic steps in CGT, I am going to note some overarching concepts that are key to the analytic process. One of the methods of documentation that is present throughout the data analysis process is memo-writing. Memos refer to informal notes that are written throughout the research process.92 One use of memo writing is to document the outcome of the constant comparison method, another concept that is important to the analytic process.92 The constant comparison method means “that every part of the data (i.e., emerging codes, categories, properties, and dimensions as well as different parts of the data) are constantly compared with all other parts of the data to explore variations, similarities and differences in the data.”113   In addition, memo-writing was used as a strategy to enable reflection after each interview. Memos documented feelings and thoughts that came out of each interview and noted any new insights that may need to be followed-up on in subsequent interviews. 41   Initial coding: The analytic process in CGT begins with initial coding. To accomplish this, I used a combination of line-by-line coding and incident-by-incident coding. Line-by-line coding refers to naming each line of written data.92,94 Line-by-line coding helps researchers see undetected patterns that may be missed during generalized observations of the data.92 In addition, line-by-line coding helps researchers stay open to the data and to identify implicit concerns and explicit statements in in-depth interview data.92 Incident-by-incident coding refers to comparing events across different participants. This can include, for example, comparing experiences during the verbal feedback session across different participants. This approach helps to identify properties of an emerging concept.92 Initial coding was conducted for all interviews to help me become intimately familiar with the data and to encourage me to stay close to the data.  Focused coding: Focused coding refers to using the most significant or frequent codes from the initial coding to organize other data. The purpose of focused coding is to develop more theoretical or conceptual codes than typically arise during initial coding.    Theoretical coding: The function of theoretical codes is to help establish relationships between focused codes and to help researchers to develop an analytic story about the data.92 In CGT, the role of previous knowledge in the development of theoretical codes is unclear.92 For the research reported here, sensitizing concepts (an example of previous knowledge) were used to develop the analytic story for the data. For example, components of Butler and Schnellert’s Integrative Model of Self-Regulation,54 such as the interaction between students and environmental factors, and the processes in the cycle of strategic action influenced the structure of the theory. The 42  sensitizing concept of Gielen et al.’s39 characterization of the consequences of assessment39 helped me organize students’ understanding of learning in OSCEs in a temporal manner.  
Theoretical sampling: Theoretical sampling refers to the collection of additional data to aid the development of the theory by refining and delineating the properties of a given theoretical category.92 Theoretical sampling relies on abductive reasoning, which refers to the process of drawing inferences to account for findings, and then returning to the data to determine whether the inferences are supported by the data.92   Given the restricted timeframe for interviewing participants due to students' schedules and memory decay, it was not possible to conduct the full analysis prior to returning to the field to collect more data. Therefore, theoretical sampling was conducted by modifying the interview guide as the interviews progressed.114 Questions were modified based on insight gained during the interview as well as through written reflections after each interview. Theoretical sampling was also conducted during an interview by asking further questions when a participant mentioned an aspect of their experience that was of theoretical interest.92 In addition, concurrent comparison between data and emerging categories was used as another strategy to guide theoretical sampling.   Theoretical sorting, diagramming and integrating memos Sorting, diagramming and integrating memos are interrelated processes and can occur simultaneously. Diagramming, or making visual representations of the analysis, was a key strategy that helped me sort and integrate categories. Diagramming was used to form my analysis 43  as well as to report it.92 I had written many memos in the analytic process and found it difficult to sort and integrate them because it was difficult to remember all of them and see the connections between the emerging categories. Because the sensitizing concepts cued me to temporal aspects of learning through the assessment, as well as interactions between student and environment factors influencing each learning effect, diagramming helped me to consider these multiple dimensions together. Diagramming also helped me to sort and integrate the factors that were influencing students’ learning through an assessment and the learning effects of the assessments that students perceived.  2.4 Rigour of the research process While conducting these studies, reflexivity, triangulation and theoretical sufficiency were concepts that were used to enhance the rigour of the research process.   2.4.1 Researcher position and reflexivity  Reflexivity refers to “an attitude of attending systematically to the context of knowledge construction, especially to the effect of the researcher, at every step of the research process.”115 In CGT, reflexivity is important because researchers are not seen as neutral observers. Therefore, “explicating the place from which the researcher starts provokes a need to reflect upon his or her underlying assumptions and heightens his or her awareness of listening to and analyzing participants’ stories as openly as possible. As well, it provides the reader with a sense of the analytical lenses through which the researcher gazes at the data.”116  44  In this section, I will explicate the experiences that have influenced my selection of research topics and the methods of investigation. I will then discuss strategies that helped me maintain reflexivity throughout the research process.   My background and experiences as a novice medical student led me to be interested in the topic of learning through assessments in the same population. 
In my experiences, I did not see a clear separation between my learning through assessments based on whether it was labelled by the program for formative use or summative use. Specifically, I had one experience in the second year of medical school where I had a failing mark on one OSCE station that I did not expect and could not understand why I had received that mark. It was not until I sought out an explanation from the course director that I understood why I received that mark. At the time, I was motivated to learn about what I did wrong on the station because it would influence the quality of care I could provide to patients. I was not concerned about my progression in the program because I could fail multiple stations with no consequences to my progression. As a student, I also believe that educators have a responsibility to support students’ learning and that one way of doing so is through supporting learning through assessments.  In addition, my exposure to the literature also drove my interest in understanding student perspectives of the commonly expressed adage that “assessment drives learning”. For example, the sensitizing concept of self-regulation led me to focus on student perspectives because it is students’ personal objectives and their understanding of how to work towards personal objectives that shape their engagement in assessments. The sensitizing concept of assessment consequences helped to broaden my understanding of learning through an assessment to include learning that 45  takes place before, during and after the assessment. The sensitizing concepts also led me to see limitations in existing research that framed students’ difficulty with learning through assessments as attributable to deficits in the feedback from the assessments. I saw these studies as limited because learning effects might occur at multiple points in time. Also, the availability of feedback might not be influential if students did not want to learn from an assessment or did not depend on external feedback to learn.  My past experiences and exposure to the literature also led me to be interested in what characteristics of assessments students understood to influence their learning, as opposed to understanding the perspectives of the educators delivering the assessments.   Regarding data collection, I took many steps to ensure that I was capturing students’ experiences in a comprehensive manner. Although interviews were guided by an interview guide influenced by sensitizing concepts, general orienting questions inviting students to describe their experiences helped give students the flexibility to tell their story in the manner they desired. In addition, the decision to audio record and transcribe interviews instead of relying on notes taken during the interviews or my recollections of the interviews also helped to ensure I had access to students’ experiences at any time during the analysis. I also attended to and tried to minimize power imbalances between the participants and myself during the data collection process.116 Some strategies adopted to minimize power imbalances included letting the participants select the location and time of the interview and allowing participants to have control of the conversation.   46  I also analyzed my position relative to the participants and considered the impact of my position on the quality of data collected and my interpretation of results. My position relative to the participant population was between an insider and outsider. 
Therefore, I was an “insider-outsider”.117 In some ways, I considered myself an insider: a member of the same group as the participants,117,118 someone who shares a similar identity, language, and experiences.117,119 I was an insider because I am also a student in the MDUP and have had experience with OSCEs similar to the ones under study. Being perceived as an insider can be advantageous because it “allows researchers more rapid and more complete acceptance by their participants. Therefore, participants are typically more open with researchers so that there may be a greater depth to the data gathered.”117 Also, as previously mentioned, my status as a medical student may have helped enhance the quality of data collected because I was not in a position of authority.

There were also multiple reasons, however, why I considered myself an outsider relative to the participants. First, although I was registered as a year-two student in the MDUP at the time the studies were conducted, I had not been taking any courses in the MDUP curriculum since 2014. Therefore, I was not immersed in the MDUP curriculum during the research. Another reason I was an outsider was that the curriculum the participants were in differed in some respects from the curriculum I was in as a year-one medical student. Specific to the phenomenon under study, the participants’ curriculum taught certain clinical skills earlier than mine did. Also, their formative OSCE was more extensive in size and content coverage (four stations covering physical examination and history taking) compared to the formative OSCE that I participated in (two stations covering only communication skills) when I was a year-one medical student.

Not having had first-hand experience with the curriculum for several years may have negated one of the disadvantages of being an insider, namely that the researcher might have difficulty separating their own experience from the experiences of the participants.117 Throughout the process of data collection and analysis, I did not find myself actively thinking about how my past experiences compared to the experiences of the participants; instead, I focused on comparisons between participants. Analytic processes like initial coding and constant comparison helped me immerse myself and stay grounded in participants’ experiences. Memo-writing after each interview helped me attend to how information from a specific interview compared to my previous understanding. In addition, because the interviews were transcribed, my supervisors were able to code the first two transcripts, and comparing our coding results helped me to see whether my initial coding was being limited by my past experiences. My supervisors also reviewed my coding, offered alternative interpretations and questioned my interpretations throughout subsequent stages of analysis/coding to help me remain reflexive.

My exploration of the literature and the use of sensitizing concepts also placed me in an insider-outsider position because I had a better understanding of learning and of students’ experiences than a complete outsider would have. In addition, my position was constantly evolving as I read more literature and interacted with more participants throughout the research process. Again, the aforementioned analytic processes and memo-writing helped me stay grounded in the data.
Regardless of my position relative to the participants, I shared Dwyer and Buckle’s belief that when interacting with participants, it is important to be “open, authentic, honest, deeply interested in the experience of one’s research participants, and committed to accurately and adequately representing their experience.”117

2.4.2 Triangulation
Triangulation is a strategy used in research to capture “the richness and diversity of perspectives on a phenomenon.”120 Using Denzin’s121 categorization of the types of triangulation, two types of triangulation were built into this research: data triangulation and investigator triangulation. Data triangulation works under the assumption that “understanding a social phenomenon requires its examination under a variety of conditions.”122 Data triangulation is operationalized by using different sources of data (e.g., across different participants, time and space) in the same method.121,122 In the research reported here, data triangulation occurred through interviewing multiple participants in each type of assessment and interviewing participants in two OSCEs at two points in time. The two-month window for data collection for each study also allowed for variation in time within each study. Having multiple participants was particularly helpful during data collection and analysis because contrasts in students’ experiences helped direct me to points of exploration in subsequent interviews and alerted me to potential factors that influenced variation in students’ experiences.

Investigator triangulation, which refers to the participation of multiple individuals in the research process,122 was the second type of triangulation built into the research. In CGT, the goal of investigator triangulation in the data analysis process is not to accept only those codes that multiple team members generate when they analyze the data. Instead, variation among team members is interpreted as reflecting differences in the perspectives, social locations, and personal and professional experiences of the team members.92 Any differences that arose were scrutinized as potential new insights instead of being dismissed.92 Investigator triangulation was operationalized in multiple ways. My supervisors coded transcripts, reviewed my coding and memos, and engaged with me in ongoing discussions about my interpretations. In addition, my supervisory committee reviewed my interpretations and offered their perspectives throughout the data collection, analysis and write-up of the research.

2.4.3 Theoretical saturation and sufficiency
In CGT, theoretical saturation is said to be achieved “when new data does not create theoretical insights or generate properties or new properties of categories.”92 In line with other qualitative researchers,120,123–125 I do not think of the theory constructed from this research as a final product. Instead, it is an initial attempt to start understanding a phenomenon because there is always the possibility of learning something new or different if more data are available. To reflect this belief, I strived for theoretical sufficiency instead of theoretical saturation. Theoretical sufficiency refers to a stage at which new data seem to fit the categories without requiring further changes to the categories.123 The analytic strategies of CGT, such as constant comparison, and review of my coding by my supervisors helped me to gauge my progress towards theoretical sufficiency in the studies.
50  Chapter 3: FOSCE study results As a reminder, the research objective, to “enhance understanding of the factors first-year medical students report as influencing their learning through OSCEs,” was addressed in this dissertation through completion of two linked studies of two assessment contexts. Results derived from the first context, an OSCE labelled by the educational program as formative, will be presented in this chapter.   It is also important to remember that Gielen et al.’s39 model of the assessment effects taking place before, during, and after an assessment as well as Butler and Schnellert’s Integrative Model of Self-Regulation54 were used as sensitizing concepts in this study. Central to Butler and Schnellert’s model is students engaging in cycles of strategic action aimed at working towards the personal objectives they have. Steps of the cycle include interpreting tasks, planning, enacting strategies, monitoring and adjusting. In this study, task interpretation refers to how students deciphered the requirements of the FOSCE126 and was reflected in the personal objectives they set. Students then planned and attempted to enact strategies for achieving their personal objectives. Some students’ strategies involved monitoring their FOSCE performance and adjusting their learning approach as well as their understanding of the tasks they needed to complete. According to Butler and Schnellert’s model, how students engage in strategic cycles to work towards their personal objectives depends on an interaction between what students bring to the educational task (i.e., student factors) and characteristics of the learning environment. In this study, the learning environment included the FOSCE design itself and the broader medical school learning environment outside of the FOSCE. This idea, combined with abductive analysis of our findings, using principles of constructivist grounded theory, led to the generation of the 51  model illustrated in Figure 4. The results, therefore, will be organized and discussed as a function of the components represented in that model.   The overarching structure of the model created from the current research is influenced by the two previously discussed sensitizing concepts. Gielen et al.’s39 model of the assessment effects is not visually represented on the model, but the student factors, assessment design factors, and learning environment factors reflect influences on students’ learning within all three stages of Gielen et al.’s assessment effects. Butler and Schnellert’s model of self-regulation54 is reflected in the model in terms of (a) how students engage in cycles of strategic action to work towards personal objectives, (b) the influence of contextual factors (e.g., the assessment design and the learning environment), and (c) “what students bring” (e.g., student factors) to the context of their learning through OSCEs. The novel aspects of the model (based on the data analysis conducted) are mainly represented in the components underlying the headings of student factors, the assessment design, and the learning environment.  Although in my report of findings the components of the model will be discussed in a sequential manner, that does not mean earlier steps in the strategic learning cycle are no longer relevant at a later point in the cycle. For example, a student’s interpretation of the task is still in place when they enact strategies and task interpretation can be changed as a result of reflection in action during participation in the activity. 
Also, while presentation of the study results will describe a single cycle of strategic action to illustrate the factors that influenced the extent to which students felt able to work towards personal objectives at individual stages of the cycle, within a single FOSCE, students engaged in many cycles of learning to address their personal objectives 52  and their engagement did not necessarily result in completion of all steps of a given cycle. As findings are presented, relevant components of Figure 4 will be highlighted using boxes with dotted lines.  Figure 4: Factors influencing first-year medical students’ regulation of learning in a FOSCE  3.1 Students’ task interpretation and personal objectives for the FOSCE  Figure 5: Students’ task interpretation and personal objectives  53  The first step in the cycle of strategic action is task interpretation. How students interpreted the task of completing the FOSCE was reflected in the personal objectives they set (Figure 5). Personal objectives refer to aims students are interested in trying to work towards or accomplish. They were central in guiding students’ engagement in the FOSCE by, for example, influencing the strategies they planned or chose to use to address their objectives. In this study, students did not always explicitly state their personal objectives as guiding their engagement. In some cases, I inferred students’ personal objectives from their reports of thoughts and actions that occurred in the FOSCE.  Students’ personal objectives (and thus, their interpretation of what was meant to be accomplished during the task of FOSCE completion) were influenced by their values, interests and their understanding of the context in which their objectives were located. As a result, these factors influenced students’ interpretation of the extent to which the FOSCE could help them work towards personal objectives they held for different (non-FOSCE) contexts such as actual clinical work.   In the following section, I will first describe the personal objectives that students held in terms of their types and the context or setting in which they were set. I will then describe how students’ understanding influenced their interpretation of which objectives were relevant to and addressable through the FOSCE.   54  3.1.1 Types of personal objectives In this study, students had two main types of personal objectives: those oriented towards achieving their desired level of performance (e.g., to be prepared for clerkship) and those oriented towards achieving a positive emotional state (e.g., to feel less nervous). These objectives led students to interpret and engage in the FOSCE in ways they believed could improve their emotional state or performance. A student could pursue either or both types of personal objectives depending on what they valued, but it was also possible that the two types of objective created conflicts for one another.   3.1.1.1 Personal objectives oriented towards improving performance or working towards achieving a desired level of performance  Regarding improving performance, students showed variation in the desired level of performance they wanted to accomplish when faced with the FOSCE (response to assessment). Some students’ desired level of performance was expressed as being relative to their peers. “…Part of my goals is doing well on an OSCE… I want to be good on OSCE and I want to outperform people on the OSCE.” (P10). 
This quote illustrates, right from the outset, that the program’s intentions and students’ objectives do not necessarily align, as this student appears concerned with the summative level of their performance even though the OSCE was created to serve a formative function.

Other students’ desired level of performance was more criterion-referenced, such as meeting the expectations for their level of training. “Just because from the first [formative] OSCE, the feedback I got they said I was doing fine, and then basically like I’m at where I should be for a first-year [student]. So, yeah, it didn’t really change how I practice.” (P16).

A third form of performance improvement objective arose when students spoke more generally about working towards being prepared to perform in a specific context (e.g., clerkship). This form is more consistent with a formative function than are the first two types of performance improvement objectives because it is focused on improving performance rather than on achieving a particular standard (besting peers or meeting a particular performance criterion). “I feel like I’m studying because I’m preparing myself for actual clerkship years.” (P1).

3.1.1.2 Personal objectives oriented towards having a positive emotional state or improving one’s emotional state
Butler and Schnellert’s Integrative Model of Self-Regulation54 highlights the role of students’ emotional state in influencing their engagement in cycles of strategic action. Emotion refers to one’s “affective responses when presented with or engaged in an activity.”54 In this study, students had personal objectives of wanting to maintain or improve a positive emotional state (e.g., feeling confident) across multiple contexts (e.g., within OSCEs and in the clinical work context).

When performing clinical skills in each context, students may have experienced negative emotional states like lack of confidence, stress, or nervousness about performing tasks because they had limited experience performing the skills. For example, some students experienced negative emotions about the FOSCE because they had never participated in an OSCE before. “So, you know, a lot of us were really, really, really stressed because we’d never done any questions before, we didn’t know what type of questions they were going to ask us.” (P2).

Students wanted to improve or to maintain a positive emotional state for multiple reasons, including inherently valuing a positive emotional state when performing skills and valuing a positive emotional state as a strategy for enabling them to work towards their desired level of performance.

As an example of the first reason, one student prepared for the FOSCE to “make me feel confident in what I was doing [during the FOSCE], so that it wasn’t going to be a really stressful experience…” (P5), and another prepared for the FOSCE to “…not to embarrass ourselves.” (P9). Similarly, some students expressed wanting to maintain a positive emotional state because they perceived that a negative emotional state would prevent them from performing in a manner that would represent their skills. For example, one student perceived that negative emotions like nervousness led them to forget steps they would not have forgotten otherwise. “…when I’m nervous I tend to say things that…may not be as necessary or I may not remember what I– like I go blank.” (P2).

The objective of improving or maintaining a positive emotional state for the sake of performance improvement was not limited to performance on the FOSCE.
For example, experiencing the FOSCE helped one student become less intimidated about the need to sit these sorts of exams in higher-stakes contexts. “I was just thinking about even though I was like, a first-year medical student, thinking about writing like, the Canadian Board Exams and this huge OSCE, and going into that situation it seems like a little intimidating to me, so after doing it [the FOSCE]…I was like oh, this isn’t that bad.” (P15).

Others indicated that the FOSCE could build their confidence for the end-of-year-one OSCE; for one student, this was an important factor in enabling her to feel she could do well.

[The FOSCE] was really reinforcing because I think it’s rare for you to get feedback from someone who is in a position where they know what they’re doing and give you such positive feedback, like “you did really, really well like I was really impressed with this, I was really impressed with that. You know, you summarized that so nicely, you’re so comprehensive”. So, now when I go into my next OSCE I might have more of a confidence boost… (P2).

The mechanism through which this could take place was perceived, in part, to be increased familiarity with the types of encounters students might experience in OSCEs.

Ultimately, I felt the purpose [of the FOSCE] was to not um, to get as familiar with the system so that, come April, when we do it [the SOSCE], we’re not freaking out, I guess, or not - And also, I guess that allows us to put, like, if we get out some of the jitters, then we do better, or present ourselves better, calm us down a bit. (P9).

Finally, it is noteworthy that the clinical workplace was among the other contexts to which positive emotions arising from the FOSCE could extend. For example, one student indicated that the feedback he received from the FOSCE boosted his confidence and led him to feel more comfortable taking on more responsibility in clinical workplace settings.

Yeah, it definitely makes me more confident, which I suppose is part of that. I think that then you’re more willing to try other things. So if you got a good mark that says, “Well you did your resp[iratory] exam” great, then you know maybe next time at family practice, you don’t just do the resp[iratory] exam that you learned, you do maybe the resp[iratory] exam that you learned and on a different patient and different time, you can try something else or you can explore a little bit more because you know you’re competent enough to do that. You’re not going to either embarrass yourself, you’re not going to do something wrong or terrible. You can be confident in your skills. (P10).

3.1.2 Students setting personal objectives of relevance to a variety of contexts

As demonstrated in the examples from the previous section, students expressed wanting to improve their performance or emotional state across a variety of contexts including the FOSCE at hand, future OSCEs, and clinical work contexts. Again, individual students varied in where their priorities appeared to lie and did not necessarily have personal objectives focused upon each context.

The extent to which students could use the FOSCE to address personal objectives set in the context of clinical work and future OSCEs depended on whether they perceived a relationship between the FOSCE and those other contexts.
Interactions between students’ understanding of the contexts within which their personal objectives were set (a student factor) and the characteristics of the content assessed (an assessment design factor) shaped whether students interpreted a relationship between the FOSCE and the other contexts, and thus, whether the FOSCE was an opportunity to work on these objectives. That is, variation in students’ understanding of the FOSCE and other contexts led to differences in the type of personal objectives students believed could be addressed through the FOSCE. Consistent with Butler and Schnellert’s model,54 students’ understanding of the learning required to perform well in different contexts and, thus, their perceptions of relevance of the FOSCE, were not necessarily stable as they shifted in response to their experience during and after the FOSCE. These variations in students’ understanding will be addressed in the following two sections.  59  3.1.2.1 Variation in students’ understanding of the clinical work context  Variation in students’ understanding of clinical work led to variation in the extent to which the FOSCE was perceived as relevant (i.e., could be used to work towards personal objectives they held for themselves in the clinical work context). Sources of variation included students’ understanding of the clinical workplace itself and of its clinical skills performance expectations. When identifying differences between the FOSCE and clinical work, students identified differences in specific features between the two contexts. Therefore, it was possible for students to perceive some FOSCE elements as relevant to the clinical work context and others as not relevant. Students’ understanding of clinical work was dynamic and could be modified by new information.  Regarding features of the clinical workplace, students showed variation in perceptions of the environmental conditions under which they would be expected to work. For example, one student felt that the time-pressured situation she experienced in the FOSCE was similar to the time-pressured situations in which she would be performing future clinical work. Therefore, performing skills under time pressure for the FOSCE allowed her to work towards personal objectives she held for the clinical work context. …It's quite rare to have a scenario where you can do those [physical] exams in a somewhat pressured environment, like there's a time limit…I think knowing how to do things well, while you’re under pressure is going to be very important for the future…in the real world where when you’re actually practicing on the wards or going through clerkship, different rotations. And you might have to do something that you know well but because you’re under so much pressure you might do a bad job of it because you are flustered and there are things that you miss when you’re flustered. (P2).   In contrast, another student did not have time pressure as part of their understanding of the features of the clinical work context. As a result, when the student had trouble finishing a station 60  within the time limit of the FOSCE, he did not feel that addressing this deficit was relevant to the clinical work context because “…if I saw that patient in clinic I’d take more time” (P17). Therefore, this student’s understanding of the clinical work context limited their use of the FOSCE to address personal objectives of relevance to the clinical work.   
Students’ understanding of a particular context was not stable and when it was modified in response to new information it also modified the extent to which students felt they could use the FOSCE to address personal objectives in the clinical work context. For example, one student’s experiences before the FOSCE led her to perceive the FOSCE-based expectation of performing complete physical examinations for a body system as different from the expectations in clinical workplaces. This difference limited the extent to which the student could use the FOSCE to address personal objectives in the clinical workplace setting. However, after the FOSCE, when she experienced how the use of complete physical examinations contributed to positive patient outcomes in an actual clinical workplace, she realized complete physical examinations were part of clinical work. The student’s modified understanding about the performance expectations in the clinical workplace led her to perceive the performance expectation of complete physical examinations as relevant to both the FOSCE and workplace. As a result, she reinterpreted OSCEs as a possible opportunity to address personal objectives in the clinical workplace context. The following quote demonstrates the student’s shift in her understanding of the relevance of complete physical exams in response to clinical interactions after the FOSCE.  At family practice [post-FOSCE]…the doctor was really old so he actually didn’t do a few of the things you actually need to do in a cranial nerve exam but we had been taught that, as students, we did it. Through the extra test we did, we found out she had a cranial nerve deficit and we sent her to the ER right away…In the past, I didn’t realize that. I didn’t realize that if you don’t do everything you really might miss something. I think it was just that one experience that was like, “Oh wow, what we are learning actually [complete 61  physical exams] carries so much weight”…I think it’s really important to focus on clinical skills and know how to do it and do the whole thing. And I guess, yes, that is probably reflected in OSCE because that’s clinical skills. (P3).   3.1.2.2 Variation in students’ understanding of OSCE-style assessments  Variation in students’ understanding of OSCEs themselves also influenced their interpretation of the personal objectives they could address through the FOSCE. Influential aspects of their understanding included the role of assessment strategy when completing OSCEs and their understanding of future OSCEs. Students’ understanding was again modifiable in response to new information.  Through previously encountered assessments, some students developed an understanding that part of performing well on an assessment required an understanding of assessment strategy (skill or technique implemented for the sake of performing well on a specific type of assessment). The perceived need for assessment strategy in the FOSCE and other OSCEs meant that how well one does is not believed to be solely dependent on skill level but is also dependent on one’s understanding of how to perform well given the particularities of the assessment. Other terms that students used to describe assessment strategy included “testmanship”, “gamesmanship”, “assessment skill” or “assessment technique”. For example, one student noted that “There’s a technique you can use to answer a multiple-choice question or a written question well. 
And even if your knowledge is the same between two people but one person knows the technique of writing an exam, they can do well on that exam. They’ll do better than the person who doesn’t.” (P10).  62  Despite the majority of students not having participated in OSCE assessments before medical school, some had assessment strategy as part of their understanding of OSCEs. “You can prepare for something, if you don’t know how the test [OSCE] is being asked or questions are being asked, or how it’s run, you can only do so well.” (P11).   Having assessment strategy as part of their understanding of OSCE-style assessments had several implications for students’ interpretation of the personal objectives they could work on through the FOSCE to enable fulfillment of their personal objectives. It led students to interpret differences between the FOSCE and clinical work as part of assessment strategy and, thus, led them to prioritize working within the FOSCE context towards personal objectives of relevance to future OSCEs (e.g., gaining a better understanding of OSCE assessment strategy). Having assessment strategy as part of their understanding of OSCEs, however, limited the extent to which students felt able to use the FOSCE to address personal objectives relevant to clinical work because the strategies used to succeed on the FOSCE might not be applicable or relevant.  For example, one student believed one component of OSCE assessment strategy was to verbalize the steps in their performance and she wanted to clarify whether she was verbalizing appropriately throughout the FOSCE. However, seeing verbalizing as a component of assessment strategy limited the extent to which the student could use the FOSCE to address objectives relevant to the clinical work context because any clarification about verbalizing would only be relevant in the context of future OSCEs.  I was really curious because we talked a lot about what you need to verbalize and how much you need to verbalize what you’re doing. I was curious to know what...in the physical exam stations, I was curious to know what the observers or examiners thought about that and how I was doing with that. It was something that...we talked about in clinical skills a 63  lot, but we hadn't actually practiced a lot. So it was still feeling, kind of, weird and I still wasn’t sure. (P5).  In contrast, some students did not have assessment strategy as part of their understanding of the FOSCE (i.e., they did not anticipate major differences between OSCE assessments and the clinical work context). As a result, these students performed on the FOSCE in a similar way as they did in the clinical workplace. The perceived similarities between the two contexts also meant that students perceived they could use the FOSCE to address learning objectives of relevance in both the contexts of future OSCEs and clinical work.  Some students who did not enter the FOSCE considering assessment strategy became aware of the role of assessment strategy in OSCE assessments after FOSCE examiners informed them of the differences between clinical work and OSCEs during verbal feedback. Students’ newfound understanding of OSCE assessment strategy diminished the extent to which they felt compelled to use information from the FOSCE to work towards personal objectives relevant to the clinical work context because it reduced perceptions of relevance.   
For example, one student learned that being explicit about asking questions was important for succeeding on the OSCEs based on examiner feedback. For example, one of the pieces of feedback was to do a FIFE [a strategy to elicit patient perspectives], and I thought I had FIFEd really completely, but I learned that in the exam, what the examiner told me was, “because it is an exam [OSCE or FOSCE], be very explicit…that was something that I didn’t do it nearly that kind of– structured as in having a conversation with the person but I felt that I got all of that information and the feedback was just, I think he did it, but because it’s an exam, here’s the thing that you should probably do. (P7).   64  Another student learned the importance of being quick to be successful on OSCEs based on examiner feedback.  Initially I thought that was sort of clinical medicine [clinical work], OSCE; they’re both the same. But then the feedback…was structured in a way where it was applicable to the OSCE…so on the feedback they said “Listen, with an OSCE…” And as soon as you start it like that, in my mind it goes “Okay this [goes in the] OSCE compartment”, even though they’re not separate. But it’s like “okay, for an OSCE”, this is the way the person started the feedback, “you need to do so and so this fast. And if you don’t get there you don’t get the point, so you need to do that…at the end of the day I didn’t see it so I can’t score it”, if that makes sense. So that’s highly practical for the OSCE...(P17).  Variation in students’ understanding of future OSCEs, such as the SOSCE and the licensing or board exam OSCEs, also shaped their interpretation of the relevance of the FOSCE in helping them work towards personal objectives relevant to the context of future OSCEs. For example, one student assumed that the FOSCE and SOSCE would be very similar and, thus, believed students could use the FOSCE to address personal objectives in the context of the subsequent SOSCE. “I assume that the format of the [F]OSCE is similar or the same as the summative OSCE that we’ll have this year.” (P5).  Students who thought the FOSCE was similar to the board OSCE found that the FOSCE helped them become less intimidated by the board exam OSCE, thus improving their emotional state. [OSCEs] was definitely something that I was intimidated by. Yeah, I was just thinking about even though I was like, a first-year medical student, thinking about writing like, the Canadian Board Exams and this huge OSCE, and going into that situation it seems like a little intimidating to me, so after doing it [the FOSCE]…I was like oh, this isn't that bad. (P15).  In contrast, other students anticipated that there would be differences in the design of the stations in the FOSCE and the SOSCE and thus, felt more limited in their use of the FOSCE to address personal objectives of relevance to the SOSCE. “…I know for our actual summative ones it’ll be 65  a bit different because we’ll have to do full exams and full history taking, whereas that one was just kind of the mini-interview part that we had done, yeah.” (P13).   Significant differences between the FOSCE and the licensing OSCE also limited the extent to which students could use the FOSCE to draw inferences about their performance in the licensing OSCE. 
“…and one of my friends was just saying that her– she had a clinical skills tutor kind of be like, don’t pay too much attention to whether you passed or failed because it’s not really the way that they do it, you know, in the nationwide OSCEs [licensing OSCE].” (P8).   3.1.3 Section summary When students encountered the FOSCE, their own understanding of the task before them shaped the personal objectives they set. For example, students’ desire for a positive emotional state and to perform to a desired achievement level in the contexts of the FOSCE, the clinical workplace and future OSCEs shaped the types of objectives they focused on for the FOSCE. However, interactions between students’ understanding of the contexts within which they set their personal objectives (a student factor) and the characteristics of the content assessed (an assessment design factor) shaped whether students interpreted a relationship between the FOSCE and other contexts (e.g., clinical work or future OSCE contexts) and thus, their perception of the extent to which they could use the FOSCE to address personal objectives in those contexts. In the next section, I will discuss how students strategically used the FOSCE to address their personal objectives.   66  3.2 Planning and enacting strategies to work towards personal objectives  Figure 6: Strategies students planned and enacted to work towards personal objectives  In the previous section, I discussed the personal objectives students developed through engagement with the FOSCE as a result of their interpretation/understanding of the task. To work towards those objectives, students planned and attempted to enact a set of strategies they perceived would be effective, moving them through the next two stages (planning and enacting) and, in some cases, the two remaining stages (monitoring and adjusting) of the cycle of strategic action (Figure 6).  The term strategy is used here to refer to how students work towards their objectives. The strategies generally follow Gielen et al.’s taxonomy of assessment effects39 because various strategies for working towards personal objectives were implemented before (preparation), during (practicing skills within the FOSCE), and after (performance monitoring and adjustment of learning or understanding) the FOSCE. As a result, I have represented the various strategies 67  chronologically even though, as mentioned earlier, the stages did not necessarily occur in order or unidirectionally.  Even when a strategy was believed to be effective for helping a student work towards personal objectives, the extent of strategy enactment (i.e., the extent to which they actually used the FOSCE to work towards their personal objectives) was influenced by interactions between factors related to the student, FOSCE assessment design, and the medical school learning environment. In the following sections, each strategy that students used and the interactions between factors influencing its enactment will be discussed.   3.2.1 Working toward personal objectives by preparing for the FOSCE (pre-assessment effects) One strategy students used to work towards their personal objectives was to prepare for the assessment before it took place. Students who did prepare for the FOSCE as a learning strategy could use this strategy independent of a need to complete the cycles of strategic action by monitoring and adjusting learning during or after the FOSCE itself because this strategy took place prior to the FOSCE occurring. 
Students, however, showed variation in how they approached preparing for the FOSCE (e.g., practicing skills and reading notes).  Preparing for the FOSCE helped students work towards their desired goals by improving their performance on the FOSCE and in any other contexts they saw as relevant to the FOSCE. For example, one student perceived that preparing for the FOSCE would help improve his performance in clinical workplaces that he would encounter during clerkship. 68  I think it was more like as a medical student and a future...an aspiring like practising physician you do things that you have to know. So it was more like I’d rather like spend the time and effort and nail it down now [through preparing for the FOSCE] so that in clerkship I know how to do these things and I can like advance my techniques and all different things like that. And also it was fun. But I thought it was fun because like I really like clinical skills in family practice just because you’re not looking at slides or textbooks and like you’re taking all the theoretical stuff and putting it into practical knowledge in a practical sense. (P16).  Students also used the strategy of preparing for the FOSCE to help achieve a positive emotional state (e.g., feeling like they did well and not making a fool of themselves) in the FOSCE. For example, when one student was asked what motivated him to prepare for the FOSCE, he said, “…I just think a lot of me didn’t want to look like a fool. Like I didn’t want to look like I didn’t know anything, and I think it’s also human nature, especially with medical students, to want to do well. So it was a combination of wanting to do well, not want to kind of be embarrassed in a station where I had no idea what to do and kind of sit there” (P14).   Some students also prepared for the FOSCE to support enactment of another strategy, monitoring performance and adjustment of learning or understanding after the FOSCE. Specifically, some students perceived that how one prepared influenced what decisions about adjustments could be made in a valid manner using the student’s FOSCE performance. In other words, intended use of performance monitoring and adjustment of learning after the FOSCE can influence and be influenced by students’ actions prior to the FOSCE. To elaborate, students showed variation in their understanding of how they should prepare for and perform in the FOSCE to obtain the most valid information to guide later adjustments. Some students perceived that using the FOSCE to determine what they retained from the first semester of medical school would provide the most valid information with which to adjust their approach to learning or understanding. Because of 69  their interest in judging their knowledge retention, these students minimized their preparation in advance of the FOSCE. They perceived that preparing too much would give them a false representation.   For example, one student wanted to determine the content that she should prioritize when learning after the FOSCE by using the FOSCE to determine what she retained based on her first semester of medical school. “I kind of wanted to not prepare as much [for the FOSCE] and see how well I could do and just see how much I actually need to work on it for the future [after the FOSCE] and kind of– because I’ve never really done that before where I don’t really prepare and see actually how well I can get by.” (P13).   
In contrast, other students believed that obtaining the most valid information about how they should adjust their learning or understanding from the FOSCE required them to perform to the best of their abilities on it. For example, adjusting their understanding of the performance criteria in a manner that would allow improved performance involved becoming aware of deficits in their performance of which they were unaware. Preparing for the FOSCE was, thereby, treated as a means to reduce examiners’ focus on areas of improvement of which students were already aware.

So if I didn’t prepare [for the FOSCE] I think I would be hearing things, in the feedback, that I already knew myself like oh, you should probably, you know, review how to do this particular physical skill or, you know, when we take a patient history we always need to ask them for their medications. Like, you totally forgot about that. So these things that I know and had I just done some preparation beforehand…because, you know, the feedback time is not hugely extensive and the examiner or the observer is not…can’t tell you everything each time. It’s good to try and focus it on the things that you don’t already know yourself, or could figure out. (P5).

3.2.1.1 Factors influencing students’ preparation for the FOSCE

Figure 7: Factors influencing enactment of preparing

For students who relied on preparing before the FOSCE as a strategy to work towards their personal objectives, their time management skills and task prioritization interacted with concurrent tasks in the learning environment (Figure 7) to influence the extent to which they could prepare for the FOSCE (i.e., enact the strategy). When preparing for the FOSCE, students also had to attend to multiple concurrent academic demands, including learning new content from classes and preparing for multiple summative assessments that took place after the FOSCE. Some students were able to manage their time effectively and use preparing for the FOSCE as a strategy to work towards their personal objectives. “I think I was pretty disciplined [when preparing for the FOSCE] and, sort of, set up the schedule of how I needed to study for stuff, like well enough in advance, that I felt...that I was able to get through everything I wanted to get through [for the FOSCE] and yeah, I mean, I was trying not to be too stressed out about studying everything for those written exams.” (P5).
Some students who believed performing the best they could was required to obtain the most valid information about how to adjust learning may have tried to use preparing in advance to a greater extent than those who believed they would obtain the most valid information by not preparing. Student factors such as time management skills and task prioritization interacted with learning environment factors such as the presence of concurrent tasks to influence the extent to which students could use this strategy to address their objectives.   72  3.2.2 Working toward personal objectives through practicing skills during the FOSCE (pure assessment effects)  Figure 8: FOSCE as an opportunity for skills practice  Another strategy students used to address personal objectives through the FOSCE was treating the FOSCE itself as a practice opportunity. Students believed practicing skills within the FOSCE helped them work towards personal objectives in multiple ways.  For example, students perceived that practicing their skills in the FOSCE helped them work towards their personal objective of achieving their desired level of performance by enhancing their knowledge. “…every time that you repeat something, you remember a little bit better.” (P20).  Another student perceived that “…by going through and doing a physical exam it helps me to solidify you know what I have to do for a certain physical exam, what I have to look for and what I– like what it means if I’m looking for something and what different signs might mean, 73  just things like that.” (P4).  Some students also used the strategy of treating the FOSCE as a practice opportunity to work towards their personal objective of an improved emotional state, such as improving their level of comfort when performing a skill. For example, one student thought practicing their skill on the FOSCE would help her become more comfortable with patients. “Practicing I guess just kind of going through the movements like the motor memory stuff and just becoming comfortable with it and like even taking a blood pressure can be very awkward, so doing it over and over and over and practicing you get more comfortable with the technique and making yourself and the patient more comfortable.” (P13).  3.2.2.1 Factors influencing skills practice Students highlighted multiple reasons why they interpreted the FOSCE as an opportunity to use the strategy of skills practice to work towards their personal objectives (Figure 8). The FOSCE provided them a formal opportunity to practice their skills in a comprehensive and independent manner. Students emphasized the importance of such opportunities by highlighting how difficult it is to find them during their day-to-day training in medical school.   For example, students mentioned that having a large learning group in their clinical skills classes limited their access to opportunities to practice complete skills. “I think it was just the ratio of, like, students to volunteer patients, so it wasn’t great, so we would each kind of do a portion of the exam, but we didn’t get an opportunity to do the whole thing.” (P6).   74  Variation between instructors also shaped students’ access to independent practice opportunities outside of the FOSCE. For example, for courses set in authentic workplace contexts, instructors showed variation in how comfortable they were letting medical students practice their skills. 
“…every family preceptor—family practice preceptor is different in how he or she feels comfortable in like incorporating medical students in his or her practice. But, yeah, with me like in family practice at least, like I haven’t had too much of a chance to like take good patient histories. Or like patient histories in a way that it was like the one from the [F]OSCE.” (P21).   Some students also perceived the FOSCE to be a good opportunity to use skills practice as a strategy to work towards personal objectives because they perceived it as lower risk for negatively impacting their progress towards personal objectives compared to other practice opportunities outside of the FOSCE. For example, one student who valued a positive emotional state (a personal objective) did not engage in practice opportunities in classes because she was worried about the negative emotional impact associated with performing skills in front of her peers. She perceived that practicing her skills in the FOSCE carried less risk of negative emotional impact because she only performed the skills in front of the examiner.  I would have to be really proactive, really confident, go up and perform the whole exam in front of everyone, and then be, like, so what did I screw up? Which is, like, there’s lots of mental barriers against doing that kind of thing. So, you know, I do my best to get myself to– to, like, put myself out there in those situations to learn. But sometimes you didn’t do the readings and you don’t want to go up there and totally screw up in front of everyone. (P8).   In other instances, students interpreted practicing their skills in a FOSCE as preferable over other practice opportunities because it did not negatively impact upon objectives related to their progression in training or negatively impact patients’ health. One student reported that “[The 75  FOSCE] was a good opportunity to practice skills that we had learned and a good opportunity to be in that kind of setting before it actually counted for marks.” (P6). Another said “I guess for one it’s kind of a safer setting, so there’s really no pressure of you doing wrong in a way that might negatively impact someone’s health, which might be the case if you were out on– you know in a hospital ward or in a clinic and you measured the blood pressure wrong or you miss something on a physical exam.” (P4).  In summary, some students believed practicing their skills during the FOSCE to be an effective strategy for working towards personal objectives because of the formal practice opportunity it provided (an assessment design factor) and because the FOSCE practice opportunity was perceived as having a lower risk of negative impact compared to other practice opportunities in the medical school learning environment (a student factor).   76  3.2.3 Working toward personal objectives by monitoring performance and adjusting one’s approach to learning or one’s understanding (post-assessment effects)  Figure 9: Working toward personal objectives by monitoring performance and adjusting one’s approach to learning and understanding of OSCEs  In the previous section, I focused on what students did before and during the FOSCE to address personal objectives. These strategies did not necessarily depend on completing cycles of strategic action based on their FOSCE experience. That is, use of the strategies did not necessarily require students to monitor their FOSCE performance or consider if (or what) adjustments in learning or understanding should be made after the FOSCE. 
Here I will discuss post-assessment learning effects by describing strategies students described as helping them work towards personal objectives by monitoring their performance to determine if and how they should adjust their learning approach and/or their understanding of OSCEs (Figure 9). Adjustments in learning included adjusting preparation for the next OSCE or changing their learning approach for the future curriculum. Adjustments in understanding included seeking improved clarity regarding performance criteria or of how OSCEs work. Students made these adjustments mainly after the FOSCE took place (although such changes were not exclusive from adjustments being made 77  during any given FOSCE station).   As a reminder, as previously mentioned in the section on preparing for the FOSCE, some students perceived preparing in advance of the FOSCE to be necessary to obtain valid information to monitor performance and adjust their learning or understanding. However, some students had difficulty with fully preparing because they had trouble managing the concurrent demands in the learning environment prior to the FOSCE. Being unable to prepare fully would have limited the extent to which these students believed they could obtain valid information about how to adjust learning if performing the best they could was a prerequisite.  Students perceived that monitoring performance and adjusting learning or understanding could help them work toward performance and emotional state objectives. For example, one student wanted to receive external feedback from FOSCE examiners to improve patient interactions. “I would like to know what I could be working on to be more competent…it would be nice to get honest constructive feedback from you know examiners to better my way of interacting with patients, whether it’s asking more structured questions or you know in any way.” (P1).   Determining how to adjust their learning or understanding of OSCE processes through the FOSCE also helped some students work towards their objective of improving their emotional state. For example, becoming familiar with OSCEs helped students feel more confident and less anxious about future OSCEs because they perceived they better understood how to prepare. “Yeah, well, now I’m less anxious about it because I feel like I know what to expect. And even if it’s a little bit different, I feel like I know how to prepare better. And so if I can prepare better, I’ll feel more 78  confident, so I won’t– so I’ll feel more prepared obviously. But yeah. So I feel more confident about what to do to be ready for the OSCEs.” (P8).  For some adjustments students became aware of the need to make (and planned to make) based on their FOSCE experience, participants pursued additional practice opportunities after the FOSCE to begin to implement and help them work towards their personal objectives. For example, one student required further patient interaction opportunities to improve the structure of their history taking. During the interaction [patient interaction in a clinical skills class after the FOSCE] I found myself– like I tried to really think of things that I’ve learned through the [F]OSCE and try to implement, such as you know having better structure when I’m asking questions or making sure you know the patient is comfortable. You know like little things that I felt that I should have done but I didn’t do during the [F]OSCE I try to do that as much as I can. So in a way I’ve been practicing and trying to improve on it. (P1).  
To obtain the information necessary to monitor performance and make adjustments in learning or understanding through the FOSCE, students mainly relied on information they generated themselves through reflection in and on action, as well as external feedback or information from FOSCE examiners.

Reflection-in-action refers to the information students tracked while they were engaged in the FOSCE station.33 Reflection-on-action refers to the information they tracked regarding their performance after participation in a FOSCE station.33 Note that reflection-on-action could occur during the FOSCE, such as when students were engaged in the verbal feedback session for a station, or after the entire FOSCE experience was finished. Information students tracked included their feelings and thoughts, as well as cues from patients and examiners.

For example, to adjust his learning or preparation for future OSCEs, one student monitored during the FOSCE how well he could recall the steps when performing a skill for which he had prepared. Because he experienced some difficulty recalling steps during the FOSCE (which negatively impacted his performance), the student adjusted his subsequent preparation for future OSCEs.

I was tracking my ability to recall what I was doing and what I was doing next…when I did the respiratory exam I was surprised at the amount that I had forgotten. I was like whoa, I didn’t prepare for this as well as I had thought and it’s showing based on how I’m remembering, how I’m recalling everything. So it was really good to have that opportunity to do that because I otherwise you know would have went into the summative with the plan that I had. So now I’ve changed it a bit and done like a better summary so that when I go into the OSCE at least I have a summary so I can go through everything…(P11).

Patient cues were also influential in students’ monitoring of performance during the FOSCE. These included the student’s rapport with the patient, the amount of information gathered from the patient or progress made with the patient, and non-verbal patient cues such as their facial expression and body language. For example, one student perceived that he performed well on a station based on patient reactions, and therefore that he did not need to make any adjustments.

But I think with like a history one, like I feel like if– again like I have a sense of comfort with the patient and he or she feels comfortable talking with me…I think when they smile with you, right, and when the conversation is natural, I think that gives you the impression that yeah, I was able to, yeah, build that rapport with the patient and it tells me that, like yeah, I think I did do well in the station. (P21).

Students who wanted to maintain a positive emotional state on the FOSCE, or to improve their emotional state through the FOSCE, similarly used multiple types of information to monitor performance, including cues from their own performance, patients and examiners. For example, the lack of fluency when recalling steps on the FOSCE led one student to not feel as confident as she wanted.

I just realized that for the real thing [SOSCE] I probably want to be a little bit more confident in my physical exam skills and just kind of being a bit more smooth going through the full exam.
Because I did get through everything but I just felt like I could’ve been more confident and instead of having to kind of think through things and it’d become more, I would want it to be more automatic than kind of having to work really hard to think about what’s next. (P13).

Students also tracked the features of the FOSCE to work towards their personal objectives. For example, one student learned new information about how the OSCE worked through the FOSCE and planned to use it to adjust his preparation approach for future OSCEs.

So it [the FOSCE] kind of gives you know a really good opportunity to learn before doing an actually– like a summative exam and then failing because you didn’t prepare properly. So it gave me an understanding of what it’s like and then how to prepare for the one at the end of this year and the end of each following year…Just the structure of the entire thing, like coming in and like giving your sticker, like just how the whole system works…So knowing how its run I can then build my, I guess plan, or procedural skills based on that. (P11).

Beyond relying on information from their reflection in and on action, students also used information from external supports (e.g., FOSCE examiners) to support performance monitoring and adjustment of learning or understanding. As a reminder, the design of the FOSCE included two deliberately created external support opportunities. The first was the five-minute verbal feedback session immediately after each FOSCE station. The second was a written feedback report that students received approximately eight weeks after the FOSCE. In subsequent sections, I will include examples of students using information or feedback from external support opportunities to work towards their personal objectives.

In sum, performance monitoring and adjustment of learning or understanding was undertaken mainly by relying on information from reflection in and on action as well as external supports and additional practice opportunities. In the following sections, I will discuss how variation in students’ preferences for the sources and types of information and practice opportunities influenced their performance monitoring and adjustment of learning or understanding. Further, I will discuss how interactions between student, assessment design, and environmental factors influenced students’ access to desired information and practice opportunities.

3.2.3.1 Variation in sources of information students used to monitor performance and to adjust learning or understanding

Figure 10: Perceived credibility of information from reflection in and on action and external support opportunities

As previously mentioned, reflection in and on action and information or feedback from external supports were two main sources of information that influenced students’ performance monitoring and adjustment of learning or understanding. However, students’ decision to use a source of information to monitor performance or adjust learning or understanding depended on the perceived credibility of that information source. In general, students’ perception of the credibility of the information from a source was informed by the amount of information the source had access to and the credibility of the source themselves (Figure 10).
Regarding the amount of information, one reason why students considered information from reflection in and on action to be credible (and, thus, valued and used it as a source of information to guide monitoring and adjustment) was that they believed themselves to be uniquely aware of information about their performance. For example, one student said that if he had performed poorly on an assessment, only he would know some of the circumstances that may have impacted his performance, which would allow him to understand how he should adjust his learning. Like with your own personal feedback, like only I know where I’m at right now, and my circumstances. So I think that changes– like that also changes like my feedback. So maybe if like I didn’t do so well in a test, it’s like, like yeah I was super ill that one week, right, I know I didn’t do that well because I– I wasn’t– I wasn’t in the position where I could do well. (P21).   Similarly, students’ perception of how much information the FOSCE examiners had about their performance influenced the credibility of their feedback. For example, one student preferred feedback from FOSCE examiners over feedback from preceptors or instructors in other curricular learning opportunities because they perceived that examiners paid closer attention to their performance (and thus had more information) given that examiners focused solely on observing a single student’s performance.  In the family practice [course], you get people [preceptors/instructors] watching your interactions with patients, and in some ways that’s better because it’s a genuine interaction, but the family doctors are very busy and not necessarily just focussing on you and your communication skills. And on the flip side, communication skills in those classes, or clin[ical] skills they’re focussing on you, but they’re also focussing on the seven other students in the room, where this [FOSCE] is a really great opportunity where, it’s not real, but you go in and the examiner watches you and looks just at you. (P9).  83   As another example, one student used a FOSCE examiner’s suggestion to adjust their perceptions of the performance criteria for a physical examination technique despite no preceding observer having identified her expectations as an issue because she doubted the quality of observation in the learning experiences she had before the FOSCE. “In clinical skills I’ve had very few tutors that correct any technique about anything. We had one person in our group for three weeks in a row that would wear their stethoscope backwards.” (P12).   In contrast, other students did not always trust the quality of observations in the FOSCE and, as a result, did not always use examiner feedback to monitor performance and adjust their learning or understanding. For example, one student did not use feedback from an examiner because he perceived that the examiner was not paying close enough attention and did not hear him perform a step of a skill, leading to incorrect identification of a deficit in the student’s performance. “…then the OSCE examiner said, ‘Oh next time you have an interview like this with trauma, ask them if they’ve injured that limb before.’ And then I just sort of brushed it off. I was like ‘Okay I know I asked the question but you just didn’t hear it, so I’m not going to worry about that.’” (P17).   The feedback provider’s memory of students’ performance and thus, the amount of information they had, similarly influenced the perceived credibility/accuracy of the feedback. 
On the FOSCE, students perceived that the FOSCE examiners had the best memory of their performance immediately after the station when the verbal feedback session took place. In addition, students felt that if the feedback provider and the student both had a clear memory of the performance, it helped to facilitate discussion about the students’ performance. “…[immediate feedback] is 84  really helpful. Because it’s fresh in your mind, you just experienced it, you– like, you– I think it’s easier to communicate what went wrong and what went right, because you’re both, like, remembering what just happened. So there’s less, like, recall bias. It’s not really recall bias but, like, there’s– it’s just more accurate.” (P8).  Beyond the amount and quality of the information the examiner possessed, the credibility of the person who generated the feedback also influenced students’ perception of the feedback’s credibility.  Regarding reflection in and on action, some students did not feel they were credible judges of their performance because they were uncertain about the performance criteria or expectations against which they should judge their performance. “The other ones [physical exam stations], I kind of had no idea [how I performed]. So, yeah. I don’t know what they [FOSCE examiner]– not look for, but– yeah. It’s harder. It’s harder to know. I feel like I needed the feedback [from FOSCE examiners] to know how well I was doing on my physical exams.” (P8).   Another student shared similar uncertainty about the performance criteria. “I guess the main thing is you don’t know what we’re supposed to know…So it’s kind of just hard to gauge where I have to be, and where I should allocate my time.” (P16).  Students also questioned whether they were able to be credible judges of their performance even when they knew the performance criteria because they were sometimes uncertain about how to apply the criteria to judge their performance. “I guess sometimes it’s just that an idea that, like, I 85  had a checklist and I don’t know if I hit the points or not. With that said, you don’t know if, like, you actually got the right lower lobe, or, like, if they were really– It’s hard to know completely how well you did.” (P9).   Similarly, students judged the credibility of external feedback or information based on the credibility of the examiner. For example, some students questioned the credibility of feedback they received from instructors in classes because they were concerned about the quality of assessment criteria used by the instructors. “...I think there’s basically a lack of standardization amongst different preceptors [instructors], where if you only relied on those workplace-based assessments you’d be in trouble.” (P17).  In contrast, the standardization provided by the use of a grading sheet (a formal assessment rubric) to assess students’ performance during the FOSCE was perceived by some students as enhancing the rigour of judgement.  I take formative [FOSCE] as, like, there’s going to be a very formal feedback in which they’re going to use some grid or some rubric to provide the feedback, whereas the other feedback we get kind of just from, say, the family practice preceptor [instructor] may be very much, like, informal, such as, I thought the way that you, you know, listened to that individual’s chest for this cardiac exam was really good. You were proficient. You did this, this, and this. But there’s no, they’re not following this professionalism, had a friendly handshake, kind of thing, rubric. 
(P7).  Further contributing to the greater perceived competence of FOSCE examiners relative to instructors in classes was that some students presumed examiners were less likely to cut corners. I think that it’s [the FOSCE] valuable because, I guess, if … because everyone will go through clerkship and electives, and so there is like, a qualified physician assessing you or like, kind of, observing what you’re doing, but it’s just so hard because like, there’s such variability among all physicians, and so someone’s expectations and someone … 86  like, some attendings might be more willing to cut corners or certain things, and so I think it’s like, really important to have like, a standard. (P15).   Some students thought the FOSCE examiners were more credible compared to other course instructors for providing information that would facilitate their continued post-FOSCE pursuit of assessment-related personal objectives because they believed FOSCE examiners to be involved in other OSCEs that students will have to take whereas instructors were forced to speculate. It [the feedback] was reliable [be]cause you knew all those people [FOSCE examiners] like they’re OSCE examiners, they probably do real OSCEs or at least they have the template for the OSCE. So, you knew that you could trust whatever they say and that it’s directly and entirely what you need to work on to improve for like this big exam [OSCE] that you’re going to take at the end of med school. (P2).  For this reason, one student chose to prioritize the FOSCE examiner’s feedback over an instructor’s feedback when there were conflicting opinions. “I guess I’m a little bit confused because I don’t know which one to really listen to. I think I might go with her [FOSCE examiner’s] view because she was actually, like, marking. But it was nice. It was more comforting to hear it from the person marking the OSCE than from someone [clinical skills tutor] who was potentially just speculating about what needs to be done.” (P3).   However, other students did not perceive FOSCE examiners to be credible informants on what would happen in future OSCEs and did not use the examiner’s feedback. For example, one student did not use a FOSCE examiner’s feedback to work towards their personal objective of improving performance in future OSCEs because they were told by course instructors that FOSCE examiners were not like “true” OSCE examiners. “But there was one station that was like a bit weird because the tutor was giving me feedback telling me to do something that was very different from what 87  we’d actually learned… They [family practice course instructors] kind of just said that they don’t care whether you do it and that the examiners weren’t necessarily true OSCE examiners...” (P13).  In rare cases, students felt they were more competent judges of specific aspects of their performance than FOSCE examiners. As a result, they did not use the examiner’s feedback to monitor performance and adjust learning or understanding of the assessment. For example, one student did not change her approach to a patient interaction based on a FOSCE examiner’s feedback because she had observed positive patient responses to the approach she used in clinical workplaces before medical school. When asked about whether she will use the examiner’s feedback to adjust her performance criteria, the student responded,  No. I think you can, like, if someone says, “oh, I’m fine”, then you say, “well, you don’t sound fine”. And then it sounds like you’ve actually listened. 
I think you can get more out of a patient that way. And working in a clinic before [medical school], I’ve also had– I’ve heard patients say to their doctor, “you’re the only one who ever asks me how I'm doing.” And that was, like, something they really appreciated. (P9).    In sum, in this section, I discussed how students’ perception of the credibility of the information or feedback about their performance that was generated by themselves and by external sources influenced their decision to use the information to monitor performance and adjust learning or understanding. Variation in students’ perception of credibility led to variation in students’ information use in the FOSCE.   88  3.2.3.2  Variation in the types of information and opportunities students used to monitor performance and adjust learning or understanding  Because students held diverse personal objectives, their interviews reflected diversity in the types of information they needed to monitor performance and adjust learning about practice or understanding of their assessments. Information students indicated they required included information about specific strengths and areas of improvement, global judgements of performance, and information about how to prioritize learning needs.   Hearing about strengths helped students understand what aspects of performance did not require an adjustment to their approach to learning or preparation, which was seen as valuable. “Well yeah, I think it’s [hearing about strengths] just positive reinforcement, right, like you continuing to do something that you’re doing well. So and again that just encourages me to prepare the same way.” (P21).  For students who had personal objectives related to their emotional state, feedback about their strengths was similarly beneficial. For example, feedback about strengths improved one student’s confidence. “And it was like a confidence-booster because they [the FOSCE examiners] would say things, like, okay, you seemed like you were very confident and you seemed like you knew what you were doing with the blood pressure, and you did it well.” (P3).  Also, feedback about their strengths enabled students to maintain a positive emotional state when they received constructive feedback. “I think it is good to get the positive stuff, ‘cause that kind of like cushions any blows from the constructive stuff.’” (P19). 89   Students were also interested in information about what would (and how to) improve their performance. Learning of an area of improvement about which they were previously unaware helped students understand there may be a problem in their performance criteria (a component of their understanding of OSCEs) as well as providing guidance about what adjustments to make.   For example, as previously mentioned, some students were unaware of the use of assessment strategy in the FOSCE and, as a result, applied the performance criteria they used with patients in the workplace when performing in the FOSCE. “So I’ve never done an OSCE before. And so when I first heard about what an OSCE was, it was just, Oh well, you do– it’s a practical exam. You do the skills that you’re supposed to have learned throughout the year in the exam.” (P10).   Examiner feedback about areas of improvement helped students become aware of assessment strategy, which they incorporated into their understanding of the OSCE performance criteria. 
"There was also feedback about 'well they give you vitals on the door, you need to address this at the beginning in this way to make sure I know what you're talking about and give you the marks. And you were talking about cyanosis but never said cyanosis, so you don't get the marks.'" (P10).

For feedback about both strengths and areas of improvement, students valued specific feedback linked to student actions over general statements about how "good" or "bad" their skill was because it helped them understand how to adjust learning to improve performance.

In addition to narrative feedback about strengths, areas of improvement, and how to improve, some students also valued knowing the examiners' global judgements about their skills, such as the numerical mark for each FOSCE station, and the class average mark for each station. Despite global judgements generally being more associated with the summative function of assessment, some students believed these judgements helped them use the FOSCE for a formative function because the performance level they used to monitor whether they needed to make adjustments in learning or understanding was based on a specific global level of performance. For example, one student used the global percentage score of their FOSCE performance to determine if they needed to adjust their learning approach. "…then getting [the feedback report] back I go, okay, well, I scored sixty percent. Clearly something's not happening that needs to be happening. Or I go, oh, I scored eighty. I'm just going to keep doing what I'm doing [in learning approach]." (P7).

Some students monitored performance to determine whether they needed to adjust their learning or understanding based on how they performed relative to the class average. For example, one student used how much higher she was performing against the class average to make learning decisions. "I just use it [the class average] as a gauge of, like, how well I'm learning, basically, like if I was five percent above the average last time and only two percent above the average this time, why did that change happen? I must have done something differently, or maybe other people were learning faster. What can I do to, like, catch up with the class?" (P8).

Students showed variation in the level of performance they wanted to achieve relative to their peers. Some aimed to outperform their peers and "…want[ed] to outperform people on the OSCE" (P10) whereas other students aimed to be at the class average. "I'd like to stay above the class average, sort of– I don't really like being below the average but– so those are– currently, like just this year, those are my goals." (P8). Variation in students' desired level of performance meant that one specific level of performance might lead some students to make adjustments in learning or understanding but lead others to not make adjustments.

Knowing global ratings like how they performed relative to the class average also impacted students' emotional state. For example, the student in the previous quote mentioned that knowing they were above the class average "…helped my confidence." (P8).

Another student mentioned feeling encouraged and confident because they performed above the class average. "Yeah. Like, I think, maybe for a few of them [FOSCE stations], I was a little bit above [the class average], and like– but most of them, I was like right in the middle. And I think that that's comfortable, and it means that I'm not making patients really, like, not doing something really badly." (P19).
This was not a universal reaction, however, as some students did not value comparing their performance against the class average. One reason was worry that the whole class could fall below the minimum standard set by the program, such that aiming to perform at the class average would risk not meeting that standard. Instead, these students judged their performance based on other standards such as the narrative feedback they received. "Yeah. I feel like the class average is not the best way really because the class flunks then wow, I'm above average! I think– I think a lot of it is just like the narrative comments, I felt that's very encouraging." (P21).

Through the FOSCE, students were able to identify many areas of improvement across different FOSCE stations. For some, part of understanding how to adjust learning included prioritizing learning needs within each station and between different stations. For example, one student prioritized her areas of improvement by focusing on deficits that both she and the examiner identified over deficits that only she had identified. "…because I'm pretty hard on myself I guess for constructive feedback so maybe if they see that is an actual concern then I should address it, whereas something else that I thought was a concern that maybe they didn't think was a concern then I can kind of be like, okay so it's okay to kind of let that go and not worry about it too much…" (P13).

Making adjustments post-FOSCE was not necessarily straightforward as some students required further practice opportunities to implement the adjustments in order to work towards personal objectives. For example, one student required further patient interactions to improve structure in their history taking.

During the interaction [patient interaction in clinical skills after the FOSCE] I found myself—like I tried to really think of things that I've learned through the [F]OSCE and try to implement, such as you know having better structure when I'm asking questions or making sure you know the patient is comfortable. You know like little things that I felt that I should have done but I didn't do during the [F]OSCE I try to do that as much as I can. So in a way I've been practicing and trying to improve on it. (P1).

In sum, in this section, I described how students showed variation in the types of information and opportunities they wanted to use to monitor performance and adjust learning about practice or understanding of the OSCE. Variation arose through diversity in students' personal objectives and diversity in the information students believed would allow them to best work towards each objective.

Despite their desires, students were not always able to obtain the information or opportunities necessary to monitor performance or adjust their learning or understanding through the FOSCE. In the following three sections, I will discuss how interactions between factors related to the student, the assessment design, and the broader learning environment shaped students' access to necessary information or opportunities.

3.2.3.3 Factors influencing students' access to information from reflection in and on action

As previously mentioned, students valued the FOSCE because it provided a formal opportunity to practice their skills, with some perceiving the opportunity as lower risk with respect to negatively impacting their progress towards personal objectives compared to other opportunities in the learning environment.
Practicing their skills in the FOSCE was also ideal for students to obtain information to guide performance monitoring and adjustment of learning or understanding because it provided students an opportunity to judge their own performance through reflection-in-action and reflection-on-action.

However, despite the opportunity to do so, some students who valued and used reflection in and on action as a source of information to monitor performance and adjust learning approaches or understanding of OSCEs could not fully enact strategies to obtain all the information they wanted because of interactions between factors related to the student (e.g., time management skill and task prioritization, and memory and attention capacity), the FOSCE assessment design (e.g., concurrent OSCE tasks), and the learning environment (e.g., concurrent curricular tasks). These factors are represented in Figure 11.

Figure 11: Factors influencing access to information from reflection in and on action

One FOSCE design factor was the requirement to perform in a sequence of FOSCE stations, which led some students to experience difficulty balancing their time between performing in the FOSCE stations and reflecting-on-action regarding their performance from a previous station. Limiting reflection-on-action diminished some students' access to information that might have been useful for monitoring their performance and adjusting learning or understanding. "Even I think after like a [FOSCE] station, you don't have much time, right, to think about oh, I did really well on this or I did really poorly on this one– it's just the station's done, go to the next one. So like you have that maybe like moment where you feel good or bad and then it's like okay, got to, you know, switch that and concentrate on the next question." (P21).

Another FOSCE design factor that influenced students' access to information from reflection in and/or on action was the requirement to perform a complete clinical task, which interacted with the student factor of attention and memory capacity. Some students believed they may not have had the attention capacity to keep a complete mental record of their performance as they may not have been consciously aware of some actions that took place during their performance. As a result, they may not have had all the information necessary when engaging in reflection in and on action. "You sometimes get that feeling that when you're doing something you don't realize what you're doing, so sometimes like going through it she [a FOSCE examiner] would say something, 'oh you did this', and I'd be like 'oh, really, I didn't notice that I did that.'" (P16).

Interactions between factors in the learning environment and those related to the student also influenced their access to information from reflection in or on action. Because some students needed to attend to preparing for other assessments after the FOSCE, these concurrent tasks interacted with students' time management skills and task prioritization, leading some students to limit their engagement in reflection-on-action about their FOSCE performance after its completion.

The [F]OSCE in December happened a week before our like Gong Show of a week. And so like as much as you want to reflect, I can guarantee that every student who walked out literally just turned back into study mode. And so we drove to the anatomy lab, other people went to like, I don't know, do whatever studying they had to do.
But I'm not going to lie, but the [F]OSCE just did not matter after that. (P17).

Such concurrent tasks' interference with engaging in reflection-on-action immediately after the FOSCE further caused deterioration of students' memory of their performance and limited the extent to which they could use reflection-on-action as a source of information to move towards their personal objectives. For example, one student reported having trouble recalling his FOSCE when he attempted to reflect on the experience after receiving the written feedback report eight weeks after the FOSCE took place. As a result, he found it difficult to generate information through reflection-on-action at that time.

Yeah, because the grade report came like six weeks after, or like six weeks into the New Year after Christmas break, I didn't really remember much of it [the FOSCE]…I just forgot a lot of stuff. Like everyone would be getting it back, and they'd be like station one, station two, station three and four, and then I had to try and spend a few minutes trying to figure out what station one actually was, we all had a different station. We all had it...well I guess like we all started at different places, but to try and figure out did I start at station one, two, three, or four. (P16).

In sum, the extent to which students could access information from reflection in and on action depended on multiple interactions between factors related to the student (e.g., time management skill and task prioritization, and attention and memory capacity), the FOSCE assessment design (e.g., concurrent OSCE tasks), and the learning environment (e.g., concurrent curricular tasks).

3.2.3.4 Factors influencing access to information from external support opportunities

Figure 12: Factors influencing access to information from external support opportunities

As previously mentioned, some students valued information or feedback from external support opportunities to monitor performance and adjust learning approaches or understanding of the OSCE. Multiple interactions between factors related to the student, the FOSCE design, and the medical school learning environment influenced students' access to information from such external supports (Figure 12). For ease of organization, I have structured the discussion of these interactions using four characteristics of external support opportunities: the formal nature of the external support opportunity, the opportunity for dialogue, the modes of external support, and the timing of external support. External support opportunities include both opportunities related to the FOSCE and those available in the broader learning environment.

The formal nature of the external support opportunities (e.g., the provision of verbal and written feedback) in the FOSCE was one assessment design factor that students valued as a way to access information that could enable them to work towards personal objectives. As previously mentioned, some students did not engage in practice opportunities that were available outside of the FOSCE because doing so might have negatively impacted their progress towards their personal objectives (e.g., risk of experiencing negative emotions). Similar concerns also sometimes deterred students from engaging with external support opportunities outside of the FOSCE. For example, students sometimes felt guilt (a negative emotion) when asking for external feedback because they perceived their instructors were busy attending to patient care duties.
In contrast, students felt more comfortable seeking feedback during the FOSCE because it offered a formal opportunity to support their learning (i.e., faculty were deliberately present for that purpose).

I mean it 'cause you knew it was your time, you knew it was your time to learn and you could ask anything you want and like the person giving you feedback isn't in a rush because he has to see another patient. Like they might be in a family practice office or in a hospital and you almost feel guilty sometimes in a clinical setting when there's so much going on to like, you know, say excuse me preceptor [instructor], can we discuss this some more? But during the exam time they're not going to go anywhere in those five minutes. There's nowhere for anyone to go. So, it was really protected to be your time. (P2).

Such findings suggested that "self-regulation" takes place in a socially-situated context as students felt a need to navigate personal relationships and priorities. "So, I think [the FOSCE is] good because it forces you to receive the feedback. Like, they're going to tell you what you did wrong. So it's kind of– it's forced. It's forced learning. But that's good because then you don't– you can't– there's like no option of not getting the feedback, so you just deal with it." (P8).

Feeling comfortable to engage in dialogue with examiners during external support opportunities in and after the FOSCE also influenced the extent to which students could obtain the information they needed in a variety of ways (e.g., to clarify examiner feedback, to seek feedback about aspects of their performance the examiners did not address, or to seek support when encountering uncertainties in reflection in and on action). For example, one student asked for clarification about how to adjust her technique to improve performance when the FOSCE examiner identified a deficit in her technique.

So I definitely asked for clarification, especially in the last station when she was like, oh, you were…percussing wrong. I was like, "oh, really? Okay, so show me exactly where I was" and she was like, "well, you tap on this part" and I went up to her and was like, "this part, like this?" Because I want to know it. I want to learn it properly. So, and then I was like, okay, and I did it like this, and I demonstrated, and I made sure that she looked and was like, "yeah, like that." And I was like, "great! Thank you very much." (P8).

In some cases, students did not encounter difficulty monitoring their performance or making adjustments until they received the written feedback report, in response to which there was no opportunity to engage in dialogue with the feedback provider to gain additional clarification. For example, one student became uncertain about how to adjust learning when she encountered information on the written feedback report that differed from what she thought she was supposed to do.

So he [the FOSCE examiner] just elaborated [in the written feedback report] and said, oh it jumped from FIFE to HPI [history of the presenting illness], whereas he didn't actually give me that feedback in the OSCE setting [during verbal feedback session], he just said it seemed a bit disorganized. But I didn't realize that was what he was getting at, that I was jumping back between FIFE and HPI, so I guess I could've asked more if he'd told me that in the real situation [during verbal feedback session]. (P13).
The mode or format of external support opportunities was another factor that influenced students' access to information they desired because it influenced the diversity and amount of information that was available. Further, the number of modes of external support (verbal and written in the FOSCE) also played a role. For example, with respect to verbal feedback opportunities, students mentioned that the examiner was able to comment on every step the student performed in the station. "Yeah, just like I think my first station I had someone [an examiner] who had been doing OSCEs for a long time, so she was able to like list everything I did and like the order I did it and comment on each of them which was really nice." (P16).

In the absence of a verbal feedback session, it may have been difficult for examiners to extensively document specific pieces of information about students' performance because they only had two minutes between when one student left the station and when the next student entered.

I guess the stations I would have wanted, like the one where there is mental health and talking about organization, like there was no additional writing or feedback on that, and so I guess that was probably like, the one station that I would've benefited from, if the invigilator would have had a bit more time to...like, once I left the room, those two minutes were reading the next prompt, to like write out...like oh, I didn't have time to tell you in our like, quick dialogue [verbal feedback session]…(P15).

The availability of verbal feedback also helped one student prioritize areas of improvement based on the seriousness of the tone of voice when examiners delivered the feedback.

So, an example [of examiner feedback] would be like, you know, 'I felt the student could have better introduced the session and better summarized the session'. If you see that in writing it's just like oh, I did everything wrong, I did the intro wrong, I did the summary wrong, but if I'm telling you in person like, 'hey like I thought the student could have introduced that better and summarized it a little better'. You can tell from my tone that I'm not—that it's not that serious. Like I'm just telling you because I think it's something that might be good for you to realize. (P2).

Though students valued the detailed information from the verbal feedback session, this mode of external support interacted with students' memory capacity to influence the amount of information students had access to because some students had trouble retaining information from the verbal feedback session given that no permanent record of the information was available.

Students cited multiple factors that impacted their memory (and thus, use) of verbal feedback including the stress experienced during the FOSCE, having to move through multiple FOSCE stations, or not thinking about the FOSCE after it happened. Students suggested having a permanent record of the feedback to aid their retention, such as having a replication of the verbal feedback on the written feedback report. "I was kind of hoping for more written feedback, because like, as much as I, like, remember certain parts about the immediate feedback, I don't remember all of it, 'cause like, you can only kind of hold onto so much, so many things, as you're moving from room to room. So I would have liked a little bit more written feedback." (P19).
At the same time, students valued the written feedback report because it provided unique information for monitoring their performance that may not have been available otherwise. For example, students who compared their performance against the class average would have had a difficult time obtaining that information from the verbal feedback session immediately after the station because a single examiner would not have a good sense of how the whole class performed on the station.

The availability of global ratings for each station in the written feedback report helped students prioritize their learning across stations by focusing them on addressing areas of improvement from the station on which they performed the worst. "I looked at the worst percentage I got. I was trying to think back to which station that was, and what it was that I did or didn't do in that station that I could improve on relative to all the other ones." (P16).

Therefore, the mode and number of modes of external support opportunities partially influenced the type, diversity and amount of information available from external supports. Further, interactions between students' memory capacity and the mode of external support influenced the amount of information students had access to from external supports.

The timing of when external support opportunities took place was another design factor that interacted with student factors to influence students' learning. Students who relied on global ratings, or other information from the written feedback report, had to wait eight weeks to receive it, meaning that their access to valued information was delayed. The delay influenced some students' decision-making about when they could determine how or if they needed to adjust their learning or understanding of OSCEs.

Some students thought they could approximate global ratings and, hence, were not delayed in their monitoring and adjustments in the same way. For example, one student used the information about strengths and weaknesses from the immediate verbal feedback session to approximate their marks (global rating) by weighing the proportion of comments about strengths and areas of improvement the student received from the examiner. "I think after the sessions [FOSCE]…they [FOSCE examiner] gave me feedback. And it's mostly positive and like not that– not too many negative things. And I– I like wasn't concerned that I had failed miserably, or like it was really– I had like been really unprofessional or anything like that." (P19).

However, students were not always able to arrive at the global rating in the written feedback report when generating impressions based on information from the verbal feedback session. The mismatch may have arisen due to the program using different methods or different information than students to generate global ratings. The mismatch also meant that students' performance monitoring was misguided and may have led to inappropriate adjustments in their learning or understanding. For example, one student thought he had done poorly on the OSCE overall based on the immediate verbal feedback but realized he had done much better after receiving the written feedback eight weeks later.

At the time I feel like right after the [F]OSCE [after the verbal feedback session], I thought I did really bad.
I thought it was like very stressful but now that I've received [written] feedback, I did above average on like almost every station…I think the [verbal] feedback that I got in the OSCE at the stations was so lengthy perhaps that it made me– I walk out of there being like I was totally saying that I did all these things, all these things wrong. I didn't do well at these stations. Whenever you give feedback, you say a few things that went well and then you start the long list of million things that you did wrong. So I think that maybe played into it but it was definitely like you missed this, you were doing this technique wrong. (P10).

Therefore, delays in information or feedback may impact students' performance monitoring by delaying students' engagement in these processes or potentially leading to faulty monitoring or adjustments if students fail to come to similar conclusions as their examiners when generating the information on their own.

The timing of external support opportunities further influenced students' adjustment of learning by interacting with students' memory of their performance. For example, one student had trouble retaining a mental record of his performance by the time he received the written feedback report, which he relied on to help monitor performance. As a result, he expressed difficulty using reflection-on-action to generate information about how to adjust learning at the delayed time.

I looked at the worst percentage [across FOSCE stations] I got [on the written feedback report]. I was trying to think back which station that was, and what it was that I did or didn't do in that station that I could improve on relative to all the other ones… because the grade report came like six weeks after, or like six weeks into the New Year after Christmas break, I didn't really remember much of it. (P16).

Students who did not obtain all the information they wanted through external support opportunities associated with the FOSCE sometimes harnessed external support opportunities in the second semester of classes (a post-FOSCE medical school learning environment factor) to obtain further information.

For example, one student was unsure about why a FOSCE examiner wanted to change the structure of his history taking and sought clarification about the feedback from a clinical skills instructor in the second semester of medical school.

I think I was talking to one of my clinical skills tutors who I think they do the–who is like involved in the OSCE examining. And he said that structure was actually like a point on the OSCE like your organization or structure is–you get marked on that. And I think actually I remember seeing that on the exam feedback. And so maybe that's why the examiner was kind of so insistent on the structure. (P10).

In summary, multiple interactions between student, assessment design, and learning environment factors influenced students' access to information from external support opportunities necessary to monitor performance and adjust learning or understanding in the FOSCE and the broader learning environment. Student factors included desired information, perceived risk of engaging in external support opportunities, and attention and memory capacity. Assessment design and learning environment factors related to external support design included the formal nature of the external support opportunity, the opportunity for dialogue, the modes of external support, and the timing of external support.
3.2.3.5 Factors influencing access to practice opportunities for implementation of learning adjustments post-FOSCE

Figure 13: Factors influencing access to practice opportunities for implementation of learning adjustments post-FOSCE

As previously mentioned, in addition to information, some students required further practice opportunities to implement adjustments in learning. For these students, the extent to which they were able to do so depended on interactions between student factors (e.g., desired information and practice opportunities, and time management and task prioritization) and factors in the learning environment (e.g., availability of relevant practice opportunities after the FOSCE and concurrent curricular tasks). These factors are highlighted in Figure 13. This issue is worth considering separately from those that preceded it given that the medical program so deliberately intended this OSCE experience to be formative and, thus, should consider how its design establishes or prevents affordances for follow-up.

Some students found the design of the second semester medical school learning environment made it difficult for them to access relevant practice opportunities after the FOSCE. For example, after the FOSCE, one student wanted to work on their approach to handling unexpected situations, which they had trouble with during the FOSCE. However, they had trouble doing so because similar opportunities were not readily available in courses in the second semester of medical school. "Yeah, it's definitely been something that I have tried to like, do in family practice, but even still, it's hard to…not many people that I've encountered are, kind of, randomly throwing things in. They've made an appointment with their preceptor for a very specific thing, and so I can ask like, 'is there anything else you'd like to talk about today'…it's, I guess, more of a difficult thing to practice." (P15).

In contrast, some students identified the spiral course design they experienced in classroom-based courses like clinical skills as facilitating further opportunities to practice the skills assessed on the FOSCE on which they wanted to improve. "Clinical skills this semester is kind of a repeat of the clinical skills we did last semester with a few more things and a little bit more detail. So I get a chance to implement the feedback that I got [from the FOSCE]…" (P10).

In addition, some instructors influenced students' access to relevant practice opportunities in the second semester of medical school. For example, some created opportunities to practice skills using an OSCE-style format, which was particularly valued by students who were interested in prioritizing areas of improvement that they perceived to be specific to OSCE-style assessments. "So this term [after the FOSCE], like we've been doing it in like that kind of OSCE way, and so this term I've had like some preceptors like for cardiac and resp[iratory] in specific, they're like 'okay, we're going to you know like do this OSCE-style'…" (P21).

In addition to students' access to relevant practice opportunities after the FOSCE, students' skill in time management and task prioritization interacted with the concurrent tasks in the learning environment after the FOSCE to influence the extent to which they could implement adjustments. For example, some students could not simultaneously attend to implementing adjustments from the FOSCE and focus on the second semester curriculum.
They decided to prioritize more pressing demands in the second semester curriculum over setting aside time to seek out or create further practice opportunities that were not part of that curriculum to address areas of improvement identified through the FOSCE.

Like so for example with the [F]OSCE, I would love to spend half a day or a day every week practicing what I learned that week, but due to other priorities like keeping up with classes because of the volume of stuff they decided to introduce, keeping up with family practice, keeping up with Friday assignments or modules. So doing all those things it kind of almost hinders learning in some respects, because there's so much we have not enough time to actually spend on certain things that we would like to spend more time on. (P11).

In sum, interactions between student factors (e.g., desired information and opportunities, time management skills and task prioritization) and factors in the learning environment (e.g., concurrent tasks and availability of practice opportunities) influenced the extent to which students could access further practice opportunities to adjust learning. Note that additional practice opportunities often had an instructor present and, thus, also provided potential opportunities for students to seek information they desired if they felt comfortable doing so.

3.2.3.6 Section summary

To monitor performance and adjust learning approaches or understanding of OSCEs in a manner that helped students work towards their personal objectives, students relied on multiple sources of information including information they generated through reflection in and on action and information from external sources. Some students also relied on additional practice opportunities after the FOSCE to implement adjustments.

Variation in students' perception of the credibility of the information led to variation in the sources of information they used. Students also showed variation in the types of information and resources they desired to work towards personal objectives. The extent to which students could obtain the information or practice opportunities they desired depended on interactions between factors related to the student, FOSCE design, and the broader learning environment.

Regarding information from reflection in and on action, the extent to which students could access information depended on the perceived risk of practice opportunities, time management skill and task prioritization, and attention and memory capacity, as well as the availability of formal practice opportunities and the existence of concurrent OSCE and curricular tasks.

Regarding information from external support opportunities after the FOSCE, the extent to which students could access information depended on the information students desired, the perceived risk of external support opportunities, time management skill and task prioritization, and attention and memory capacity, as well as the availability of formal external support opportunities, opportunity for dialogue, the mode and number of modes of external support opportunities, and the timing of external support opportunities.

Finally, regarding further practice opportunities after the FOSCE, the extent to which students could access such opportunities depended on what practice opportunities they found desirable and students' time management skills and task prioritization, as well as the existence of concurrent demands and the availability of practice opportunities.
3.3 Chapter summary

The overall research objective guiding completion of this study was one of enhancing understanding of the factors first-year medical students report as influencing their learning through OSCEs. This chapter focused on students' experiences learning in the context of a FOSCE, the first of two assessment contexts examined. Overall, this study showed that personal objectives, the strategies students chose to address their personal objectives, and interactions between student, FOSCE (assessment) design and learning environment factors all influenced students' efforts to enact learning strategies as well as the focus of those efforts.

Personal objectives included improving or achieving both a desired level of performance and a positive emotional state, but the extent to which students interpreted the FOSCE as an opportunity to address those objectives in a context outside of the FOSCE depended on the perceived similarity between the FOSCE and those other contexts. The strategies students selected to work towards their personal objectives aligned well with Gielen et al.'s39 model of assessment effects. Before, during, and after the FOSCE, however, interactions between factors related to the student, assessment design, and the broader learning environment all influenced whether or not students felt able to engage in such strategic learning cycles and, hence, the extent to which students believed they could learn through participation in the assessment.

In each case there was considerable variation of perspective across students, suggesting that a one-size-fits-all approach regarding optimization of learning through assessments may not be achievable. For example, some students preferred the formal practice and external support opportunities in the FOSCE over other learning opportunities because they perceived these as having a lower risk of negatively impacting their progress towards personal objectives. Others worried more about the risk inherent in not doing well on the FOSCE even though it was labelled as being for formative purposes by the education program. What additional factors and insights arise from considering the same issues in the context of a summative OSCE will be the focus of the next chapter.

Chapter 4: SOSCE study results

Figure 14: Factors influencing first-year medical students' regulation of learning in OSCEs

In the last chapter, I began to flesh out the details of a model describing the factors influencing first-year medical students' learning through OSCEs based on students' experiences surrounding a FOSCE. In this chapter, I further develop this model by exploring students' experiences learning in the context of a SOSCE. The first-year SOSCE was chosen as a second assessment context because existing research suggests that the label assigned to an assessment is an important design factor that influences students' learning through assessments. Therefore, this second OSCE context was thought to offer a similar but contrasting situation that creates the opportunity to broaden understanding of the range of factors influencing students' learning through OSCEs. It is not the intention of the current research to claim any factors as unique to only the FOSCE or SOSCE context. Rather, the overall model, based on students' experiences learning through the FOSCE and SOSCE, is represented in Figure 14.
As a reminder, the general structure of the model was developed based on two sensitizing concepts: Butler and Schnellert's Self-regulation framework and Gielen et al.'s model of assessment effects. From Butler and Schnellert's model, key influences include cycles of strategic action, students driving their engagement, and interactions between student factors and other contextual factors. From Gielen's model, the key influence is recognizing learning through assessments as a process that can take place before, during, and after the assessment. Through exploring students' experiences in the FOSCE and SOSCE, the goal is to develop a more expansive understanding of factors that might influence the personal objectives students have and the strategies they use to pursue those objectives and, hence, influence learning. The purpose of discussing variations in students' experiences within or across assessment contexts is to explore the extent to which the model allows a common framework to accommodate variations in experiences.

To organize this chapter, similar to the previous chapter, I will first discuss students' task interpretation and personal objectives. I will then discuss how interactions between student, assessment design, and environmental factors influenced students' enactment of strategies to work towards their objectives. For the sake of brevity and clarity, this chapter will focus on additional insights gained from exploring students' experiences learning through the SOSCE. These findings are integrated into the model developed in the last chapter based on students' experiences through the FOSCE (see Figure 14). Factors that were already discussed in detail in the last chapter will be touched on only briefly in this chapter to re-orient the reader to the model. As findings are presented, relevant components of Figure 14 will be highlighted using boxes with dotted lines.

4.1 Personal objectives for learning in the context of the SOSCE

Through the SOSCE, similar to the FOSCE, students had personal objectives they wished to pursue related to improving their emotional state or performance in a variety of contexts including the SOSCE itself, future OSCEs, and clinical work environments. As a reminder, students' personal objectives were sometimes inferred from their actions and behaviours through the assessment. For example, one student wanted to maintain positive emotions in the SOSCE itself by "go[ing] into each station with confidence." (P24).

Another student valued the practice opportunity provided by the SOSCE because he anticipated more OSCEs during his training. The SOSCE, therefore, was considered valuable because it gave "more experience doing OSCEs because I know it will be a component of how we're tested for the next like four years…" (P6).

Similar to the FOSCE, students' interpretation of the extent to which they could address their personal objectives through the SOSCE continued to depend on their perception of the relevance of the SOSCE to the context in which their personal objectives were set.

For example, some students felt that the FOSCE was not like (or relevant to) real OSCEs and were looking forward to the SOSCE to experience a real OSCE. "Well I guess the first one I knew it was formative or whatever so I maybe didn't take it as seriously as I could've or not really like the real thing so it was nice to actually experience the real thing for once and see what it's really going to be like." (P13).
With respect to personal objectives set in the clinical work context, students maintained personal objectives for clerkship and for work in the clinical workplace. For example, one student's goal was to be confident in their physical exam skills in the SOSCE because these are important skills for a physician. "…the physical exam and being sort of confident with it, that was sort of my main goal of going into it…they're just important skills to know as a clinician." (P5).

Again, students' personal objectives appeared modifiable throughout an assessment, such that they were not necessarily the same before and after. This could occur because students' understanding of the contexts in which their personal objectives were set remained dynamic between the FOSCE and the SOSCE. This meant that variation in the extent to which students could address personal objectives through the SOSCE and FOSCE could arise due to variation in their understanding of contextual relevance.

For example, during the FOSCE, one student felt limited in using the assessment to work towards personal objectives he held for his clinical work because he saw differences between the two contexts, which he attributed to OSCE assessment strategy. However, more clinical exposure in the second semester of medical school led him to understand that the components of performance he interpreted as "assessment strategy" were actually used in clinical work. In other words, such refined understanding led this student to interpret the SOSCE and information from the SOSCE as more relevant to the clinical work context once he had greater exposure to clinical work.

I've realized in doing more clinical work [since the FOSCE] is that those tips and tricks that they teach you like yeah they're kind of just teaching to the OSCE but a lot of the times the tips and tricks that they teach you and it's like you need to comment on these pertinent negatives and these random things that you're finding or not finding and it's like yeah they tell you that because they're actually important…I think the disappointment that I felt in December was that I felt like the [F]OSCE was just another exam that you could practice these tips and tricks…If you know how to study and how to prepare for an OSCE, you'll do better but that doesn't mean that that's not important information. The things that you need to, the weird tips and tricks that you need to know to do well on an OSCE are also important tips and tricks to doing well in clinical practice…So my opinion on that has definitely changed…Yeah I think they [marks from SOSCE] do mean more now and again that's from talking to physicians and doing more clinical work, whereas I think I realize now that the stuff in the OSCE that they are kind of testing you on in the OSCE is actually important stuff that you should learn. I find that now I think the [S]OSCE marks are more meaningful than what I thought they were in December [the FOSCE]. (P10).

With respect to the types of personal objectives, similar to the FOSCE, students desired having a positive emotional state and working toward their desired level of performance when they encountered the SOSCE. Regarding emotional state objectives, improving or maintaining wellness seemed to be more prominent in the SOSCE than it was in the discussion with participants after the FOSCE.
The desire to maintain wellness (an emotional state) was not explicitly identified by students as a personal objective in the context of the SOSCE, but its influence could be seen in students' actions after the SOSCE (e.g., not engaging in adjustment of learning). Factors that might account for this will be discussed later in the chapter when strategies students used to address personal objectives are considered.

Personal objectives students had related to improving performance (or achieving a desired level of performance) included wanting to pass the SOSCE, wanting to "do well", or wanting to perform better than peers. Because of the consequences associated with not passing the SOSCE, students sometimes expressed a multiplicity of desires with respect to the level of performance, such as wanting to pass the SOSCE, but also wanting to perform at a higher level. For example, when talking about her preparation, one student was focused on performing better than passing but was also concerned with not failing the assessment. "…I did [prepare] more than [to] pass. But I think there's a mark—as soon as there's a possibility of failing it pops into your head. You want to make sure that you don't fail. It's not that I didn't want to strive for higher." (P9).

Other students focused on improving their level of performance through the SOSCE, by identifying areas of improvement to inform adjustment of learning.

I mean it's fairly easy to pass an OSCE so I don't think– that's not really my– like of course, I want to pass but for me it's more about testing myself and seeing if I can actually do this…I'm not so much concerned about what the medical like– what med school thinks about me doing badly, I'm more concerned about "Okay, can I do this in the real world one day?" Like "Where do I need to improve my skills?" (P25).

In some cases, students had goals that were oriented towards both formative and summative functions. For example, one student wanted both to pass the SOSCE (more consistent with a summative function) and to learn through the SOSCE (more consistent with a formative function). As another example, one student discussed how "I think as we discussed the ultimate goal is to learn but there's also the added layer of I do need to pass. But again, I'm striving to learn either way." (P9).

Wanting to improve or maintain wellness was sometimes at odds with working towards other objectives. When students perceived there to be academic consequences associated with not achieving their desired level of performance, such as not being able to progress in their training if they did not achieve a passing mark, some students prioritized accomplishing performance objectives instead of emotional state objectives. In contrast, when students perceived there to be minor or no consequences associated with not achieving their desired level of performance, some prioritized emotional wellness over working towards their desired performance. Examples of these competing agendas will be described later in the sections discussing strategies students applied to work towards their personal objectives.

4.2 Planning and enacting strategies to work towards personal objectives

Overall, students continued to use the strategies outlined in the FOSCE results chapter to work towards their personal objectives.
In addition, some students expressed additional strategies to address emotional state objectives (e.g., improving wellness), which conflicted with the strategies used to address performance objectives. Therefore, prioritization of one type of personal objective appeared to have influenced the extent to which students could work on another type of personal objective before, during, or after the SOSCE.

As mentioned in Chapter 2, differences in the assessment design between the FOSCE and SOSCE designated by the medical program included the presence or absence of academic consequences, the scheduling of the assessment, the characteristics (e.g., the amount) of content assessed, and the design of formal external support opportunities. Some students identified these factors as creating variation in their learning in the context of the SOSCE relative to the FOSCE. However, there were other differences in the FOSCE and SOSCE related to student factors, environment factors and other assessment design factors that also led to variation in learning both within and across these assessments. In the following sections, I will use the chronological separation of Gielen et al.'s categorization of assessment design, as before, to highlight each strategy students used and the interactions between factors that shaped the extent of their strategy enactment.

4.2.1 Working toward personal objectives by preparing for the SOSCE

Prior to the SOSCE, students continued to use preparing for the assessment as a strategy to work towards objectives related to their desired emotional state and desired level of performance. For example, one student was motivated to prepare because she wanted to achieve her desired level of performance of passing the assessment and to achieve positive emotions such as being confident. When asked why she prepared for the SOSCE, the student responded,

Well, one thing is you know, we need to pass the exam. So clearly some level of studying has to happen for that to come true. But also I think, I just, I like feeling confident in the clinical situations where I need to do a physical exam. So in those sort of stratifications of family practice when I want to do an abdominal exam it makes me feel good and I think it makes the patient feel good as well to know steps of the exam. (P5).

In the following section, I will discuss how interactions between factors related to the student, assessment design, and the environment influenced students' preparation and how differences in some of these factors between the FOSCE and SOSCE led to differences in students' preparation across the two OSCEs (Figure 15).

4.2.1.1 Factors influencing preparation for the SOSCE

Figure 15: Factors influencing students' preparation for the SOSCE

For some students, the academic consequences associated with not passing the SOSCE influenced their plan for preparation and was their motive for prioritizing performance objectives. For example, "I think April OSCE [SOSCE], because it was suddenly a summative as opposed to formative evaluation, there was more pressure to know it because this time you could technically fail. So, I would say that maybe it made you consolidate [prepare for] the information just a little bit more." (P9).

Another way students perceived academic consequences as influencing their preparation was through prioritization of personal objectives when they encountered concurrent tasks in the learning environment.
For example, one student had a concurrent extracurricular task (an environmental factor) he had to attend to while preparing for the SOSCE and he could not manage his time (a student factor) to attend to both tasks. Because of the academic consequences associated with the assessment, he prioritized preparing to work towards performance objectives over sleeping to maintain well-being (an emotional state objective) prior to the SOSCE. In contrast, had the OSCE been perceived to not have academic consequences, he would have prioritized maintaining wellness. This example also demonstrates that, in some cases, improving performance and improving emotions require different strategies given that the former encourages preparation and the latter may restrict preparation.

So, let's say for example, if the OSCE was formative, I would prepare enough to feel comfortable that I'm going to do an adequate job on the OSCE, but say for example the [a concurrent extracurricular activity] thing was going on where I was staying up till three four o'clock in the morning and working on that. Instead of setting my alarm for say eight the next morning getting up and then working on OSCE stuff with friends, I would get like my rest for example and just, like that's a small example. But in terms of time allocation, I would certainly prioritize something based on formative [meaning containing no academic consequences] and summative [meaning containing academic consequences], and that's I know not how it's supposed to be, but at the end of the day that is how it is. (P17).

That said, some students did not perceive the presence of academic consequences as influencing their preparation because they prioritized performing to a similar desired performance level (or to experience similarly positive emotions) regardless of whether the OSCE was labelled as formative or summative. These students generally perceived there to be consequences associated with not performing well on the assessments. For example:

I know they label everything as summative or formative but I don't really think of it like that. Just because, I don't know, I think everything is part of your training and it counts even if they say it doesn't count…I mean, if you did really poorly on an exam that's formative– like I'm not sure how much they [the program] care but like for myself, I wouldn't want that for myself. Just because that would probably make me feel bad and, I don't know, you kind of want to do well, just for your own pride and for your own learning…I just think it's part of the training and there's a reason why they're getting you to do an OSCE. Probably because they want you to learn something. So I think it's important to put your effort into it. (P25).

For some students, the presence of academic consequences also influenced their preparation by inducing greater negative emotions. Experiencing more negative emotions while anticipating the SOSCE than the FOSCE led them to prepare more for the SOSCE. "…you're more stressed [about the SOSCE] because you're worried about not passing…a certain level of stress is good I think because it gets people to do things more efficiently and just force you to learn [through preparing]." (P20). This quote helps to illustrate how performance and emotional state objectives can be entwined with one another.

It is important to note that academic consequences were not the only factor that negatively impacted students' emotional state.
As evidenced by the FOSCE results section, negative emotions were not exclusive to the SOSCE (i.e., students could feel negative emotions during the FOSCE as well). As a result, some students prepared similarly for the FOSCE and the SOSCE to improve their emotional state, including a desire to feel comfortable, despite the apparent academic consequences differing.

I mean at the end of the day if I thought I was going to fail a formative OSCE I also wouldn't feel comfortable with that. I mean that's a personal thing like I wouldn't be able to walk in and do something and just be like, yeah you know what, today I'm just going to fail this one, because it doesn't matter, like I still prep for it. (P17).

In sum, interactions between student factors (e.g., emotional responses to the presence of academic consequences and time management/prioritization skills), assessment design factors (e.g., the presence of academic consequences) and environmental factors (e.g., concurrent tasks) influenced the extent to which students prioritized performance objectives over emotional state objectives. In some cases, improving performance was at odds with improving emotion because they required conflicting strategies, with the former requiring preparation and the latter requiring limiting preparation. It was not the presence of academic consequences in the SOSCE relative to the FOSCE, therefore, that led to variation between students in terms of their prioritization of personal objectives (and differences in their preparation), but differences in students' perceptions of consequences.

Beyond academic consequences, characteristics of the content assessed also influenced students' preparation. Some students perceived differences in the content assessed between the FOSCE and the SOSCE, including differences in the complexity, type, difficulty, and the amount of content assessed, which led them to prepare differently than how they prepared for the FOSCE. For example, one student studied more for the SOSCE compared to the FOSCE because there was more content assessed on the former. "I don't think I studied like specifically harder or like spent more time studying for the summative one…absolutely I studied more because there was more material to learn but I don't think I proportionally studied more for the summative exam." (P19). Therefore, students' personal objectives (a student factor) interacted with the characteristics of the content assessed (an assessment design factor) to influence their preparation, leading to variation in students' preparation between the FOSCE and the SOSCE.

Some students also perceived differences in the scheduling of the FOSCE and SOSCE as influencing variation in their preparation. For example, because the SOSCE took place at the end of the school year, one student wanted to achieve a higher level of desired performance compared to the FOSCE, which led her to prepare more for the SOSCE than the FOSCE. "I felt like at the end of the whole year we should know more than what we knew at December." (P3). This example highlights the dynamic and flexible nature of student factors across time.

Students who perceived a relationship between the FOSCE and SOSCE could use the FOSCE as guidance regarding how to adjust their preparation for the SOSCE, which was one of the medical program's intended purposes for the FOSCE. Students who believed that they could achieve their desired level of performance on the SOSCE by doing so prepared in similar ways for the SOSCE and the FOSCE.
For example, one student was satisfied with her level of performance on the FOSCE, so she assumed a similar approach would help her achieve similar results on the SOSCE. “I would say that I was kind of trying to prepare less [for the SOSCE] than I kind of felt necessary because I tend to over-prepare for things and given how well the earlier [F]OSCE went, I was like, ‘Oh well maybe I don’t need to actually kill myself preparing for this and it’s more important to focus on other stuff.’” (P13).   Some students reported a better understanding of OSCEs when they encountered the SOSCE compared to when they encountered the FOSCE. This better understanding led some students to adjust their approach, including preparing more for the SOSCE because they perceived that adjustments would improve their performance. For example, when a student was asked why she prepared more for the SOSCE than the FOSCE, she responded, Because I felt like I actually knew what I was supposed to do [for the SOSCE]. Like, with the December one, going into it, I was like, I don’t even know what an OSCE is. And then, I was like, okay, I’ve gone through an OSCE. I know what it’s supposed to be, so I could put more effort into it, and my effort will have more– will bear more fruit. And also, we just had so many more skills to learn. So I had to spend more time on it, and I was more interested in it. (P24).  Therefore, differences in students’ understanding of assessments, such as how to prepare to achieve their desired level of performance (a student factor) interacted with the scheduling of the 124  assessment (an assessment design factor) to prompt variation in students’ preparation, depending on whether students perceived a need to adjust their preparation between the assessments.   4.2.1.2 Section summary In sum, students continued to use preparing for the SOSCE as a strategy to work towards personal objectives. Interactions between student factors (e.g., students’ personal objectives, response to the presence of an assessment and to the presence of academic consequences associated with an assessment, understanding of the relationship between the FOSCE and the SOSCE, and time management skills and task prioritization), assessment design factors (e.g., academic consequences, characteristics of the assessed content and assessment scheduling) and environmental factors (e.g., concurrent tasks) shaped students’ preparation. Differences in these factors between the FOSCE and SOSCE sometimes led to variation in students’ preparation between the two OSCEs and differences between students sometimes led to variation within the SOSCE.   Of special note, improving students’ emotional state before the SOSCE was sometimes at odds with improving their emotional state or performance during the SOSCE because the former required students to limit their preparation for the SOSCE and the latter required preparation for the SOSCE. This conflict was particularly evident when interactions between concurrent tasks in the environment, students’ time management skills and task prioritization, and students’ response to academic consequences associated with the assessment led some students to prioritize performance objectives over their emotional state before the SOSCE.   125  4.2.2 Working toward personal objectives through skills practice during the SOSCE and factors influencing strategy use Similar to the FOSCE, students used practicing their skills during the SOSCE as a strategy to work towards their personal objectives. 
For example, one student used the SOSCE as a practice opportunity to work toward her desired level of performance. “I guess it [the SOSCE] helps because I want to be proficient at doing physical exams so that was like a good opportunity to practice.” (P6). Interactions between student factors and assessment design factors influenced students’ strategy enactment (Figure 16).   Figure 16: Factors influencing skills practice in the SOSCE  Students continued to value the formal opportunity afforded by OSCEs to practice their skills. Though some students might have found the SOSCE to be a higher risk practice opportunity compared to the FOSCE because of the academic consequences associated with the former, students saw and valued other aspects of the opportunity as lower risk compared to some practice opportunities in the learning environment, such as having the chance to perform skills without peers present.    126  I think I like the format of the OSCE. How it’s just one examiner, one patient and myself as the student. In the room I find sometimes when we have the tutor, the patient and four of us in the room for clinical skills, I find it a bit distracting to have my other classmates there when I’m trying to perform a physical exam skill or history taking or maybe I feel more self-conscious. So I kind of like having it just be myself as the student in the OSCE room. (P5).  Students identified differences in the assessment design between the FOSCE and SOSCE, such as differences in examiner behaviour, as influential in their strategy enactment. In the SOSCE, some examiners interrupted students during the station, not allowing them to perform all the skills they perceived to be necessary. “…I was under the assumption that like organization is so important that you have to like do this step before that step. And then like being focused on that kind of threw me off on the actual exam [SOSCE] because they [SOSCE examiner] would tell me to stop and move on and then that would be like, okay I was in the middle of that step.” (P23). As a result, some students did not get to practice specific components of skills, thus limiting the extent to which they could use the SOSCE to practice their skills as a strategy to work towards their personal objectives.  Through these observations it became clear that interactions between student factors (e.g., perceived risk of engaging in practice opportunities) and assessment design factors (e.g., examiner behaviour and a formal practice opportunity) shaped students’ use of skills practice as a strategy to work towards their personal objectives during the SOSCE. Differences in these factors between the two assessments sometimes led to variation in students’ use of skills practice as a means towards addressing their personal objectives.  127  4.2.3 Working toward personal objectives through monitoring performance and adjusting learning or understanding after the SOSCE  Similar to the FOSCE, some students continued working on personal objectives related to improving their emotional state and performance after the assessment by completing cycles of strategic action regarding their SOSCE experience that entailed performance monitoring and adjustment of learning or understanding. Although performance monitoring and adjustment of learning or understanding mainly occurred after the SOSCE, events that took place before and during the SOSCE influenced students’ engagement afterwards. 
Note that the term “after the SOSCE” can refer to after performance in a single SOSCE station or after the entire SOSCE.  Similar to the FOSCE, interactions between factors related to the student, the assessment design, and the broader environment that students learned and lived in influenced their use of performance monitoring and adjustment of learning or understanding. Therefore, differences in these factors between the FOSCE and SOSCE could lead to differences in monitoring and adjustment of learning or understanding across the two assessments. Key factors will be discussed in the following sections.  4.2.3.1 Factors related to students’ understanding of the assessment context  Figure 17: Factors related to students’ understanding of the assessment context influencing adjustment of learning or understanding  Students’ understanding of the content assessed, a component of their understanding of the assessment context (a student factor) interacted with assessment scheduling and the characteristics of the content assessed (assessment design factors) to influence students’ perception of the necessity and the urgency of adjusting learning or understanding. This interaction similarly influenced the content on which they chose to focus when adjusting their learning or understanding (Figure 17). Such variations were seen between the FOSCE and SOSCE and within the SOSCE itself.  As previously mentioned, one difference between the SOSCE and the FOSCE was when each OSCE was scheduled. For students who perceived a relationship between the FOSCE and the SOSCE, their understanding of OSCEs (a student factor) interacted with the scheduling of the assessment (an assessment design factor) to influence their engagement in adjustment of learning or understanding. Going into the SOSCE, some students did not feel it was necessary to use adjusting or refining their understanding of OSCEs as a prominent strategy to work towards their objectives because they believed they had already refined their understanding of OSCEs through the FOSCE and, thus, there was not much new to learn after the SOSCE. “Yeah I think definitely the December one [FOSCE] was like, ‘I need to learn how this exam works and how to perform in this exam’ but then when it came to April [SOSCE]…I don’t think I really had any learning goals in mind for the OSCE in April [SOSCE].” (P10).  That said, some students still claimed to have learned they had misconceptions about OSCEs as a result of the SOSCE. “I think probably I thought that I had probably learned everything I could from December [FOSCE about OSCEs] but obviously that’s just not true in hindsight.” (P10). In response, these students further refined their understanding of OSCEs to strategically work towards their objectives. For example, based on prior experiences, including the FOSCE, the student from the previous example had understood OSCEs to be a clinical skills oriented assessment and did not expect content assessable on written exams to show up on the SOSCE. As a result of his SOSCE experience, he adjusted his understanding of OSCEs to include a broader scope of assessable content. It was the history station with the thyroid condition and I was really unprepared for thyroid. I hadn’t really properly studied.
The thyroid was like the easiest week out of the endocrine block [of classes] that we did, thyroid week was by far the easiest so I was like, “I will study that two days before the [written] exam and then it’ll be fine.” I did not expect to see it on the [S]OSCE and I don’t really know why because it’s perfectly reasonable they would put something we had learned about in a lot of detail in the OSCE for a history session. (P10).  Students’ understanding of the content assessed (a component of their understanding of the assessment context) also interacted with the assessed content (an assessment design factor) to influence how urgent students felt they needed to make adjustments in learning or understanding after the SOSCE. For example, one student who believed the assessed content would be covered at a later point in the curriculum (i.e., they would have further learning opportunities to develop relevant skills) did not believe it was urgent for them to plan and implement adjustments to address deficits after the SOSCE and, thus, tended not to implement adjustments on their own. So I guess part of it [not working on deficits] is because we didn’t do a lot of counselling in first-year [skill assessed on the SOSCE], I figured that would be something that we got more of in second year. And like through clerkship and everything and from talking to like upper years that is kind of the feeling that I’m getting, so I wasn’t too worried about it, because I feel like that’s the direction … Like I’ll get more training in it in second year and third year and so on. (P16).  In contrast, other students planned and implemented adjustments on their own in the summer after the SOSCE because they assumed that the content assessed on the SOSCE would not be addressed in the curriculum again. When one student was asked how he was working on the skills from the SOSCE, he responded, “through shadowing and then also just a little bit of occasional review…I know next year [year-two] the second that the first day it starts then you’re already learning new things. At least summer was a little bit of a block at times where you could kind of focus and think about some of the things that you’ve learned the year before.” (P17).  Students’ understanding of the assessed content also interacted with the assessment scheduling in a manner that influenced the urgency students felt to adjust learning. As mentioned in the FOSCE study, students anticipated they would need to use the skills assessed on the FOSCE soon in the SOSCE that took place four months later, which led them to experience urgency to understand and implement adjustments in their learning or understanding. By comparison, after the SOSCE, which was followed by a one-month project course and three months of summer vacation, some students reported feeling less urgency.
So the December one [the FOSCE] I was like, “Okay I need to for sure go in here and pay attention to what errors I’m making and stuff so I can apply it to the April one [the SOSCE].” Whereas I guess I didn’t really go into the April one [the SOSCE] thinking like that even though I probably could apply for next year…Yeah it [the SOSCE] was the last thing so it was like, “Okay, let’s just go do it.” I don’t really need to be open to what you’re not doing and paying attention to things you can improve. (P13).  In contrast, some students did feel urgency to adjust learning or understanding because they perceived the amount of time between the SOSCE and the need to use their knowledge and skills as short despite it being objectively longer than the period of time between the FOSCE and the SOSCE. For example, even though clerkship (when students would need to use skills assessed on the SOSCE) was approximately one year away, one student perceived that it was a relatively short period of time and, therefore, felt like it was important for him to work on the skills (i.e., implement adjustments) over the summer. “...over the summer I realized that clerkship is like less than twelve months away and then so it was like, I actually need to know this stuff [content assessed on the SOSCE] and I need to perform well.” (P10).  Students’ decision about what aspects of their performance they should prioritize improving was also dependent on their understanding of the assessed content. For example, one student perceived that it was easier to implement adjustments identified as necessary from the SOSCE compared to the FOSCE because his physical examination skills (mainly assessed on the SOSCE) were less ingrained compared to his communication skills (mainly assessed on the FOSCE).  Because if someone says like, don’t forget both appendicitis signs, you just won’t, but if someone says that, when you’re building a rapport, you’re sort of like the interaction feels cold for example, I don’t know if that’s feedback that someone’s got. But I could imagine that feedback would be a lot more difficult to incorporate…Because I think that some of the communications skills, you’re bringing a lot of past history through those skills, so the skills that I have for example and the skills that you have is sort of a culmination of interactions that we’ve had over the past whatever years it’s been. And that’s sort of, part of it is our personality, part of it is our disposition all of those kind of things. And if that doesn’t align as well with the patient, it’s harder to make changes to that set of traits than it is to add something to your physical exam or work more, workshop your physical exam and something that you’ve just learned like a couple of months ago and is relatively new at. (P17).  Therefore, interactions between the characteristics of the content assessed on the SOSCE (an assessment design factor) and students’ understanding of the ease of adjustment of the assessed content (a student factor) shaped their use of learning strategies. Differences in the content assessed between the FOSCE and SOSCE, therefore, sometimes led to differences in students’ adjustment of learning or understanding.
In sum, students’ understanding of the assessment context, including the ease of adjustment of assessed content, proximity to skill use, and whether there will be further learning opportunities for the assessed content interacted with assessment design factors such as the content assessed and assessment scheduling to influence students’ adjustment of learning or understanding after 133  the SOSCE. Areas of influence included the necessity of and the urgency of adjusting learning or understanding, and the content students chose to focus on when adjusting learning or understanding. Variations in the characteristics of the content and assessment scheduling between the FOSCE and SOSCE and in students’ understanding of the assessment context led to variations in students’ adjustment of learning or understanding between the FOSCE and SOSCE and within the SOSCE.  4.2.3.2 Factors influencing students’ access to information to address performance objectives when prioritizing emotional state objectives  Figure 18: Factors influencing students’ access to information to address performance objectives when prioritizing emotional state objectives  After the SOSCE, students showed variation in their prioritization of performance and emotional state objectives as well as the strategies used to work towards each type of objective. Some students who prioritized improving their emotional state might have limited their performance improvement because improving their emotional state could require students to limit their performance monitoring and adjustment of learning or understanding whereas improving     134  performance required engaging in those activities (Figure 18). In other words, some students had strategies that placed working towards performance and emotional state objectives in contradiction.   Interactions between factors related to the student, assessment design and the broader environment sometimes led students to experience more significant negative emotions in the SOSCE compared to the FOSCE (Figure 18). As previously mentioned, before the SOSCE, interactions between students’ response to the presence of academic consequences and their time management skills/task prioritization (student factors), presence of academic consequences in the SOSCE (an assessment design factor), and the presence of concurrent tasks (an environment factor) led some students to prioritize performance objectives by preparing extensively at the expense of their emotional state (e.g., wellness). Events occurring during the SOSCE itself also contributed to some students’ negative emotional state. For example, this could arise due to a mismatch in students’ understanding of the SOSCE (a student factor) and the content assessed on the SOSCE (an assessment design factor). “I guess because I went in [to the SOSCE] with this confidence. I wasn’t expecting some of the patient scenarios that we got and I wasn’t expecting, I don’t know, I guess the level of difficulty of the exam was not the same that was advertised to us. And so I think that’s why I was getting stressed out because it just wasn’t what I was expecting in the exam…” (P19).  Another example of mismatch between students’ understanding or expectation of the assessment and assessment design that negatively impacted students’ emotion during the SOSCE was the unexpected interruptions by examiners that students experienced during the SOSCE.  
135  Even during the neuro station, I think you kind of feel yourself like freaking out a little bit and especially– I think the preceptor [the SOSCE examiner] probably tries to not react and not show what they think but in a few– like a few times, you can kind of see what they think about your performance…he [the examiner] kept telling me to move on and to skip that part. And that kind of made me even more flustered. (P25).  Therefore, the greater prominence of negative emotions during the SOSCE when compared to the FOSCE arose due to multiple interactions between student factors (e.g., students’ response to the academic consequences of assessments, time management skills and understanding of the assessment), assessment design factors (e.g., characteristics of the content assessed, examiner behavior and the presence of academic consequences) and factors in the learning and living environment (e.g., concurrent tasks).  While the SOSCE may have highlighted areas of improvement to address, because the students in this study successfully passed the SOSCE and, thus, avoided the negative academic consequences, some who were not at their desired emotional state prioritized improving emotional state after the SOSCE by not engaging in further efforts to adjust learning or understanding. That is, one could prioritize emotional objectives again by treating the SOSCE as something that was in the past, without potential for further academic consequences.   Students who prioritized attending to their emotional state in the SOSCE showed variation in how they wanted to improve their emotion or maintain a positive emotional state after the SOSCE. This affected their preferences for external support and their decisions to reflect-on-action. Some students used the same information to improve their emotional state and to improve their performance (through adjusting learning or understanding) whereas other students’ approach to improving their emotion was different or at odds with the information needed to 136  improve their performance. As a result, prioritizing the improvement of emotions was sometimes detrimental to improving performance by limiting students’ access to information that could help them adjust their learning or understanding.   Regarding information from external supports, although students did not receive immediate external feedback after each SOSCE station, discussion of how they would want feedback or information to be provided in formal external support opportunities reinforced that some students prioritized improving their emotional state. Similar to reports from the FOSCE study, some students were concerned about having negative emotions while engaged in an OSCE station because it might impact performance, which in the case of the SOSCE would have put them at risk of not passing. That said, one student continued to want immediate feedback after each station like he received in the FOSCE because he anticipated that it would improve his emotional state by immediately resolving the worries he had about his performance.  Because I remember in December during that OSCE we had instant feedback [verbal feedback session after each station]. You did your thing and then you got feedback right away. I almost would’ve preferred that at the April OSCE too. I don’t know if that’s not possible because of logistical reasons or maybe they don’t want you to feel. . 
.like let’s say you had a station and you bombed it and then the person’s like, “Oh yeah, you forgot all this stuff” and you go to the next station and you feel like crap so you bomb it again. So maybe they don’t want that but I thought that it would’ve been nice and it would’ve for me definitely settled a lot of my worries and concerns about my performance if I’d gotten feedback on how I did right away. (P10).  This student also valued immediate feedback because it was an efficient way to understand how to adjust his learning to improve performance. “It would’ve been so much easier if the examiner during the thyroid station had been like, ‘You need to work on your systematic approaches to taking a history.’ That would have saved me like three hours of thinking that I did. It’s more efficient I think to have that feedback right away.” (P10). Therefore, this student provides an 137  example that suggests prioritization of emotional state objectives need not negatively impact progress towards performance improvement objectives. He wanted the feedback to be delivered in the same way to address both types of objectives.  However, other students wanted no feedback or no negative feedback (i.e., feedback about areas of improvement) immediately after each SOSCE station because they worried about it negatively impacting their emotions in subsequent SOSCE stations, thereby increasing their risk of not passing and of experiencing academic consequences. For example, when comparing their experiences learning in the FOSCE and the SOSCE, one student mentioned that she would not want feedback after each SOSCE station to minimize stress and maintain confidence in subsequent stations in the SOSCE.  I think I would prefer to go through all the stations and then have the verbal feedback in a separate round. Afterwards…So yeah, so do all the stations and then go around and get all the feedback. That would probably be best. Just because I could imagine getting feedback and then right after the station and then having to go do a station, that could be stressful or not great if you felt like you had a negative feedback and feeling less confident because of that. (P5).   Another student wanted only positive feedback after each station or for the feedback to be delayed a few weeks until her negative emotions were reduced (i.e., she wanted a “chance to cool off”). I feel like positive feedback would be good [after each station] but negative feedback would not be helpful in that situation. I mean maybe after all the stations are done, I think that would be helpful or on like the written report if they gave us actually more feedback instead of just a couple words, that would actually be more useful I think too because then it’s a few weeks after the exam so you kind of have a chance to cool off from the immediate effects or whatever and kind of reflect on it a bit. I think that might be an opportunity to actually give more elaborate feedback. (P13).   After the SOSCE, students’ focus on improving their emotional state also influenced adjustment of learning or understanding by shaping the external supports they sought. Some students sought 138  out peers as a source of external support immediately after the SOSCE because they perceived that doing so would help them improve their emotional state about their performance. 
For example, when one student was asked what encouraged him to talk to his peers about the SOSCE after it was done, he responded,  I felt that I did really poorly and I wanted to see if other did poorly as well to make myself feel better about how I did…So I actually came out of that feeling, after talking to other people –a little bit more comfortable because it felt like other people had done quite poorly as well too. Some of the people that I talked to had truly just bombed things and so in a sick way that made me feel better about myself. So yeah it made me feel a little bit better but I was definitely still concerned about my performance. (P10).  Note that even here the focus of seeking external support was on improving emotion rather than improving performance, resulting in the information students sought being different than what they needed to adjust learning or understanding in some cases. That is, focusing on improving emotions might have limited students’ access to information from external supports that could be used to improve performance. For example, one student focused on “blowing off steam” to improve their emotional state rather than talking about the “content” assessed on the SOSCE when speaking to peers after the SOSCE. Honestly, after the OSCE it’s like part of the thing that people enjoy, like part of the fun is that you just kind of hear the mess ups that people in different OSCE rooms. Like it kind of just adds a light humour to the day like, oh this person did this, and that’s hilarious and like oh this happened and wonder about oh so awkward here, that kind of stuff. But yeah, I mean, to give you the complete honest truth, we’re not really talking content as much as we are sort of just blowing off steam. (P17).  Other students were deterred from seeking out their peers as a source of external support because they thought it would worsen their emotional state. “You can tell me that I got it wrong or I tell you got it wrong or they make each other feel crappy when we should be celebrating that we actually completed one of the exams. That’s just me.” (P1). Choosing not to seek external support in order to preserve a positive emotional state might have again limited their access to information that could help them adjust learning or understanding to improve performance.  Students’ prioritization of emotional state objectives over performance objectives (and its capacity to limit their adjustment of learning or understanding by shaping access to information from external support opportunities) similarly influenced their capacity to engage in reflection-on-action. For example, one student did not want to engage in reflection-on-action after the SOSCE because of the negative experience she had during the SOSCE. “I was kind of shocked and kind of irritated afterwards where I was just kind of like, ‘Oh that was so dumb’ and I probably could’ve reflected more on it but I was just like overly frustrated.” (P13). Not engaging in, or limiting, reflection-on-action might have limited students’ access to information that would enable improved performance.  Students’ prioritization of emotional state objectives was not necessarily stable across different assessments and was not exclusive to the SOSCE. One student who focused on emotional state objectives in the FOSCE by limiting her reflection-on-action focused on improving her performance after the SOSCE by engaging in reflection-on-action in search of areas of improvement.
I think in December [for the FOSCE] I was probably so nervous about it that I didn’t reflect as much on the process. And I was kind of just so happy to have it done with. And I think in April I was able to reflect more and kind of take more from it. I think in December [for the FOSCE] I was a bit disappointed myself for not being as confident as I would like to be with the patients. And I think it’s kind of bad—it’s sort of bad to be too judgmental on yourself and I think in April I kind of had the attitude of “Okay, let’s take this as a learning experience”. And I feel like that really helps with reflection and you kind of just look at your performance and without judging yourself, you just look at where you need to improve. (P25).  140  Similarly, when negative emotions were reduced after the SOSCE, some students became willing to reflect-on-action to understand how to adjust their learning or understanding. “I just feel like immediately after the exam like I felt like and I was able to think a little bit more calmly, immediately I was able to be like okay here are the things that I like generally missed from these stations.” (P19). In this way we see that the prioritization of emotional state objectives over performance objectives may influence some students’ access to information from reflection, thereby influencing their capacity to adjust learning or understanding, but in a dynamic way.  In sum, for some students, the prominence of negative emotions in the SOSCE seemed more pronounced compared to the FOSCE. The degree to which this was the case appears to have been influenced by differences in design of the SOSCE compared to the FOSCE (e.g., academic consequences, examiner behaviour and characteristics of the content assessed) interacting with student factors (e.g., students’ understanding of the assessment, response to academic consequences of assessments and time management skills and task prioritization) and environmental factors (e.g., concurrent tasks). The greater prominence of negative emotions in the SOSCE may have led some students to prioritize emotional state objectives over performance objectives following the assessment. However, it is important to note that some students had negative emotions after the FOSCE as well; these students also chose to prioritize emotional state objectives over performance objectives. In either case, prioritization of emotional state objectives sometimes limited access to information from external support opportunities and from reflection-on-action that might have been useful to improve performance because improving one’s emotional state required different information or limited engagement altogether. Therefore, 141  variation in how students prioritized personal objectives after an OSCE might have led to variation in their adjustment of learning or understanding between and within the study contexts.   4.2.3.3 Factors influencing students’ access to information or practice opportunities when prioritizing performance objectives In the prior section, I discussed how some students’ prioritization of improving emotional state objectives over performance objectives influenced the extent to which they could improve performance by influencing their access to information required to adjust learning or understanding. However, similar to the FOSCE, some students continued to focus on improving their performance after the SOSCE through completing cycles of strategic action. 
For example, after the SOSCE, one student who wanted to improve performance in the future adjusted his learning approach by documenting steps he forgot on the SOSCE, so he would know how to focus his future learning.  As soon as I left the OSCE I wrote down three or four things that were clinical things that I forgot that I can’t forget in the future. But I just jotted them down and then I just kept that…So for example like, like when I knew it was appendicitis, but then I was trying to think of the appendicitis signs, I could only think of one, but I knew there were two, I just couldn’t remember the second one. So for example, I wrote down that, the second one. Just the little things like that for example. (P17).  Students also sought ways to improve their performance by adjusting their understanding of the OSCEs through their SOSCE experience. For example, one student refined her understanding of the scope of assessable content when experiencing a mismatch between her understanding of what was assessable and what was included in the SOSCE. I guess going back to the thyroid example and the cholesterol [stations on the SOSCE], so that was never really taught in our clinical skills but that was kind of just from our general—it came up at some point in the year so I guess that kind of just showed me like…even if something is not directly taught in clinical skills then they can still test 142  whatever we learned in the other things. So that kind of just, I think for future OSCEs I’ll remember that. And I’ll make sure that I’m up to date on major diseases. (P25).  Another example of adjusting understanding after the SOSCE occurred when students adjusted their understanding of to whom they should listen to develop their understanding of the expectations of an OSCE. “Well, I definitely want to take advice that I get with a grain of salt ‘cause it wasn’t anything like I thought it was going to be and then that was very stressful.” (P19).  As previously mentioned in the FOSCE results section, to monitor performance and adjust learning or understanding, students depended on information from reflection in and on action, information or feedback from external support opportunities and, in some cases, additional practice opportunities. Interactions between factors related to the student, assessment design and the environment students learn and live in influenced their access to information and opportunities and, thus, the extent to which they could work towards performance objectives. (Figure 19). Differences between student, assessment design or environmental factors, therefore, could lead to variation in students’ performance monitoring and adjustment in learning or understanding.  143   Figure 19: Factors related to students’ access to information or practice opportunities when prioritizing performance objectives  As mentioned in the FOSCE study, reflection-in-action was a source of information students used to monitor performance and adjust their learning or understanding. Also mentioned was some students’ perceived difficulty attending to concurrently performing in the station and reflecting-in-action. In the SOSCE, some students experienced additional challenges attending to reflecting-in-action while performing.   As previously mentioned, some students responded with negative emotions to the academic consequences associated with the SOSCE. 
In response to the negative emotions, some students went into the assessment planning to focus their attention on performing in the SOSCE instead of focusing on how to improve performance through reflection-in-action. Note that this action did not seem to be aimed at improving students’ emotions so it was different than students prioritizing their emotional state objectives. “…I think I took the December one for more a learning experience overall. Whereas the April one is more of a performance thing. I wasn’t really going into it thinking I was going to learn stuff from it. Whereas the December one I was like, ‘Okay, let’s go learn what OSCE’s like and how I can do better next time.’” (P13).  When asked why she was thinking more about learning in the FOSCE, she responded,  Because it wasn’t for marks. That’s the main reason I think…I think I was probably just less stressed about it because I knew it wasn’t going to count for grades so I kind of thought of it more as something where I could learn from it rather than something where it was like super stressful that I have to perform really well on and it’s just kind of, “let’s go see how you do and go from there.” (P13).  Therefore, students’ response to the academic consequences associated with the SOSCE (a student factor) interacted with the academic consequences themselves (an assessment design factor) to lead some students to experience negative emotions.  However, even when students went into the SOSCE intending to engage in reflection-in-action, some who experienced negative emotions had difficulty attending to reflection-in-action because they were distracted by their emotions. For example, as previously mentioned, mismatch between students’ understanding or expectations of SOSCE design and the actual SOSCE design led one student to experience negative emotions that distracted them from engaging in reflection-in-action for the sake of adjusting their learning or understanding. Whereas the April one [SOSCE], I wasn’t really learning very much during it because I was just so stressed out. So I was like less able to take away useful tidbits like during it because I was just so like overwhelmed…I wasn’t expecting some of the patient scenarios that we got and I wasn’t expecting, I don’t know, I guess the level of difficulty of the exam was not the same that was advertised to us. And so I think that’s why I was getting stressed out because it just wasn’t what I was expecting in the exam. (P19).  Similarly, some students who experienced negative emotion in response to the academic consequences associated with the SOSCE also had trouble attending to reflection-in-action.  Like, because there’s so much more stress from a summative one it might be, like, less brain capacity to think of all those things that was also learning. Versus, for the formative it’s more relaxed, so like we’re more thinking about what we are learning. But I feel like at the end though we still take quite a bit away from the summative OSCE…The stress of, like, yeah, the stress of knowing that, hey, this one is, like, for real, this is for marks, if we fail this we’re, like, we have to do another year, like, that kind of thing. (P3).
In sum, interactions between student factors (e.g., emotional response to the academic consequences associated with the SOSCE and their understanding of the SOSCE) and the assessment design (e.g., academic consequences and characteristics of the content assessed) led students to experience negative emotions. Experiencing negative emotions distracted some students from attending to reflection-in-action (students’ attention capacity), which influenced their access to information from reflection-in-action that might have been useful to monitor performance and adjust learning or understanding.  Interactions between factors also influenced students’ access to information from external support opportunities and to practice opportunities. As already described, in the SOSCE, students did not have an immediate verbal feedback session after each station. As a result, students received less feedback and generally fewer types of feedback (e.g., less narrative feedback), had no opportunity to engage in dialogue during formal external support opportunities, and experienced a longer delay before formal external support opportunities. These differences influenced the extent to which some students could use the SOSCE to work towards objectives after the exam by influencing their access to information they needed to improve performance.  For example, one student had trouble interpreting the meaning behind their marks to understand how they should adjust their learning or understanding because of the lack of narrative feedback about specific strengths, areas of improvement and how to improve. “I was able to learn [from the written grade report] that there are some stations that I didn’t do so well and I could think of a few reasons why I didn’t do so well…But again, without specific comments from the examiners, it’s difficult to really pinpoint what would be more of an ideal performance…” (P1).  Therefore, interactions between assessment design (e.g., the design of formal external supports) and student factors (e.g., students’ understanding of information necessary to monitor performance and adjust learning or understanding) influenced the extent to which students could work towards their objectives. Differences in the design of formal external support opportunities between the SOSCE and the FOSCE, for example, led to variation in students’ access to information from external support opportunities.  Similar to the FOSCE, some students depended on the broader learning environment to access external support opportunities and determine how to adjust understanding or learning or gain access to further practice opportunities to implement adjustments. However, because the SOSCE was scheduled to take place at the end of the school year, students did not have access to practice or external support opportunities as part of the curriculum after the SOSCE like they did soon after the FOSCE and, thus, were limited in the extent to which they could address performance objectives.  Students who wanted further practice opportunities to implement adjustments sometimes seemed especially dependent on the program to provide practice opportunities because the nature of the content assessed on the SOSCE made it difficult for them to create their own practice opportunities. “You can’t really, like, you can’t do those same exams. Like, even if I wanted to improve on my neuro[logical] exams, person with, like, a neuro[logical] deficit, like, I wouldn’t be able to do that on anyone.
Like, I can’t do that to my friends and family because they don’t have those symptoms.” (P3).   In response to the lack of immediate external support and practice opportunities, some students delayed thinking about refining learning or understanding based on the SOSCE until such opportunities were available again in the year-two curriculum. This pattern of action suggests that learning through an assessment does not always happen immediately and inaction after an assessment is not always reflective of a lack of interest in learning from the assessment. “I think most people have like taken the summer off from medicine but now that, I mean—I think now when I go back to my clinical skills then I’ll pick that up again and kind of remember what I wanted to work on.” (P25).  Rarely, students had access to external support and practice opportunities through extracurricular opportunities in the summer. Those who did could continue working towards their objectives by planning or implementing adjustments. For example, one student perceived that he might have had some deficits based on the global rating he received on the written grade report. However, he could not figure out what the deficits were. When he was shadowing physicians in the summer, he was able to seek clarification about deficits to understand how he should adjust his 148  performance. “But that score report could kind of tell me that, okay when I meet say, I don’t know, one of the doctors on the general surgery service, I can say, okay well, today the thing I want to focus on for me is how to structure my abdominal exam. And then at least kind of give me some guidance in that sense.” (P17).  As mentioned in the FOSCE results chapter, students’ time management skill and task prioritization interacted with concurrent tasks in the learning environment after the assessment to influence the extent to which they could attend to performance monitoring and adjustment of learning or understanding. Interactions between these factors in the SOSCE continued to have such influence. For example, even though the school year was coming to a close at the time of the SOSCE, students still had to attend to other assessments that had academic consequences associated with not passing. When students had difficulty managing multiple tasks, some prioritized preparing for other assessments over understanding or implementing adjustments in learning based on their SOSCE experience. “I mean I guess I just thought about it that day [day of the SOSCE], about the different stations and things I might’ve missed. I don’t actually specifically remember going back to my notes and looking at things that I kind of wondered about…Probably because I was just worried about studying for our next exam to be perfectly honest.” (P13).   Because the SOSCE took place towards the end of the school year, it was followed by a one month project course and three months of summer vacation. The scheduling of the SOSCE highlighted the role of interactions between concurrent demands in the environment students lived in and students’ time management skills and task prioritization in influencing their performance monitoring and adjusting learning or understanding after the SOSCE. In the summer, students also 149  had concurrent demands such as extracurricular activities or summer jobs. Students prioritized these commitments over planning or implementing adjustments from the SOSCE because they perceived that not attending to these demands would have more significant consequences. 
For example, one student chose to work full-time in the summer instead of implementing adjustments in learning based on areas of improvement identified through the SOSCE because there would be no immediate consequences if she did not implement adjustments, whereas there would be financial consequences if she did not work in the summer.  I feel that—it’s all about juggling multiple tasks, having a lot on the go so I have to somehow prioritize based on what the stake and return in investment, in a way, like how much time do I need to put in this, what’s the consequence of not doing this kind of thing…I think during summer I decided to go back to full time work to help pay for the family expenses, so that was more important for me during summer, I thought that I could catch up [on addressing learning needs from SOSCE] when school starts. (P1).  In summary, after the SOSCE, some students were focused on improving performance through accessing information and opportunities to aid performance monitoring and adjustment of learning or understanding. Interactions between student factors (e.g., emotional response to the academic consequences associated with the SOSCE and understanding of the SOSCE) and the assessment design (e.g., academic consequences and characteristics of the content assessed) led some students to experience negative emotions. These negative emotions sometimes distracted students from attending to reflection-in-action (student factor of attention capacity) and, thus, influenced their access to information.  Further, for students who valued information from external support opportunities, their access to information from such sources was influenced by interactions between assessment design (e.g., the design of formal external supports) and student factors (e.g., students’ understanding of information necessary to monitor performance and adjust learning or understanding). Access to external support and practice opportunities after the SOSCE was further influenced by interactions between factors related to the student (e.g., time management skills and task prioritization, and the information and opportunities required to monitor performance and adjust learning or understanding) and students’ learning and living environment (e.g., concurrent tasks and availability of external supports and practice opportunities). Of special note, in some cases, the lack of external support and practice opportunities led students to delay when they planned to adjust learning or understanding until a time when they anticipated such opportunities would be more readily available.  4.2.3.4 Section summary In summary, similar to the FOSCE, students continued to complete cycles of strategic action regarding their SOSCE performance by monitoring performance and adjusting learning or understanding to work towards personal objectives. Interactions between factors related to the student, the assessment design and students’ learning and living environment shaped students’ engagement. Variation in any of these factors between the SOSCE and FOSCE could have led to variation in students’ engagement in learning in the context of the two assessments.  Students’ understanding of the content assessed, which is a component of their understanding of the assessment context (a student factor) interacted with the characteristics of the content assessed on the SOSCE and assessment scheduling (assessment design factors) to influence their adjustment of learning or understanding after the assessment.
This influence included students’ perception of the necessity and the urgency of adjusting learning or understanding and the content they chose to prioritize when adjusting learning or understanding.  For some students, the prominence of negative emotions seemed to be more pronounced during the SOSCE compared to the FOSCE. This appeared to be influenced by differences in the design of the former compared to the latter (e.g., presence of academic consequences, examiner behaviour and characteristics of the content assessed) interacting with student factors (e.g., students’ understanding of the assessment, response to academic consequences of assessments, and time management skills and task prioritization) and environmental factors (e.g., concurrent tasks). The greater prominence of negative emotions in the SOSCE and the lack of experienced academic consequences (given that all participants passed) may have led students to prioritize emotional state objectives over performance objectives after the SOSCE. However, it is important to note that some students had negative emotions after the FOSCE as well and some also chose to prioritize emotional state objectives over performance objectives.  Students’ choice to prioritize their emotional state objectives after the SOSCE sometimes influenced the extent to which they could improve performance through performance monitoring and adjustment of learning or understanding. Prioritizing improvement of their emotional state had the potential to limit students’ access to information from external support opportunities and from reflection-on-action because improving their emotional state required different information or limited their engagement altogether.  Even when students tried focusing on performance objectives, the extent to which they could access information and opportunities to monitor performance and adjust learning or understanding depended on multiple interactions between student factors, factors related to assessment design, and factors related to students’ learning and living environment. Negative emotions due to interactions between student factors and the assessment design distracted students (i.e., took students’ attention away) from engaging in reflection-in-action and, thus, influenced their access to information. Students’ access to information from formal external support opportunities in the SOSCE was influenced by interactions between assessment design (e.g., the design of formal external supports) and student factors (e.g., students’ desired information). Access to further external support opportunities and practice opportunities after the SOSCE was influenced by interactions between factors related to the student (e.g., time management skills and task prioritization, and desired information and opportunities) and students’ learning and living environment (e.g., concurrent tasks and availability of external supports and practice opportunities).  4.3 Chapter summary In this chapter, I described students’ experiences learning through the SOSCE, which, along with students’ experiences learning through the FOSCE described in the last chapter, were examined to address the research objective of enhancing understanding of the factors first-year medical students report as influencing their learning through OSCEs.  The overall structure and organization of findings was influenced by two sensitizing concepts: Butler and Schnellert’s Self-regulation framework and Gielen et al.’s model of assessment effects.
As previously mentioned, the purpose of studying two assessment contexts was to broaden understanding of the range of factors influencing students’ learning through assessments. In the following sections, I will highlight additional key findings that arose from studying students’ experiences learning through the SOSCE.   Students continued to work towards similar emotional state and performance objectives in this new context and continued to set objectives based on their understanding of the relevance of the assessment to other contexts of interest. Students’ personal objectives and understanding of the contexts their objectives were set in, however, were not static and sometimes changed between the FOSCE and SOSCE. This evolution could, therefore, lead to variation in their engagement in learning at any given moment.  Some students seemed to experience more significant negative emotions in the SOSCE compared to the FOSCE due to the greater academic consequences and some students being assessed on content they did not expect. This is an important distinction because the strategy some students used to improve their emotional state conflicted with the strategy required to improve performance. This suggests that not only do students choose what objectives to pursue by taking into account many factors, but such choice can limit the pursuit of other objectives that might be similarly valued.   Most generally, however, it was observed that how well students could adopt a strategy to address personal objectives continued to be influenced by interactions between factors related to the student, the assessment design and students’ living and learning environment. Variation in 154  these factors between the FOSCE and SOSCE made these factors more apparent and again highlighted variation in strategy use across students within any given assessment. Sources of such variation included differences in interpretation of the urgency of adjusting one’s learning and the extent to which students felt able to access the information and practice opportunities they desired. Of particular note, students sometimes delayed adjustment in their learning or understanding when relevant external support or practice opportunities were not available soon after the SOSCE. This finding suggests that an apparent lack of effort to address a performance deficit does not always reflect a lack of interest in improving performance.   155  Chapter 5: Discussion  Across two linked studies, this dissertation aimed to explore the overarching research objective of enhancing understanding of the factors first-year medical students report as influencing their learning through OSCEs.  The first study focused on students’ experiences in a “formative” OSCE, completed in December at the end of the first semester, half way through the first year of medical school. The second elaborated on the understanding that emerged from that study by examining students’ experiences in a “summative” OSCE, completed at the end of their first year in medical school. Through these two studies, using CGT and two main sensitizing concepts (Butler and Schnellert’s Integrative Model of Self-regulation54 and Gielen et al.’s39 model of assessment effects), a model was developed to describe factors that influence students’ learning through an OSCE assessment context.  
For the sake of clarity and ease of discussion, this chapter will be divided into two main sections: the first summarizes the personal objectives students wanted to address before, during and after their experience with the OSCEs and the second presents factors influencing the extent to which they felt they could address those objectives.   5.1 Personal objectives  Consistent with Butler and Schnellert’s Integrative Model of Self-regulation,54 learning through an OSCE was aimed at working towards objectives students set for themselves. That is, even in the context of this program-initiated activity, students set their own priorities regarding what and 156  how they strove to learn. Students had different types of objectives (e.g., emotional state and performance related objectives) and, consistent with previous research on medical and other health professional students,45,81 their objectives were relevant to multiple contexts (e.g., the OSCE at hand, future OSCEs and clinical work). The OSCEs studied had an influence on students’ learning before, during, and after the assessment itself (whereas most research in this domain has focused on post-assessment effects7) and students reported multiple strategies for achieving their particular personal objectives.  The notion of “particular” objectives is important because students showed variation in both the objectives they held and the strategies they adopted such that they did not universally hold the same types of objectives or prioritize the same strategies. Therefore, differences in engagement across students might be reflective of differences in their personal objectives or strategies. For example, not preparing for an OSCE prior to it taking place might reflect the student not seeing preparation as a strategy that would lead towards fulfilment of their objectives whereas for another student it could mean they were limited in their capacity to prepare or that the assessment was not considered valuable for other reasons.   Consistent with previous research on goal orientation,127,128 students’ personal objectives reflected both performance goal orientation and mastery goal orientation. The purpose of a performance goal orientation is to “gain positive judgments of one’s current level of personal competence and avoid negative judgments”.129 Examples of this type of orientation from the current research could be seen when students reported wanting to outperform their peers or to pass the OSCE to avoid academic consequences. The purpose of a mastery goal orientation is to 157  “gain positive self judgements by actually increasing one’s competence”.129 An example of this orientation from the current research was seen when students indicated wanting to use an OSCE to prepare for clerkship. 
Individuals with a performance goal orientation tend to have a fixed mindset, meaning that they believe intelligence to be a fixed property.129 Students with fixed mindsets who are confident learners generally seek opportunities to demonstrate their skills whereas those who are insecure often have feelings of helplessness.129 Mastery goal orientation is accompanied by a growth mindset, meaning that individuals believe intelligence to be malleable.129 Both confident and insecure learners with a growth mindset are expected to seek opportunities to improve their abilities.129 Further, existing research suggests that compared to individuals with a fixed mindset, individuals with a growth mindset have more positive beliefs about effort (e.g., effort is positive and a means to become smarter),130 show greater willingness to increase effort, rather than withdraw, in the face of setbacks,131 and tend to experience improved grades as a result.132 Though the mindset that one holds can be relatively stable, interventions have led to alterations in mindset in a variety of domains including intellectual,133 physical,134 and managerial skills.135

Given the multiple benefits of a growth mindset, it would be valuable, through future research, to explore how medical students' mindset might influence the personal objectives they hold and how mindset shapes their regulation of learning through OSCEs (e.g., persistence in improving areas of performance after an assessment). Student beliefs about intelligence offer a potential target for intervention aimed at supporting regulation of learning through OSCEs to improve performance. In other domains, it has been shown that teaching learning goal orientation to at-risk junior high school students improved achievement in math grades compared to students who were not taught.136

5.1.1 Interpretation of the relevance of an assessment for addressing personal objectives

As previously mentioned, many of the personal objectives students were interested in addressing through the OSCEs were set in contexts outside of the OSCE (e.g., future OSCEs and clinical work). The data from the current research suggest that using the OSCE at hand to optimally address personal objectives outside of the OSCE required effective task interpretation, meaning having an accurate understanding of the OSCE at hand, but also required an accurate understanding of how the OSCE task related to other contexts.

Students' task interpretation continuously evolved as they had new experiences and gained new information. Some came to recognize that they previously had errors in their task interpretation, which initially made them unable to appreciate how to use the OSCE optimally to address their personal objectives.

To help students learn through assessments in a manner that aligns with their personal objectives, educators can help students co-regulate their learning (i.e., to support effective self-regulation137). One way to co-regulate students' learning is for educators to offer explicit discussion about the relationship between OSCE-relevant actions and the actions expected in other contexts. That is, deliberate discussion about task interpretation might help the student to understand, and the educator to facilitate, effective translation of one's self-regulated learning through an assessment to the student's ultimate aims.
Some FOSCE examiners appear to have done this during the verbal feedback session, supporting students' adjustments in learning or understanding of relevance by being explicit about the context or contexts to which students should or should not apply the feedback they were provided within the OSCE itself.

5.2 Factors influencing progress towards personal objectives

Historically, it has been common for assessment developers to determine the priorities of an assessment process by assigning the purpose as being "summative" or "formative". Underappreciated was the importance of understanding how students interpreted the task of learning through assessments and what specific factors influenced their engagement in learning. The current research showed how complex interactions between factors related to the assessment design, the student, and the learning environment influenced the extent to which students could learn through OSCEs (i.e., prepare for, practice skills during, and monitor performance and adjust learning or understanding after OSCEs). In this section, I will discuss how key interactions relate to existing research and consider implications for research and practice. Consistent with Butler and Schnellert's model, the influence of all these factors made it clear that "self-regulation of learning" should not imply working alone, as educators often influenced students' regulation of learning. Therefore, I will also highlight suggestions for researchers and educators to help students co-regulate learning through assessments.

5.2.1 Students' emotional state

Existing research suggests that the presence of academic consequences, most often associated with assessments labelled as "summative", can lead students to experience negative emotions.8 While the current research supported this finding, the label of summative may not be the ultimate determinant of negative emotions as some students had similar reactions to participating in an OSCE without academic consequences. Further, some reacted similarly to opportunities to participate in other practice settings and to engage with external supports available in the broader learning environment. Events during an assessment (e.g., mismatch between students' expectations and their experiences of the content to be assessed) contributed particularly strongly to students' negative emotional states.

Existing research suggests that negative emotions may negatively impact students' learning. For example, when students indicate a desire for feedback, that stated desire can be illusory as the "feedback" that is wanted may be more reflective of a desire for reassurance (i.e., to improve their emotional state) than of guidance regarding how one might improve.10 This is a complex aspect of self-regulation as the current research showed that: 1) emotional state and performance improvement objectives can be, but are not always, in conflict with each other; and 2) the impact of having emotional state objectives extends beyond impacting the type of feedback students want from external support opportunities.

For some students, negative emotions in an OSCE did not seem to negatively impact their efforts at performance improvement because the strategy used to improve their emotional state was conducive to improvements in performance.
For example, when students focused on preparing for the assessment as a means to reduce negative emotions during the OSCE, that preparation could be anticipated to impact positively on their performance. Students who took this approach to improving their emotions engaged in problem-focused coping, a form of coping that focuses on removing or circumventing the stressors.138 Other examples include seeking help in the face of stressors.139

In contrast, when other students experienced negative emotions resulting from an OSCE, they used strategies to improve their emotional state that risked negatively impacting their learning. One example was observed when students withdrew from thinking about their OSCE performance after experiencing negative emotions during the OSCE. Students who took this approach seemed to adopt emotion-focused coping strategies, a form of coping that focuses on reducing or eliminating the stress itself (rather than the stressor) that is associated with a given situation.138 Examples of such strategies include relaxation, self-blame, acting up, distancing and suppression.140

However, emotion-focused coping strategies did not always have a permanent negative impact on students' learning. For example, some students indicated that they wanted their feedback in the SOSCE to be delivered at the end of the assessment because they did not want it to negatively impact their emotions during the assessment. In that way, attending to an objective of improving emotions did not necessarily cause lasting harm to students' learning and the potential conflict between objectives might be avoided.

Given that one significant source of students' negative emotion during the SOSCE was a mismatch between what they expected to be assessed on and the content that was actually assessed, Zeidner's138,141 suggestion of ensuring students understand the task demands, materials, assessment procedures, and grading practices offers a potential approach to reducing negative emotions in assessments. These suggestions also have the potential to support students' performance improvement by facilitating effective preparation as well as reflection in and on action through better understanding of task demands and materials.

Another approach to help limit the detrimental effects negative emotions can have on learning is to help students manage or regulate their emotions. For example, educators can encourage students to talk about their emotions and can model effective emotion regulation strategies142 (e.g., not solely relying on emotion-focused coping strategies). Within medical education, the Facilitated Reflective Performance Feedback model143 developed to help physicians accept feedback has also emphasized the importance of helping recipients feel comfortable about sharing emotions and reactions to the feedback data. This approach might be helpful if extended also to include helping manage negative emotions arising from assessment experiences.

Another approach through which educators might facilitate co-regulation by improving students' emotional state in assessments is to target the timing of when feedback about students' performance, including test scores, is made available.
In past research on medical students' experiences after an OSCE that carried academic consequences, students experienced negative emotions like fear of failure and anxiety while awaiting the results of their OSCE.8 Recent research on undergraduate students participating in computer-based assessments showed that exam scores presented immediately after the assessment had significant positive effects on relief, pride, and hope and negative effects on anxiety and shame but had no impact on anger.144 The current research suggests that students varied in their preferences for when they wanted feedback about their performance, depending on whether they thought it would negatively impact their personal objectives regarding their performance or their emotional state. Therefore, to promote positive emotional states after assessments, educators may want to ensure students have rapid access to information about their performance but provide them with control over when they access it.

Another avenue for future research is to explore whether or not certain types of negative emotions are more detrimental to students' learning than others. According to Pekrun et al.,145 negative emotions can be characterized as negative deactivating and negative activating emotions. Deactivating emotions, such as hopelessness and boredom, are detrimental to motivation to learn. In contrast, the effects of activating emotions like anger, anxiety, and shame are thought to be more complex. Awareness of how specific negative emotions impact individual students' motivation might help educators pinpoint key situations when they should provide supports to help students stay motivated.

Of special note, having and developing self-confidence about performing in the OSCE (or in other settings) was considered by students to enable a positive emotional state in the current research. Students used preparing for the OSCEs, practicing their skills in the OSCEs, and receiving certain types of information from external supports as ways of improving their self-confidence through the assessments. These observations are consistent with the literature on self-efficacy (i.e., "personal judgements of one's capabilities to organize and execute courses of actions to attain desired goals"146) in that verbal persuasion (e.g., feedback from OSCE examiners) and enactive experiences (e.g., skills practice) offer important ways to build students' self-efficacy, which may, in turn, have a positive impact on their emotional state.146

This is not simply a matter of students feeling better about themselves, however, as the development of positive self-efficacy has the potential to improve performance and learning by making students more likely to take on tasks and persist through the challenges that lead to greater achievement.146,147 Future research should explore the extent to which assessment-related impacts on students' self-efficacy moderate their engagement and achievement in subsequent learning opportunities. Further, educators might want to consider the addition of vicarious experiences146 like performance modelling when designing assessments as a further strategy to improve students' self-efficacy and achievement. Previous research on writing revision in college students148 and high school girls learning dart-throwing149 showed that students had better skill performance, self-satisfaction, self-efficacy, and intrinsic interest when they had access to a model demonstrating a task compared to no model.
Specifically, coping models, in which students initially observe a model perform a task with errors and then observe the model gradually eliminate these errors, were more beneficial than observing a mastery model who performed the task from the outset without errors. Kitsantas et al.149 hypothesized that coping models were more effective than mastery models because they demonstrate how errors can be identified and corrected. One potential avenue for implementing a coping model might be to have other students at the same or more advanced level of training discuss challenges or errors they encountered during task performance and how they corrected the errors.

5.2.2 Information students require to guide learning

The current research showed that students varied in the information they valued for addressing personal objectives. Information commonly associated with a summative function (e.g., provision of scores per station and the class average) was included as part of the FOSCE. Some students reported that the inclusion of such information positively influenced their learning by helping them monitor progress towards their personal objectives (e.g., to perform at a level relative to the class average and to meet expectations of their level of training). While scores and norm-referenced information did not necessarily guide students about how to improve, this information helped some of them determine whether or not they should consider improving their performance. Therefore, although it might seem intuitive to assessors to withhold scores in an effort to focus learners on improving their performance, restricting certain types of information may limit students' capacity to feel like they have the information required to regulate learning towards their personal objectives.

After assessments, students often do not have control over the type of information that is available from external supports (e.g., OSCE examiners), with assessment organizers or the examiners controlling what type of feedback is formally provided.7,17,150 In the current research, the type of information or feedback students wanted from external support opportunities depended on their objectives and the information they thought necessary to enable adjustments and monitoring of their progress towards their objectives. Across students, the types of information desired varied from specific positive reassurance to areas requiring improvement and guidance on how to improve. Global judgements of performance (e.g., a mark or grade per OSCE station) and indications of the strength of one's performance relative to peers were both valued.

Given the diversity of students' desires and learning needs, it is unlikely that a predefined set of information will satisfy all students' efforts to regulate their continued learning. Instead, external support opportunities that can flexibly accommodate individual students' needs might be necessary to optimally enable access to the information they perceive as being valuable to their ongoing professional development. That might place demands on the educational program, but ensuring access to information that students deem valuable is important if the assessment is to be viewed credibly by students as a support for their learning. When such information is not available, students may try to infer it based on other available information, such as when some students attempted to generate global ratings for their FOSCE performance based on the proportion of feedback they received that was positive.
Because students did not know how the program generated the global ratings, they might have arrived at an incorrect conclusion and used faulty information to monitor their performance and make adjustments.

In the current research, there were several design features of the external support opportunities that facilitated students' access to information. For example, having both verbal and written forms of external support helped students access a variety of types of information. They reported valuing the opportunity for dialogue in the verbal feedback session to help ensure they obtained the information they thought to be most informative.

Another key issue for educators to consider when striving to support students' access to the information they need is information retention. In the current research, even when students were provided with information from external support opportunities (e.g., provision of feedback) or given the opportunity to generate information (e.g., reflection-in-action when performing in an OSCE) to inform performance monitoring and adjustment of learning or understanding, students could not always use these sources of information. Students sometimes had trouble recalling the feedback provided to them, or had trouble generating information through reflection-in-action. A cognitive load perspective provides insight into why students might have trouble adjusting and helps inform the kind of supports students might need to enable them to use the information to adjust learning or understanding.

From a cognitive load perspective, performing in and reflecting-in-action during an OSCE station might be considered primary and secondary tasks, respectively, that compete for cognitive resources.38 Because individuals have limited cognitive resources, when the primary task is complex, the attention or working memory required to perform it might leave insufficient cognitive resources for students to engage in the secondary task of reflecting-in-action.38 This possibility is supported by past research demonstrating that being asked to engage in reflection while performing a complex task can be detrimental to performance.38

For first-year medical students, in particular, OSCE performance might be highly demanding because they have minimal experience with the assessed content and the OSCE format. The SOSCE came after experience with the FOSCE, but had added complexity because students encountered some unexpected content and had to develop their performance approach during the assessment.

Experiencing negative emotions during the OSCEs would be expected to amplify these issues, creating additional challenges for reflecting-in-action given that past research has shown negative emotions can generate additional cognitive load.151–153 This would, thereby, be expected to compete further with the cognitive resources required for students to perform a task and engage in reflection-in-action. While speculative, this mechanism could explain why students set objectives pertaining to feeling comfortable just as readily as they set objectives for particular performance expectations.
One direct implication of these findings and ideas is that we cannot assume students have sufficient cognitive resources to engage fully in reflection-in-action and, thus, while it might appear to an outsider that they had every opportunity to learn from their experience and the feedback provided, students might leave the OSCE without a complete mental record of their performance. That will limit the information available when students engage in reflection-on-action post-assessment and when they try to decipher examiner feedback.

A cognitive load perspective also offers a potential explanation for why students might have trouble retaining all the feedback they obtained during the verbal feedback session. Though receiving verbal feedback was valued by students in the context of the FOSCE and is often an educational strategy used in OSCE settings more generally,10,154 students in the FOSCE context found it difficult to retain all feedback when it was delivered verbally. This is consistent with previous research on medical and dental residents receiving verbal feedback in an OSCE.154,155 The study on medical residents,154 for example, found inaccuracies in the feedback they recalled, which could lead to ineffective adjustments in learning or understanding. This is not surprising if we keep in mind that for students to retain feedback after an OSCE, working memory must organize and encode the information available for storage in long-term memory for retrieval at a later point in time.156 Working memory has limited capacity,157 generally retaining no more than seven plus or minus two pieces of information at a time and processing (i.e., organizing, comparing and contrasting156) two to four pieces at any moment.158 Far surpassing these limits, residents in the above-mentioned study were provided with an average of approximately sixteen pieces of feedback within each station.154 Therefore, it is easy to imagine that students in the FOSCE context could have had trouble retaining verbal feedback simply due to the amount of information overloading their working memory capacity.

Further complicating the memory task before them is the fact that students needed to stop thinking about the feedback received after each station to perform and attend to other stations during the OSCE. The challenge this creates is known as retroactive interference given that engaging in subsequent stations after receiving feedback might interfere retroactively with memory of the feedback received in previously encountered stations.159

Further, past research160 suggests that individuals might block feedback that triggers negative emotions if they want to protect their ego, making it difficult to recall later on. This is consistent with our observation that some students speculated they would not want immediate verbal feedback after each station in a SOSCE to avoid negative emotions. It also suggests a further reason why students might experience additional challenges recalling verbal feedback if it were provided immediately.

Therefore, to support students' adjustment of learning or understanding, educators might do well to focus on enhancing students' retention of feedback from external supports and supporting students' generation of constructive information. Strategies that students identified to do so include building redundancy in feedback through written documentation of verbal feedback. Strategies identified in the literature include video161 or audio162 recordings of feedback on students' performance.
Beyond the types of information available, in the current research, the credibility of the information, as perceived by students, also influenced the extent to which they believed they could use the information. Compared to other learning opportunities, some students found information from OSCE examiners to be more credible. Students' perception of the credibility of information was multifaceted, however, as they considered both the quality of the information to which the individual assessing their performance had access and the competence of the person assessing their skills.

Similar to previous research,31 students valued direct observation of performance because it allowed the assessor to gather detailed information about their knowledge and skill level. Some students especially valued direct observation of performance on an OSCE because the intensity of observation (i.e., the degree of observation163) was greater than in other learning opportunities.

The importance of the examiner having a detailed mental record of students' performance also shaped students' preferences for the timing of feedback. Students preferred examiners to assess and generate feedback about their performance immediately after the performance out of concern that the examiner might have trouble remembering the student's performance if feedback generation was delayed.

Consistent with previous research,21,164–166 another component used to judge credibility was the perceived clinical competence of the feedback provider. This was important for some students because they wanted to use the OSCEs to improve their performance in clinical settings. Despite not knowing the OSCE examiners prior to the assessment, some students assumed them to be more credible than instructors in other learning opportunities because examiners were less likely to cut corners in the OSCE setting than other instructors from clinical workplaces.

The importance of credibility extended to perceptions of how knowledgeable the feedback provider was about OSCEs more generally because some students wanted to use the OSCEs to improve performance on future OSCEs. Again, some students perceived examiners to be more credible than other instructors about this topic because the examiners were more directly involved in the assessment.

Beyond the inherent competence that OSCE examiners possessed, some students also perceived the use of an assessment checklist or rubric in the OSCEs as helping to improve examiners' competence by enabling them to adopt appropriate criteria when assessing performance.

Students' trust in the opinion of OSCE examiners over their instructors challenges the idea that building a longitudinal relationship is key to encouraging students' acceptance of feedback143 and lends support to the idea that credible feedback interactions can happen in brief encounters,167 such as an OSCE.

In sum, the status of being an examiner in a formal medical school assessment, the careful direct observation involved, engagement in the assessment activity itself, and the rigour of the assessment tools being used were all factors that seemed to influence the credibility of information in a brief encounter. Future research should continue to investigate factors that influence effective feedback interactions in brief encounters.

One area of study that might provide further insight in this regard is the concept of an educational alliance.
The educational alliance refers to students’ perception that an educator is committed to helping the student achieve their personal objectives.25 The strength of the educational alliance has been shown to impact learning through influencing medical residents’ receptivity to feedback from their supervisors.24 In the current research students perceived that tutors in learning opportunities outside of the OSCEs, despite having more time with the student overall, could not always fully attend to the students’ learning for reasons like having to attend to other clinical responsibilities or other students, potentially weakening the educational alliance despite the appearance of having more time together. Though students in the OSCE studies did not explicitly indicate that examiners’ strength of commitment to their learning influenced their receptivity to feedback, some reported feeling more comfortable seeking feedback from OSCE examiners compared to seeking feedback from others because they perceived the examiners to be solely committed to their learning with no other responsibilities (at least at that moment). Further, positive experiences with the examiners during the verbal feedback session (e.g., examiners’ demonstrating willingness to answer student questions) could have enabled students to perceive an educational alliance with their examiners despite the lack of a prolonged relationship.  173  5.2.3 Assessment scheduling and the broader environment In the current research, the FOSCE was scheduled in the middle of the school year and the SOSCE was scheduled at the end of the school year. The medical program timed the assessments in this manner so students could use the FOSCE to gather information to guide their learning for the rest of the school year and to facilitate their preparation for the SOSCE. Students’ objectives when engaging with both OSCEs, however, often extended beyond year-one of medical school, including preparation for clerkship and for OSCEs at the end of medical school. That is, while the SOSCE took place at the end of the school year, students did not necessarily see it as the end point with respect to the personal objectives they sought to achieve through the OSCE. Rather, it was but one point in time that enabled them to assess their progress towards their personal objectives. This finding is consistent with previous research on medical students’ experiences with formative written assessments where it has been reported that some goals students sought extended beyond their particular level of training (e.g., to be a good doctor).17 To best co-regulate learning towards students’ personal objectives, therefore, educators may need to consider providing supports that will help students improve performance beyond the specific performance observed. This is consistent with de-emphasizing the time-based focus of training in favour of a competency-based medical training system, which acknowledges that learners may progress at different rates and training should focus on the learning attained, not the hours or weeks spent in training.168  The current research also highlighted how the broader learning environment that is present when an assessment is scheduled can influence students’ learning. Consistent with prior research, academic workload in the learning environment69 prior to the assessment influenced students’ 174  preparation for the assessment. 
The current research extended such findings by showing both concurrent academic and non-academic tasks interacted with students’ time management skills and task prioritization to influence their learning. Further, it was found that the influence of the learning environment on students’ learning through assessments was not isolated to the time prior to the assessment. That is, the role of competing tasks extended to learning after the assessment as well.   In past research investigating post-assessment learning effects (e.g., engagement with feedback and following up on learning objectives stemming from an assessment), researchers have attempted to capture post-assessment effects immediately10,150 after the assessment or in the subsequent weeks.7,11 The use of such short time-frames to capture post-assessment effects might not be sufficient. For example, students in the current research showed diversity in when they planned to implement adjustments in learning or understanding. Students’ decisions were influenced by multiple factors or interactions between factors, such as students’ emotional state, their interpretation of the assessed content, their skills in managing concurrent tasks, and their access to relevant opportunities. Therefore, inaction within a short time-frame after an assessment might be reflective of other factors related to the student or environment rather than a lack of interest in learning. Future research should include a more flexible and extended follow-up period if investigators wish to capture the full range of post-assessment learning effects.  To help students co-regulate learning before or after an assessment, educators may want to target student factors, such as faulty perception of the assessed content (e.g., the amount of time that can be expected to pass prior to when the content will next be used or whether the content will be 175  addressed in the future curriculum). Educators can also target the design of the learning environment, such as providing students opportunities for further external support or practice opportunities that are part of the curriculum169 (e.g., re-visiting content assessed by virtue of the spiral nature of the curriculum providing practice opportunities), or considering the overall workload in the curriculum when determining the timing of assessments to limit the number of concurrent tasks.  In sum, this research encourages educators to pay close attention to the culture surrounding their assessment protocols if they aim to support students’ learning through assessments. This does appear to be modifiable as at least one previous study has shown that when an assessment system is shifted away from providing grades and comparative ranking to focusing on formative narrative feedback, students may initially have had trouble determining whether they were on track, but eventually the lack of reassurance from “good enough” grades incentivized them to aim for excellence.3 In the current research, the OSCE culture was examined separately in the context of exams intended by the program to be focused on summative and formative functions. However, when some students focused on using OSCEs to make summative judgements in either instance (e.g. comparing their performance against peers or prioritizing whether or not they “passed” the OSCE), it distracted them from using the OSCEs for a formative function. 
This direction of influence seems particularly critical as deliberate effort will be required to alter expectations surrounding OSCEs developed for formative purposes if students are actually to be led to focus more on improving performance.    176  5.3 Limitations One limitation of the current research is that it only involved medical students from a single training program, potentially impacting on the transferability of the findings. For example, the particular curriculum in which the participants were enrolled foregrounded clinical work by organizing its structure around clinical presentations with longitudinal clinical exposure starting in year-one. Therefore, participants in this study might have had more acute awareness of the need to prepare for clinical work and set more clinically oriented learning objectives compared to students in longer medical programs or curricula with a less clinical focus at this early stage of training. Therefore, future research is necessary to clarify the role of curricular design in influencing the breadth of contexts for which students set personal objectives within the learning they do through assessments.   Given the lack of detail regarding participants’ backgrounds and identities, the current research was also limited in exploring how individuals’ characteristics (or the interaction of those characteristics with the learning environment) may have influenced their learning through OSCEs.  Another limitation of the current research is the student population that participated. Like most medical students in North America, participants in the current research generally will have completed an undergraduate degree (and sometimes graduate degrees) prior to entering medical school. Compared to students entering medical school after high school, therefore, students in the current research were more experienced and perhaps more skilled in navigating the post-secondary learning environment given the competitive nature of medical school admissions. 177  Students’ experience in post-secondary settings might influence how they navigate and learn through assessments. Therefore, future research might be directed at exploring the role of post-secondary learning experiences in shaping students’ learning through assessments.  Another limitation of the current research is that it only involved students at the beginning of training and thus, it is unclear whether students at more senior stages of training or in other health professions share similar experiences. However, existing research has shown that more senior students had experiences that were similar to participants in the current study, including using assessments to improve performance in current or future assessments and to prepare for clinical work. These objectives influenced final-year nursing students’ preparation for an assessment,45 more senior medical students’ engagement with feedback after an OSCE8 and medical residents’ experiences in an OSCE.170   Another limitation of this current research is that it only involved students’ experiences in one type of assessment. However, the findings from OSCEs share similarities with findings from existing research showing that students’ engagement in self-regulated learning through other forms of assessment (e.g., written progress tests17 and multiple-choice question assessments81) and other educational activities (e.g., problem-based learning classes81) were influenced by personal objectives they set for themselves.  
There were also limitations in the method of data collection. While interviewing allowed me to gain insight into students' perceptions regarding their engagement with the OSCEs, reliance on what students chose to say during an interview precludes exploration of other events and means that the most we can conclude is that our data reflect what students were willing and able to tell us about their self-regulated learning. Future research might consider the use of additional methods such as direct observation or asking students to keep a journal to obtain a richer description of their experiences in an assessment that can be further explored through interviews.

Finally, it is also important to consider limitations induced by the timing of data collection. In this research, students were interviewed about their engagement before, during, and after the OSCEs well after the OSCE itself took place. Therefore, the interviews represented a retrospective account of students' experiences. Although this approach was beneficial for understanding the interactions between the assessment and the broader environment after the assessment, it meant that some students might have forgotten details that had significant influence on their engagement at the time of the event. To better capture students' experiences, future research might consider taking a more longitudinal and more real-time approach to data collection, such as interviewing students before, during, immediately after, and at a delayed point in time after the assessment. This approach would also allow for more detailed clarification and exploration of topics raised in subsequent interviews, especially if the same participants could be enrolled for the duration of such a longitudinal study. As it was, the need to meet with many participants in a short period of time between the assessment and breaks in the curriculum did not allow much opportunity for iterative data collection and analysis to take place.

5.4 Conclusion

Three core themes arise from the efforts reported in this dissertation to study the factors that influence first-year medical students' learning through OSCEs:

The interplay between student emotions and learning

Past research has problematized students who seek feedback for emotional purposes (e.g., to reinforce confidence) out of concern that such focus can preclude students from obtaining information that is important to help them advance their learning and improve their performance. However, the current research suggests that addressing emotional needs might be a necessary first step for some students to subsequently address performance improvements. That is, some students might first need to seek feedback (or take other actions) to feel better before they are ready to take steps to improve their performance. Further, students' emotional state also influenced their regulation of learning at other points in time, such as prior to the assessment. Therefore, helping students manage their emotions prior to, during, and after assessments might be an important way for educators to promote effective self-regulated learning through assessments.

Supporting learning in OSCEs beyond altering assessment design

Existing initiatives for supporting learning through assessments often focus on changing the design of the assessment, such as increasing the provision of feedback from external supports like OSCE examiners.
However, results of the current research suggest that learning through an assessment is influenced by factors that range well beyond the design of the assessment. For example, students’ regulation of learning appeared to be influenced by learning environment factors, such as the concurrent tasks to which students felt pressure to attend, and student factors, such as their understanding of the relevance of the assessment to other contexts (e.g., future assessments and the clinical work context). Therefore, educators might need to consider taking a 180  more holistic approach when thinking about how to support student learning through assessments by considering student and learning environment factors in addition to considering the assessment design. Providing external (e.g., faculty) support and practice opportunities after an OSCE or more deliberately harnessing existing clinical workplace encounters to better demonstrate the relationship between the OSCE experience and clinical practice offer a means through which the learning environment outside of the OSCE can be used to optimize the educational influence of the OSCE. In fact, providing students with such supports by altering the learning environment might be preferable to making changes in the OSCE itself because there are often time and resource (including cognitive workload) constraints in OSCEs that make alterations to its design unrealistic.   Variation in learning across students The current research also showed that first-year medical students are not homogenous in that they show variation in the extent to which they could harness the assessment experience to work towards their personal objectives. Part of the variation in their engagement arose from variation in student factors, such as time management skill or the type of information they wanted to enable performance monitoring and adjustment of learning or understanding. This variation indicates that a single factor, such as concurrent tasks, can have variable effects on students’ learning in an assessment. Therefore, design of efforts to support students’ learning might benefit from an individualized approach that targets the specific factors limiting the learning of each student, such as providing one-on-one support for managing negative emotions. Designing inclusive supports that accommodate variation in students’ learning, such as providing diverse types of feedback, is another potential approach to addressing students’ diverse learning needs. 181  There can be no better argument for continuing to emphasise the “self-regulated” nature of learning even in the context of program initiated educational activities given that one-size-fits-all approaches are unlikely to be sufficient and, like it or not, students will inevitably make judgments for themselves regarding where their priorities should lie at any given moment in their training.   182  References 1.  Watling C. The uneasy alliance of assessment and feedback. Perspect Med Educ. 2016;5:10-12. doi:10.1007/s40037-016-0300-6 2.  McLachlan JC. The relationship between assessment and learning. Med Educ. 2006;40(8):716-717. doi:10.1111/j.1365-2929.2006.02518.x 3.  Harrison CJ, Könings KD, Dannefer EF, Schuwirth LWT, Wass V, van der Vleuten CPM. Factors influencing students’ receptivity to formative feedback emerging from different assessment cultures. Perspect Med Educ. 2016;5(5):276-284. doi:10.1007/s40037-016-0297-x 4.  Nicol DJ, Macfarlane-dick D. 
Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice. Stud High Educ. 2006;31(2):199-218. doi:10.1080/03075070600572090 5.  Archer JC. State of the science in health professional education: effective feedback. Med Educ. 2010;44(1):101-108. doi:10.1111/j.1365-2923.2009.03546.x 6.  Srinivasan M, Hauer KE, Der-Martirosian C, Wilkes M, Gesundheit N. Does feedback matter? Practice-based learning for medical students after a multi-institutional clinical performance examination. Med Educ. 2007;41(9):857-865. 7.  Harrison CJ, Könings KD, Molyneux A, Schuwirth LWT, Wass V, van der Vleuten CPM. Web-based feedback after summative assessment: How do students engage? Med Educ. 2013;47(7):734-744. doi:10.1111/medu.12209 8.  Harrison CJ, Könings KD, Schuwirth L, Wass V, van der Vleuten C. Barriers to the uptake and use of feedback in the context of summative assessment. Adv Heal Sci Educ. 2014;20(1):229-245. doi:10.1007/s10459-014-9524-6 183  9.  Sadler D. Formative assessment: revisiting the territory. Assess Educ Princ Policy Pract. 1998;5(1):77-84. 10.  Eva KW, Munoz J, Hanson MD, Walsh A, Wakefield J. Which factors, personal or external, most influence students’ generation of learning goals? Acad Med. 2010;85(10 Suppl):S102-5. doi:10.1097/ACM.0b013e3181ed42f2 11.  Bounds R, Bush C, Aghera A, Rodriguez N, Stansfield RB, Santen SA. Emergency medicine residents’ self-assessments play a critical role when receiving feedback. Acad Emerg Med. 2013;20(10):1055-1061. doi:10.1111/acem.12231 12.  Sadler DR. Formative assessment and the design of instructional systems. Instr Sci. 1989;18(2):119-144. 13.  Pugh D, Regehr G. Taking the sting out of assessment: is there a role for progress testing? Med Educ. 2016;50(7):721-729. doi:10.1111/medu.12985 14.  Schuwirth LWT, Van der Vleuten CPM. Programmatic assessment: From assessment of learning to assessment for learning. Med Teach. 2011;33(6):478-485. doi:10.3109/0142159X.2011.565828 15.  Wiliam, Dylan. Formative assessment and contingency in the regulation of learning processes. Paper presented at: Annual Meeting of American Educational Research Association; April, 2014; Philadelphia, PA. 16.  Black P, Wiliam D. Developing the Theory of Formative Assessment. Educ Assessment, Eval Account. 2009;21(1):5-31. doi:10.1007/s11092-008-9068-5 17.  Ryan A, McColl GJ, O’Brien R, et al. Tensions in post-examination feedback: information for learning versus potential for harm. Med Educ. 2017;51(9):963-973. doi:10.1111/medu.13366 184  18.  Butler D, Schnellert L, Perry N. Designing assessment and feedback to nurture self-regulated learning. In: Developing Self-Regulated Learners. Don Mills, Ontario: Pearson; 2016:121-140. 19.  Sargeant J, Armson H, Chesluk B, et al. The processes and dimensions of informed self-assessment: a conceptual model. Acad Med J Assoc Am Med Coll. 2010;85(7):1212-1220. doi:10.1097/ACM.0b013e3181d85a4e 20.  Van De Ridder JMM, Berk FCJ, Stokking KM, Ten Cate OTJ. Feedback providers’ credibility impacts students’ satisfaction with feedback and delayed performance. Med Teach. 2015;37(8):767-774. doi:10.3109/0142159X.2014.970617 21.  Bing-You RG, Paterson J, Levine MA. Feedback falling on deaf ears: residents’ receptivity to feedback tempered by sender credibility. Med Teach. 1997;19(1):40-44. 22.  Dijksterhuis MGK, Schuwirth LWT, Braat DDM, Teunissen PW, Scheele F. A qualitative study on trainees’ and supervisors’ perceptions of assessment for learning in postgraduate medical education. Med Teach. 
2013;35(8):e1396-e1402. doi:10.3109/0142159X.2012.756576 23.  van Schaik SM, O’Sullivan PS, Eva KW, Irby DM, Regehr G. Does source matter? Nurses’ and Physicians’ perceptions of interprofessional feedback. Med Educ. 2016;50(2):181-188. doi:10.1111/medu.12850 24.  Telio S, Regehr G, Ajjawi R. Feedback and the educational alliance: examining credibility judgements and their consequences. Med Educ. 2016;50(9):933-942. doi:10.1111/medu.13063 25.  Telio S, Ajjawi R, Regehr G. The “educational Alliance” as a Framework for Reconceptualizing Feedback in Medical Education. Acad Med. 2015;90(5):609-614. 185  doi:10.1097/ACM.0000000000000560 26.  Wearne S. Effective feedback and the educational alliance. Med Educ. 2016;50(9):891-892. doi:10.1111/medu.13110 27.  Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81-112. doi:10.3102/003465430298487 28.  Li J, Luca R De. Review of assessment feedback. Stud High Educ Rev Assess Feed. 2014;39(2):378-393. doi:10.1080/03075079.2012.709494 29.  Eva KW, Armson H, Holmboe E, et al. Factors influencing responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes. Adv Heal Sci Educ Theory Pract. 2012;17(1):15-26. doi:10.1007/s10459-011-9290-7 30.  Sargeant J, Mann K, Sinclair D, Van Der Vleuten C, Metsemakers J. Understanding the influence of emotions and reflection upon multi-source feedback acceptance and use. Adv Heal Sci Educ. 2008;13(3):275-288. doi:10.1007/s10459-006-9039-x 31.  Mann K, van der Vleuten C, Eva K, et al. Tensions in informed self-assessment: How the desire for feedback and reticence to collect and use it can conflict. Acad Med J Assoc Am Med Coll. 2011;86(9):1120-1127. doi:10.1097/ACM.0b013e318226abdd 32.  Jarrell A, Harley J, Lajoie S, Naismith L. Examining the relationship between performance feedback and emotions in diagnostic reasoning: Toward a predictive framework for emotional support. In: Conati C, Heffernan N, Mitrovic A, Verdejo M, eds. Artificial Intelligence in Education AIED 2015. Cham, Switzerland: Springer International Publishing; 2015:650–653. doi:10.1007/978-3-319-19773-9 33.  Schon D. The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books; 1983. 186  34.  Sandars J. The use of reflection in medical education: AMEE Guide No. 44. Med Teach. 2009;31(8):685-695. doi:10.1080/01421590903050374 35.  Kolb D. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice Hall; 1984. 36.  Lan WY, Morgan J. Videotaping as a means of self-monitoring to improve theater students’ performance. J Exp Educ. 2003;71(4):371-381. 37.  Zimmerman BJ. Development and adaptation of expertise: The role of self-regulatory processes and beliefs. In: Ericsson K, Charness N, Feltovich P, Hoffman R, eds. The Cambridge Handbook of Expertise and Expert Performance. New York, NY: Cambridge University Press; 2006:705-722. 38.  Van Gog T, Kester L, Paas F. Effects of concurrent monitoring on cognitive load and performance as a function of task complexity. Appl Cogn Psychol. 2011;25(4):584-587. 39.  Gielen S, Dochy F, Dierick S. Evaluating the Consequential Validity of New Modes of Assessment: The Influence of Assessment on Learning, Including Pre-, Post-, and True Assessment Effects. In: Segers M, Dochy F, Cascallar E, eds. Optimising New Modes of Assessment: In Search of Qualities and Standards. Netherlands: Kluwer Academic Publishers; 2003:37-54. doi:10.1007/0-306-48125-1_3 40.  Segers M, Dochy F, Gijbels D, Stuyven K. 
Changing insights in the domain of assessment in higher education: Novel assessments and their pre-, post- and pure effects on students learning. In: McInerney D, Brown G, Liem G, eds. Student Perspectives on Assessment: What Students Can Tell Us about Assessment for Learning Volume 9 of Research on Sociocultural Influences on Motivation and Learning. Charlotte, NC; 2009:297-319. 41.  Snyder B. The Hidden Curriculum. New York, NY: Alfred A. Knopf; 1971. 187  42.  Miller CM, Parlett MR. Up to the Mark: A Study of the Examination Game. London: Society for Research into Higher Education; 1974. 43.  Van Etten S, Freebern G, Pressley M. College Students’ Beliefs about Exam Preparation. Contemp Educ Psychol. 1997;22(2):192-212. 44.  Cilliers FJ, Schuwirth LW, Adendorff HJ, Herman N, van der Vleuten CPM. The mechanism of impact of summative assessment on medical students’ learning. Adv Heal Sci Educ. 2010;15(5):695-715. doi:10.1007/s10459-010-9232-9 45.  Duers LE, Brown N. An exploration of student nurses’ experiences of formative assessment. Nurse Educ Today. 2009;29(6):654-659. doi:10.1016/j.nedt.2009.02.007 46.  Heeneman S, Oudkerk Pool A, Schuwirth LWT, van der Vleuten CPM, Driessen EW. The impact of programmatic assessment on student learning: theory versus practice. Med Educ. 2015;49(5):487-498. doi:10.1111/medu.12645 47.  Fransson A. Qualitative differences in learning: IV- Effects of intrinsic motivation and extrinsic test anxiety on process and outcome. Br J Educ Psychol. 1977;47(3):244-257. 48.  Wood T. Assessment not only drives learning, it may also help learning. Med Educ. 2009;43(1):5-6. doi:10.1111/j.1365-2923.2008.03237.x 49.  Dochy F, Segers M, Gijbels D, Struyven K. Assessment engineering: Breaking down barriers between teaching and learning, and assessment. In: Boud D, Falchikov N, eds. Rethinking Assessment in Higher Education: Learning for the Longer Term. New York, NY: Routledge; 2007:87-100. 50.  Struyf E, Vandenberghe R, Lens W. The evaluation practice of teachers as a learning opportunity for students. Stud Educ Eval. 2001;27(3):215-238. doi:10.1016/S0191-491X(01)00027-X 188  51.  Larsen DP, Butler AC, Roediger HL. Test-enhanced learning in medical education. Med Educ. 2008;42(10):959-966. doi:10.1111/j.1365-2923.2008.03124.x 52.  Kromann CB, Jensen ML, Ringsted C. The effect of testing on skills learning. Med Educ. 2009;43(1):21-27. doi:10.1111/j.1365-2923.2008.03245.x 53.  McDaniel M a, Roediger HL, McDermott KB. Generalizing test-enhanced learning from the laboratory to the classroom. Psychon Bull Rev. 2007;14(2):200-206. doi:10.3758/BF03194052 54.  Butler DL, Schnellert L, Perry NE. What is self-regulated learning? In: Developing Self-Regulated Learners. Don Mills, Ontario: Pearson; 2016:1-18. 55.  Butler D, Cartier S. Case studies as a methodological framework for studying and assessing self-regulated learning. In: Schunk DH, Greene JA, eds. Handbook of Self-Regulation of Learning and Performance. 2nd ed. New York, NY: Routledge; 2018:352-369. 56.  Zimmerman BJ. Attaining self-regulation: A social-cognitive perspective. In: Boekaerts M, Pintrich P, Zeidner M, eds. Handbook of Self-Regulation. San Diego, CA; 2000:13-39. 57.  Winne PH. Experimenting to bootstrap self-regulated learning. J Educ Psychol. 1997;89(3):397-410. doi:10.1037/0022-0663.89.3.397 58.  Butler DL, Schnellert L. Success for students with learning disabilities: What does self-regulation have to do with it? In: Cleary T, ed. 
Self-Regulated Learning Interventions with at-Risk Youth: Enhacing Adaptability, Performance, and Well-Being. Washington, DC: APA Press; 2015:123-141. 59.  Schunk DH, Zimmerman BJ. Self-Regulated Learning and Performance. In: Schunk DH, Zimmerman BJ, eds. Handbook of Self-Regulation of Learning and Performance. New 189  York, NY: Routledge; 2011:1-12. 60.  Zimmerman BJ, Martinez-Pons M. Development of a structured interview for assessing students’ use of self-regulated learning strategies. Am Educ Res J. 1986;23(4):614-628. doi:10.3102/00028312023004614 61.  Zimmerman BJ, Martinez-Pons M. Construct validation of a strategy model of student self-regulated learning. J Educ Psychol. 1988;80(3):284-290. doi:10.1037/0022-0663.80.3.284 62.  Schunk DH. Modeling and attributional feedback effects on children’s achievement: A self-efficacy analysis. J Educ Psychol. 1981;73(1):93–105. 63.  Schunk DH. Sequential attributional feedback and children’s achievement behaviors. J Educ Psychol. 1984;76(6):1159-1169. doi:10.1037/0022-0663.76.6.1159 64.  Zimmerman BJ. Development of self-regulated learning: Which are the key subprocesses? Contemp Educ Psychol. 1986;11(4):307-313. 65.  Panadero E. A review of self-regulated learning: Six models and four directions for research. Front Psychol. 2017;8(422):1-28. doi:10.3389/fpsyg.2017.00422 66.  Butler D, Cartier S. Complementary methods for understanding self-regulated learning as situated in context. Paper presented at: the Annual Meetings of the American Educational Research Association; April, 2005; Montreal, QUC. 67.  Brydges R, Butler D. A reflective analysis of medical education research on self-regulation in learning and practice. Med Educ. 2012;46(1):71-79. doi:10.1111/j.1365-2923.2011.04100.x 68.  Butler DL, Winne PH. Feedback and Self-Regulated Learning: A theoretical synthesis. Rev Educ Res. 1995;65(3):245-281. 190  69.  Cilliers FJ, Schuwirth LWT, Herman N, Adendorff HJ, van der Vleuten CPM. A model of the pre-assessment learning effects of summative assessment in medical education. Adv Heal Sci Educ. 2012;17(1):39-53. doi:10.1007/s10459-011-9292-5 70.  Boekaerts M. Self-regulation and effort investment. In: Sigel E, Renninger K, eds. Handbook of Child Psychology, Vol. 4, Child Psychology in Practice. Hoboken, NJ: Wiley; 2006:345-377. 71.  Boekaerts M, Corno L. Self regulation in the classroom: A perspective on assessment and intervention. Appl Psychol. 2005;54(2):199-231. doi:10.1111/j.1464-0597.2005.00205.x 72.  Khan KZ, Ramachandran S, Gaunt K, Pushkar P. The Objective Structured Clinical Examination (OSCE): AMEE Guide No. 81. Part I: An historical and theoretical perspective. Med Teach. 2013;35(9):e1437-e1446. doi:10.3109/0142159X.2013.818634 73.  Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. Br Med J. 1975;1(5955):447-451. doi:10.1136/bmj.1.5955.447 74.  Miller G. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S64-S67. 75.  Khan K, Ramachandran S. Conceptual framework for performance assessment: Competency, competence and performance in the context of assessments in healthcare - Deciphering the terminology. Med Teach. 2012;34(11):920-928. doi:10.3109/0142159X.2012.722707 76.  Nasir AA, Yusuf AS, Abdur-Rahman LO, et al. Medical students’ perception of objective structured clinical examination: A feedback for process improvement. J Surg Educ. 2014;71(5):701-706. doi:10.1016/j.jsurg.2014.02.010 191  77.  
Townsend A, Mcllvenny S, Miller C, Dunn E. The use of an objective structured clinical examination (OSCE) for formative and summative assessment in a general practice clinical attachment and its relationship to final medical school examination performance. Med Educ. 2001;35(9):841-846. 78.  Brazeau C, Boyd L, Crosson J. Changing an existing OSCE to a teaching tool: The making of a teaching OSCE. Acad Med. 2002;77(9):932. doi:10.1097/00001888-200209000-00041 79.  Hodder R V, Rivington RN, Calcutt LE, Hart IR. The effectiveness of immediate feedback during the objective structured clinical examination. Med Educ. 1989;23(2):184-188. 80.  Rush S, Ooms A, Marks-Maran D, Firth T. Students’ perceptions of practice assessment in the skills laboratory: An evaluation study of OSCAs with immediate feedback. Nurse Educ Pract. 2014;14(6):627-634. doi:10.1016/j.nepr.2014.06.008 81.  Kindler PM, Bates J, Hui E, Eva KW. How and Why Preclerkship Students Set Learning Goals and Assess Their Achievement: A Qualitative Exploration. Acad Med. 2017;92(11):S61-S66. doi:10.1097/ACM.0000000000001913 82.  Mertens DM. Research and Evaluation in Education and Psychology: Integrating Diversity with Quantitative, Qualitative, and Mixed Methods. 3rd ed. SAGE Publications Ltd; 2010. 83.  Weaver K, Olson JK. Understanding paradigms used for nursing research. J Adv Nurs. 2006;53(4):459-469. doi:10.1111/j.1365-2648.2006.03740.x 84.  Eichelberger RT. Disciplined Inquiry: Understanding and Doing Educational Research. New York: Longman; 1989. 192  85.  Willis JW, Muktha J, Nilakanta R. World Views, Paradigms, and the Practice of Social Science Research. In: Foundations of Qualitative Research: Interpretive and Critical Approaches. Thousand Oaks, CA: SAGE Publications, Inc.; 2007:1-23. doi:10.4135/9781452230108 86.  Denzin NK, Lincoln YS, Guba EG. Paradigmatic controversies, contradictions, and emerging confluences. In: Guba EG, Lincoln YS, eds. Handbook of Qualitative Research. Thousand Oaks, CA: SAGE Publications; 2005:191-215. doi:10.1111/j.1365-2648.2005.03538_2.x 87.  Bunniss S, Kelly DR. Research paradigms in medical education research. Med Educ. 2010;44(4):358-366. doi:10.1111/j.1365-2923.2009.03611.x 88.  Appleton J V, King L. Constructivism: a naturalistic methodology for nursing inquiry. Adv Nurs Sci. 1997;20(2):13-22. 89.  Guba EG, Lincoln YS. Competing Paradigms in Qualitative Research. In: Handbook of Qualitative Research. Sage; 1994:105-117. 90.  Mills J, Bonner A, Francis K. The development of constructivist grounded theory. International Journal of Qualitative Method. Int J Qual Methods. 2006;5(1):1-10. doi:10.2307/588533 91.  Kennedy TJ, Lingard LA. Making sense of grounded theory in medical education. Med Educ. 2006;40(2):101-108. 92.  Charmaz K. Constructing Grounded Theory. 2nd ed. Great Britain: Sage; 2014. 93.  Glaser B, Strauss A. The Discovery of Grounded Theory. Chicago: Adaline; 1967. 94.  Glaser B. Theoretical Sensitivity. Mill Valley, CA: Sociology Press; 1978. 95.  Strauss A. Qualitate Analysis for Social Scientists. New York: Cambridge University 193  Press; 1987. 96.  Glaser B. Emergence vs Forcing: Basics of Grounded Theory Analysis. Mill Valley, CA: Sociology Press; 1992. 97.  Charmaz K. Constructing Grounded Theory: A Practical Guide through Qualitative Analysis. Thousand Oaks, CA.: SAGE Publications Ltd; 2006. 98.  Melia K. Rediscovering Glaser. Qual Health Res. 1996;6(3):367-378. 99.  Strauss A, Corbin J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. 
Newbury Park, CA: Sage Publications; 1990. 100.  Levers MJD. Philosophical paradigms, grounded theory, and perspectives on emergence. SAGE Open. 2013;3(4). doi:10.1177/2158244013517243 101.  Charmaz K. Grounded theory-Objectivist and constructivist methods. In: Denzin N, Lincoln Y, eds. Handbook of Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage; 2000:509-535. 102.  Charmaz K. Between positivism and postmodernism: Implications for methods. In: Denzin NK, ed. Studies in Symbolic Interaction. Greenwich, CT: JAI; 1995:43-72. 103.  Watling CJ, Lingard L. Grounded theory in medical education research: AMEE Guide No. 70. Med Teach. 2012;34(10):850-861. doi:10.3109/0142159X.2012.704439 104.  Morse JM, Noerager P, Juliet S, Bowers B, Charmaz K, Clarke AE. Developing Grounded Theory :The Second Generation. New York, NY: Routledge; 2016. 105.  Charmaz K. Grounded theory. In: Ritzer G, ed. Encyclopaedia of Sociology. Blackwell; 2006. 106.  Charmaz K. The legacy of Anselm Strauss in constructivist grounded theory. In: Denzin NK, ed. Studies in Symbolic Interaction. Bingley, West Yorkshire: Emerald Group 194  Publishing Limited; 2008:127-141. 107.  Strauss A, Corbin J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. 2nd ed. Thousand Oaks, CA: Sage; 1998. 108.  Zimmerman BJ. Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. Am Educ Res J. 2008;45(1):166-183. 109.  Bryant A. Re-grounding grounded theory. J Inf Technol Theory Appl. 2002;4(1):25-42. 110.  Patton MQ. Qualitative Research & Evaluation Methods. 3rd ed. Thousand Oaks, California; 2002. 111.  Malterud K, Siersma VD, Guassora AD. Sample Size in Qualitative Interview Studies: Guided by Information Power. Qual Health Res. 2016;26(13):1753 –1760. doi:10.1177/1049732315617444 112.  Berkhout JJ, Helmich E, Teunissen PW, van der Vleuten CPM, Jaarsma ADC. How clinical medical students perceive others to influence their self-regulated learning. Med Educ. 2016;31(0):1-11. doi:10.1111/medu.13131 113.  Hallberg LR. The “core category” of grounded theory: Making constant comparisons. Int J Qual Stud Health Well-being. 2006;1(3):141-148. 114.  Coyne I. Sampling in qualitative research. Purposeful and theoretical sampling; merging or clear boundaries? J Adv Nurs. 1997;26(3):623-630. doi:10.1046/j.1365-2648.1997.t01-25-00999.x 115.  Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet. 2001;358(9280):483-488. 116.  Mills J, Bonner A, Francis K. Adopting a constructivist approach to grounded theory: Implications for research design. Int J Nurs Pract. 2006;12(1):8-13. doi:10.1111/j.1440-195  172X.2006.00543.x 117.  Dwyer SC, Buckle JL. The Space Between: On Being an Insider-Outsider in Qualitative Research. Int J Qual Methods. 2009;8(1):54-63. 118.  Kanuha VK. “Being” native versus “going native”: Conducting social work research as an insider. Soc Work. 2000;45(5):439-447. 119.  Asselin ME. Insider research: Issues to consider when doing qualitative research in your own setting. J Nurses Staff Dev. 2003;19(2):99-103. 120.  Varpio L, Ajjawi R, Monrouxe L, O ’Brien B, Rees C. Shedding the cobra effect: problematizing thematic emergence, triangulation, saturation and member checking. Med Educ. 2017;51(1):40-50. 121.  Denzin NK. The Research Act: A Theoretical Introduction to Sociological Methods. New York, NY: McGraw-Hill; 1978. 122.  Mathison S. Why Triangulate? Educ Res. 1988;17(2):13-17. doi:10.3102/0013189X017002013 123.  Dey I. 
Grounding Grounded Theory: Guidelines for Qualitative Inquiry. London: Academic Press; 1999. 124.  O’Reilly M, Parker N. ‘Unsatisfactory saturation’: a critical exploration of the notion of saturated sample sizes in qualitative research. J Qual Res. 2012;13(2):190-197. 125.  Wray N, Markovic M, Manderson L. “Researcher saturation”: the impact of data triangulation and intensive-research practices on the researcher and qualitative research process. Qual Health Res. 2007;17(10):1392-1402. doi:10.1177/1049732307308308 126.  Butler DL, Cartier SC. Promoting Effective Task Interpretation as an Important Work Habit: A Key to Successful Teaching and Learning. Teach Coll Rec. 2004;106(9):1729-196  1758. doi:10.1111/j.1467-9620.2004.00403.x 127.  Dweck CS. Mindset: The New Psychology of Success. New York, NY: Random House; 2006. 128.  Dweck CS, Leggett EL. A social-cognitive approach to motivation and personality. Psychol Rev. 1988;95(2):256-273. doi:10.1037/0033-295X.95.2.256 129.  Zimmerman BJ. Motivational sources and outcomes of self-regulated learning and performance. In: Zimmerman BJ, Schunk DH, eds. Handbook of Self-Regulation of Learning and Performance. Routledge. New York, NY; 2011:49-64. doi:10.4324/9780203839010.ch4 130.  Dweck CS, Molden DC. Mindsets: Their impact on competence motivation and acquisition. In: Elliot A, Dweck CS, eds. Handbook of Competence and Motivation. Second. New York, NY: Guildford Press; 2017:135-154. 131.  Robins RW, Pals JL. Implicit self-theories in the academic domain: Implications for goal orientation, attributions, affect, and self-esteem change. Self Identity. 2002;1(4):313-336. doi:10.1080/1529886029010680 132.  Romero C, Master A, Paunesku D, et al. Academic and emotional functioning in middle school: the role of implicit theories. Emotion. 2014;14(2):227-234. doi:10.1037/a0035490 133.  Martocchio JJ. Effects of conceptions of ability on anxiety, self-efficacy, and learning in training. J Appl Psychol. 1994;79(6):819-825. 134.  Jourden FJ, Bandura A, Banfield JT. The impact of conceptions of ability on self-regulatory factors and motor skill acquisition. J Sport Exerc Psychol. 1991;13(3):213-226. 135.  Wood R, Bandura A. Impact of conceptions of ability on self-regulatory mechanisms and complex decision making. J Pers Soc Psychol. 1989;56(3):407-415. 197  136.  Blackwell LS, Trzesniewski KH, Dweck CS. Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Dev. 2007;78(1):246-263. doi:10.1111/j.1467-8624.2007.00995.x 137.  Butler, D. L., Schnellert, L., & Perry NE. Providing supports for self-regulated learning. In: Developing Self-Regulated Learners. Don Mills, Ontario: Pearson; 2016:105-120. 138.  Zeidner M. Test Anxiety: The State of Art. New York: Plenum; 1998. doi:10.1016/S0011-3840(80)80017-7 139.  Boekaerts M, Pekrun R. Emotions and Emotion Regulation in Academic Settings. In: Corno L, Anderman EM, eds. Handbook of Educational Psychology. 3rd ed. Abington: Routledge; 2015. doi:10.4324/9781315688244.ch6 140.  Boekaerts M, Röder I. Stress, coping, and adjustment in children with a chronic disease: A review of the literature. Disabil Rehabil. 1999;21(7):311-337. 141.  Zeidner M. Anxiety in Education. In: International Handbook of Emotions in Education. ; 2014:265-288. doi:10.4324/9780203148211.ch14 142.  Boekaerts M. Emotions, emotion eegulation, and self-regulation of learning. In: Zimmerman BJ, Schunk DH, eds. Handbook of Self-Regulation of Learning and Performance. 
Abington: Routledge; 2011:408-425. 143.  Sargeant J, Lockyer J, Mann K, et al. Facilitated reflective performance feedback: developing an evidence-and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Acad Med. 2015;90(12):1698-1706. doi:10.1097/ACM.0000000000000809 144.  Daniels LM, Gierl MJ. The impact of immediate test score reporting on university students’ achievement emotions in the context of computer-based multiple-choice exams. 198  Learn Instr. 2017;52:27-35. doi:10.1016/j.learninstruc.2017.04.001 145.  Pekrun R, Frenzel A, Goetz T, Perry R. The control-value theory of achieved emotions: a interactive approach to emotions in education. In: Schut P, Pekrun R, eds. Educational Psychology. Burlington: Academic Press; 2007:13-36. 146.  Zimmerman B. Self-efficacy: An essential motive to learn. Contemp Educ Psychol Educ Psychol. 2000;25(1):82-91. doi:10.1006/ceps.1999.1016 147.  Chen JA, Usher EL. Profiles of the sources of science self-efficacy. Learn Individ Differ. 2013;24:11-21. doi:10.1016/j.lindif.2012.11.002 148.  Zimmerman BJ, Kitsantas A. Acquiring writing revision and self-regulatory skill through observation and emulation. J Educ Psychol. 2002;94(4):660-668. doi:10.1037/0022-0663.94.4.660 149.  Kitsantas A, Zimmerman BJ, Geary T. The Role of Observation and Emulation in the Development of Athletic Self-Regulation. J Educ Psychol. 2000;92(4):811-817. 150.  Naismith LM, Lajoie SP. Motivation and emotion predict medical students’ attention to computer-based feedback. Adv Heal Sci Educ. 2017;Epub ahead. doi:10.1007/s10459-017-9806-x 151.  Darke SJ. Effects of anxiety on inferential reasoning task performance. J Pers Soc Psychol. 1988;55(3):499-505. 152.  Sorg BA, Whitney P. The effect of trait anxiety and situational stress on working memory capacity. J Res Pers. 1992;26(3):235-241. 153.  Ashcraft MH, Kirk EP. The relationships among working memory, math anxiety, and performance. J Exp Psychol Gen. 2001;130(2):224-237. 154.  Humphrey-Murto S, Mihok M, Pugh D, Touchie C, Halman S, Wood TJ. Feedback in the 199  OSCE: What Do Residents Remember? Teach Learn Med. 2016;28(1):52-60. doi:10.1080/10401334.2015.1107487 155.  Ohyama A, Nitta H, Shimizu C, et al. Educative effect of feedback after medical interview in objective structured clinical examination. Kokubyo Gakkai zasshi J Stomatol Soc. 2005;72(1):71-76. 156.  Young JQ, Van Merrienboer J, Durning S, Ten Cate O. Cognitive Load Theory: Implications for medical education: AMEE Guide No. 86. Med Teach. 2014;36(5):371-384. doi:10.3109/0142159X.2014.889290 157.  Miller GA. The magical number seven, plus orminus two: Some limits on our capacity for processing information. Psychol Rev. 1956;63(2):81-97. doi:http://dx.doi.org/10.1037/h0043158 158.  Kirschner PA, Sweller J, Clark RE. Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educ Psychol. 2006;41(2):75-86. doi:10.1207/s15326985ep4102_1 159.  Baddeley A, Eysenck MW, Anderson M. Incidental Forgetting. In: Memory. New York: Psychology Press; 2009:191-215. 160.  Bing-You RG, Trowbridge RL. Why medical educators may be failing at feedback. J Am Med Assoc. 2009;302(12):1330-1331. doi:10.1001/jama.2009.1393 161.  Fukkink RG, Trienekens N, Kramer LJC. Video feedback in education and training: Putting learning in the picture. Educ Psychol Rev. 2010;23(1):45-63. doi:10.1007/s10648-010-9144-5 162.  
Harrison CJ, Molyneux AJ, Blackll S, Wass VJ. How give personalised audio feedback 200  after summative OSCEs. Med Teach. 2015;37(4):323-326. doi:10.3109/0142159X.2014.932901 163.  van de Ridder JMM, Mcgaghie WC, Stokking KM, ten Cate OTJ. Variables that affect the process and outcome of feedback, relevant for medical training: A meta-review. Med Educ. 2015;49(7):658-673. doi:10.1111/medu.12744 164.  Watling C, Lingard L. Toward meaningful evaluation of medical trainees: The influence of participants ’ perceptions of the process. Adv Heal Sci Educ. 2012;17(2):183-194. doi:10.1007/s10459-010-9223-x 165.  Watling C, Driessen E, van der Vleuten CP, Lingard L. Learning from clinical work: the roles of learning cues and credibility judgements. Med Educ. 2012;46(2):192-200. 166.  Watling CJ, Driessen EW, van der Vleuten CPM, Vanstone M, Lingard L. Beyond individualism: professional culture and its influence on feedback. Med Educ. 2013;47(6):585-594. 167.  Ross S, Dudek N, Halman S, Humphrey-Murto S. Context, time, and building relationships: bringing in situ feedback into the conversation. Med Educ. 2016;50(9):893-895. doi:10.1111/medu.13138 168.  Frank JR, Snell LS, Cate O Ten, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638-645. doi:10.3109/0142159X.2010.501190 169.  Schut S, Driessen E, van Tartwijk J, van der Vleuten C, Heeneman S. Stakes in the eye of the beholder: an international study of learners’ perceptions within programmatic assessment. Med Educ. 2018;52(6):654-663. doi:10.1111/medu.13532 170.  Pugh D, Desjardins I, Eva K. How do formative objective structured clinical examinations drive learning? Analysis of residents’ perceptions. Med Teach. 2018;40(1):45-52. 201  doi:10.1080/0142159X.2017.1388502 202  Appendices  Appendix A: Emails sent students containing information about the FOSCE and the SOSCE   Email sent to students prior to the formative OSCE.  Sent on behalf of [name redacted], OSCE Director, MDUP and [name redacted], OSCE Lead, VFMP Dear Students,  As you know, the Year-one Formative Objective Structured Clinical Examination (OSCE) is scheduled for Saturday, December 12, 2015. The purpose of this examination is to give you the opportunity to become familiar with the format of OSCE exams, and to assist you in preparing for the Year 1 Summative OSCE exam in April, 2016. In this OSCE, you will receive immediate feedback on your performance and skills from the examiner immediately after completion of each station. These short feedback sessions are designed to give you specific and direct feedback on your performance for your continued improvement. They will give you the opportunity to ask questions and seek clarification around the feedback you receive. Each station will be 10 minutes long, followed by 5 minutes of feedback. There will be two history taking stations, two physical exam stations, and one rest station. One of the physical exam stations includes a mini post encounter probe (mini PEP), a short series of written questions given to you at 7 minutes.  Within six weeks of the exam, you will receive a written summary of your performance on each station, which will include examiners’ comments. This will give you a sense on how you performed on each station and will help guide your learning through the remainder of Year 1.  You are receiving detailed information regarding the schedule of the day of the Formative OSCE from [name redacted], Examination Site Administrator. 
A detailed student orientation will also be given on the day of the examination. Please note, attendance at the Formative OSCE exam is mandatory.

We hope that you will find the feedback from this examination helpful as you continue to develop your skills in first year.

Sincerely,

[Name redacted]
OSCE Director, MD Undergraduate Program

[Name redacted]
OSCE Lead, VFMP

Email attachment sent to students prior to the formative OSCE

1st Year Formative Objective Structured Clinical Examination (OSCE)

Date: Saturday, December 12, 2015

Location of OSCE Registration: [Address redacted]

Please ensure you bring a government-issued photo ID or your student ID card to register.

Arrival Time: There are four administrations of the exam. You have been randomly assigned to an administration for the OSCE. At the bottom of this document is a class list where you will see which administration you will be attending. This schedule is final. You are not permitted to switch administrations. If you cannot find your name, or conversely, find your name and you should not be taking the exam, please contact [name and email redacted] immediately.

1st Administration Registration Arrival Time: 7:50 AM
1st Administration Dismissal: 10:25 AM

2nd Administration Registration Arrival Time: 9:40 AM
2nd Administration Dismissal: 12:15 PM

3rd Administration Registration Arrival Time: 11:55 AM
3rd Administration Dismissal: 2:30 PM

4th Administration Registration Arrival Time: 1:45 PM
4th Administration Dismissal: 4:15 PM

Once you have registered, you cannot leave the building until your administration of the exam is over and you receive specific instructions from the OSCE staff to leave. We trust that you will display ethical behavior and professionalism, and will not speak to each other about the examination until after all students have taken the examination.

Schedule for the day: You will be registered and then escorted to the OSCE stations by support staff for the examination. The OSCE runs for 1 hour and 25 minutes. There are 5 stations, each of which is 15 minutes long. Because the OSCE is very tightly scheduled and involves many people, there can be delays. Therefore, dismissal times are estimates. Please do not schedule anything immediately after the exam.

OSCE Format

There are 5 stations, each of which is 15 minutes long (10 minute encounter, 5 minutes of examiner feedback). The stations are inclusive of your first semester of the program. There are 2 history-taking stations, 2 physical examination stations, and 1 Mini Post Encounter Probe (PEP). There is also 1 break station.

You will be assigned to different "tracks," but each student will rotate through exactly the same stations.

Each station will have the student instructions posted outside the clinic room which will provide you with specific focused instructions. You will have two minutes to change stations and read the student instructions. After the two minute reading time, you will hear a ring tone to enter the room. There is a copy of the student instructions inside of the room for you to refer back to during the 10 minute patient encounter. You will hear a verbal notification at 7 minutes saying "seven minutes" and again at 9 minutes saying "nine minutes." At 10 minutes you will hear "begin examiner feedback." After 5 minutes, you will hear a ring to leave the room and advance to the next station.

What you should wear

You need to wear a white lab coat and have your UBC ID visible at all times. Wear comfortable shoes.
Please do not use perfumes or colognes.

What you should bring

You need to bring a stethoscope, penlight, and a watch if you like. Any other equipment that is required will be provided to you at each station. We will provide you with paper for note-taking during the exam; you cannot bring any paper into the exam.

What you should not bring

The rules governing formal exams as listed in the Calendar apply to the OSCE. Specifically, cell phones, pagers, Blackberries, iPhones, calculators and the like will not be permitted. Do not bring any paper to the exam. If you bring any of these items, you will be required to leave it with your other belongings (i.e. overcoat, bag, etc.). These are kept in secure locations and watched over, but it might be better to leave them at home.

Good luck with your examination.

Examinations Site Administrator
Vancouver Fraser Medical Program
Dean's Office, Undergraduate Education
Faculty of Medicine, University of British Columbia

Email sent to students prior to the summative OSCE

Dear Class of 2019,

Please read the attached OSCE announcement in its entirety, as it contains important information about your upcoming exam on April 9, 2016. Please ensure you are familiar with the "OSCE Conduct" document, also attached.

We cannot permit any changes to the OSCE schedule.

Regards,
Examinations Site Administrator

Sent on behalf of [name redacted], Interim Director of Assessment, MDUP

With the start of MEDD 412 and 419, we would like to provide you with information regarding your upcoming assessments for the term; please find the following detailed information for MEDD 412 and MEDD 419:

- Exam calendar for written exams and OSCEs
- Exam composition for written exams and OSCEs
- Requirements for passing MEDD 412 and 419
- Exam result reporting schedule

MEDD 412

Written Exams
Exam 1: February 25, 2016. Approximately 13-15 questions per week (weeks 19-24). Total marks: 90 marks.
Individual Score Report (ISR) Release (Exam 1): ISR will be released within 3 business weeks after the exam, by March 17, 2016.
Exam 2: April 15, 2016. Histology and Anatomy. Total marks: 50 marks.
Exam 3: April 16, 2016. Approximately 13-15 questions per week (weeks 25-32). Total marks: 120 marks.
Individual Score Report (ISR) Release (Comprehensive Term Report): ISR will be released within 3 business weeks after the exam, by May 6, 2016. This will include results for all of MEDD 412 scores, except OSCE (see below).
Requirements for passing written exams: please see below.*
Requirements for Passing MEDD 412**

OSCE
Summative OSCE: April 9/10, 2016. 8-station OSCE; a detailed schedule will be emailed by March 9, 2016.
Individual Score Report (ISR) Release: ISR will be released within 3 business weeks after the exam, by May 6, 2016.

Portfolio
Please refer to the attached assessment package for detailed information on timing and deadlines.

*Requirements for Passing Written Exams in MEDD 412 (Achieving a Cumulative 60%)
The final MEDD 412 written exam score is determined by averaging the scaled exam scores across each of the three written exams. Each mark carries an equal weight. Each exam will be weighted according to the number of marks in the exam. To pass MEDD 412, the average scaled exam score must equal or exceed 60%.

**Requirements for Passing MEDD 412
In order to pass MEDD 412, you will need to achieve the following:
1. Written Exam: Greater than or equal to 60% (Cumulative)
2. OSCE: Greater than or equal to 60%
3. Portfolio: Completion
4. WBA: Greater than or equal to 60% (Cumulative of 6 summative WBAs based on a grading rubric). Please refer to the attached assessment package for more detailed information.¹

¹WBAs will alternate between formative and summative across the course. Each WBA moment will provide immediate feedback. A mid-course review will be conducted to see students' progress to date. A score of 1 in any competency indicates a student is failing to meet expectations and will be referred to Course Leadership for support. If, by the end of the course, a student is still consistently identified as failing to meet the expectations in the same competency area (receives a score of 1 in the same competency on 3 or more occasions on the WBA form), the Student Promotions Committee will administer a failing grade for WBA.

MEDD 419

Written Exams
Formative MCQ Exam: February 29, 2016. Approximately 25-30 questions; covers weeks 19-25 of MEDD 419, term 2.

Portfolios
Formative: due April 15
Summative: due May 27
Individual Score Report (ISR) Release: ISR will be released by June 10th. This will be a comprehensive report for terms 1 and 2.
Requirements for Passing MEDD 419***

***Requirements for Passing MEDD 419
In order to pass MEDD 419, you will need to achieve the following:
1. Summative Written Exam (Term 1): Greater than or equal to 60%
2. Formative Written Exams (Terms 1 and 2): Completion
3. Formative Portfolio (Term 2): Completion
4. Summative Portfolio (Term 2): Greater than or equal to 60%

Exam Conduct Policy
Please familiarize yourself with the Exam Conduct Policy (attached). Any violations of this policy will be reported to the Regional Student Promotions Committee and the Student Promotions Review Board.

Explanation of Exam Scoring Process: After an exam is completed, the answer sheets are processed by the Educational Assessment Unit. The steps include scanning the bubble sheets, double-checking the quality of the scanning, and performing quality assurance checks to ensure the scanning was accurate. Student responses are scored and psychometric data is generated. This psychometric data is then reviewed by Course Leadership at an Item Statistics Review Meeting. The performance of each exam item is reviewed and discussed, and scoring decisions are made that affect the exam results. At this point, the exam is rescored, standard-set, and results are finalized. Individual Score Reports are then created for each student and distributed within 3 business weeks from the exam.

Further Communication: This information will be updated and sent with each future exam notification.

Interim Director of Assessment
Assistant Dean, Faculty Development
Clinical Associate Professor, Department of Medicine
Faculty of Medicine, University of British Columbia

Email attachment sent to students prior to the summative OSCE

1st Year Objective Structured Clinical Examination (OSCE)

Date: Saturday, April 9, 2016

Location of OSCE Registration: [Address redacted]

Please ensure you bring a government-issued photo ID or your student ID card to register.

Arrival Time: There are three administrations of the exam. You have been randomly assigned to an administration for the OSCE. At the bottom of this document is a class list where you will see which administration you will be attending. This schedule is final. You are not permitted to switch administrations.
If you cannot find your name, or conversely, find your name and you should not be taking the exam, please contact [name and email redacted] immediately.

1st Administration Registration Arrival Time: 8:30 AM
1st Administration Dismissal: 11:00 AM

2nd Administration Registration Arrival Time: 11:00 AM
2nd Administration Dismissal: 1:30 PM

3rd Administration Registration Arrival Time: 1:30 PM
3rd Administration Dismissal: 4:00 PM

Once you have registered, you cannot leave the building until your administration of the exam is over and you receive specific instructions from the OSCE staff to leave. We trust that you will display ethical behavior and professionalism, and will not speak to each other about the examination until after all students have taken the examination.

Schedule for the day: You will be registered and then be escorted to the OSCE stations by support staff for the examination. The OSCE will begin about 30 minutes after your registration arrival time, and will run for one hour and 36 minutes. There are 8 stations, each of which is 10 minutes long. Because the OSCE is very tightly scheduled and involves many people, there can be delays. Therefore, dismissal times are estimates. Please do not schedule anything immediately after the exam.

OSCE Format

There are 8 stations, each of which is 10 minutes long. The stations are inclusive of your first year of the program. There are 3 history-taking stations, 4 physical examination stations, and 1 Post Encounter Probe (PEP).

You will be assigned to different "tracks," but each student will rotate through exactly the same stations.

Each station will have the student instructions posted outside the clinic room which will provide you with specific focused instructions. You will have two minutes to change stations and read the student instructions. After the two minute reading time, you will hear a ring tone to enter the room. There is a copy of the student instructions inside of the room for you to refer back to during the 10 minute patient encounter. You will hear a verbal notification at 9 minutes saying "Nine Minutes." At 10 minutes you will hear a ring tone to leave the station.

What should I wear?
You need to wear a white lab coat and have your UBC ID visible at all times. Wear comfortable shoes. Please do not use perfumes or colognes.

What equipment should I bring?
You need to bring a stethoscope, penlight, watch, and a pen. Any other equipment that is required will be provided to you at each station. We will provide you with paper for note-taking during the exam; you cannot bring any paper into the exam.

What equipment should I not bring?
The rules governing formal exams as listed in the Calendar apply to the OSCE. Specifically, cell phones, pagers, Blackberries, iPhones, calculators and the like will not be permitted. Do not bring any paper to the exam. If you bring any of these items, you will be required to leave it with your other belongings (i.e. overcoat, bag, etc.). These are kept in secure locations and watched over, but it might be better to leave them at home.

Good luck with your examination.
Examinations Site Administrator  Vancouver Fraser Medical Program Dean's Office, Undergraduate Education Faculty of Medicine, University of British Columbia   216  Appendix B: Sample interview guides The questions with participant numbers beside it [e.g., (P3)], refer to questions that were added during the interview period based on insight from gained from participants  Be intentional with next questions, take notes to think a bit more(P1) Introduction Thank you for agreeing to participate in this interview. My name is Cynthia and I’m a MD/PhD student at UBC. I am hoping to finish my research next year and then I’ll return to clerkship(P10).I don’t work with the medical program in any way and I am doing this project as part of my PhD because I am interested in improving the quality of education for medical students.(P2) The purpose of this interview is for me to understand your experience with the December OSCE and to understand how any learning the OSCE may have prompted any learning before, during and after the OSCE itself. There are no right or wrong answers. I am interested in your individual experience as a student. I am not trying to evaluate the quality of the OSCE itself, so please be as frank as you can.  I will keep your identity confidential and you will only be identified by your participant ID on study materials. If there is a question that you do not feel comfortable answering, please let me know and we’ll skip over it. During the interview, I will be taking some notes to help me remember key ideas in our conversation. Is this ok with you? 217  This is a semi-structured interview, which means that I have some questions to help give us some direction, but I may deviate from the list dependent on things you say and you are also welcome to talk about things that seem relevant to you that may not directly address my question.  To help me fully capture your experience, sometimes the questions may seem a bit repetitive. If you don’t have anything new to add, please let me know.  The interview will take about an hour to complete. Do you have any questions before we begin?  Interview questions Probes:  opening Overall, how did the OSCE go for you? Remember to probe here (P1)  Before I’d like you to go back a bit and talk about your experience preparing for the OSCE.   Before you did the OSCE, what were your opinions/impressions about it?(P1)  What did you think was the purpose of the December OSCE? How did that influence your preparation for it?      What were you hoping to learn by going through the OSCE?   218   What, if any, were your personal goals heading into the OSCE (i.e., what, if anything, were you hoping to accomplish)?  Did the anticipation of the OSCE lead you to do or study anything that you would not have done otherwise?   Did you have a plan or strategy for preparing for the OSCE? Can you tell me about that strategy or plan?  Did anything unexpected happen that impacted your preparation? How did you decide how you were going to prepare? Where did you get this information What prompted that decision? What motivated the goals you expressed?  What else was happening in school or in your life at the time? How did those things affect your preparation in any way? Feelings Relevance During Now I’d like you to tell me about your experience going through the OSCE itself.   How did the experience match up with what you were expecting?    219  Start at the beginning, if you can. Can you recall what were you thinking as it began?   
Recall, if you can, the thoughts you were having as you went through the OSCE.   Were there any moments when you were surprised? Tell me about those. What did you learn through this?(P1)  Were there moments when you realized something about your preparation was either helpful or not. Tell me about those.  Can you tell me about your interactions with the examiners during the OSCE. What did you learn through these interactions?(P2) Were you previously aware of the things Were you experiencing any particular feelings that led you to change your approach or your goals?   Examples? (P1)       Did you agree/disagree with the feedback?(P3) What made you feel like you could/could not ask for 220  brought up in the feedback or were??(P5) Did you learn anything new? (P5)   Did the goals you set prior to the OSCE change during the experience? If so, what led to the change? What did you learn about yourself, your ability, or medicine more generally while performing in the OSCE?     Did you learn anything that you didn’t anticipate learning?   Did you walk away from the OSCE believing your goals were [specify] were fulfilled?  What were you left wondering about after the OSCE ended? What did you do about that? clarification/questions you had about your performance(P6)    How did you learn these about these things (feedback from examiner vs self-monitor)? Did your impressions of performance match the examiner’s impression?(P3)    How did you determine whether you fulfilled your goal or not?  221  After I’d like to spend the bulk of our time talking about if and how the OSCE influenced your learning goals and activities after it was completed.  Have you spent any time thinking about your OSCE experience?   Has the program followed-up on the OSCE in any way? (P7) Probe “how to do well on OSCE” session? (P7)   Do you feel like you gained any insight from the OSCE experience into ways in which you might continue your learning?   Compared to other learning experiences you’ve had, how was the OSCE able to help you to gain insight in ways that How would you describe your life since the OSCE?   If so, tell me about what you’ve thought about?  What else has been happening in school or in your life? How has this affected the impact of the OSCE on your learning?       222  you could not gain in other clinical learning experiences? (P7, stemmed from P1)   Since you completed the OSCE, has your perception regarding its value changed? Did your goals or what were you hoping to get out of OSCE change after the OSCE? What motivated the change?  Did the December OSCE experience impact how you will prepare the OSCE in the future?   Has the OSCE experience led you to do anything else differently (e.g., in clinical settings, other classes)?      How are you feeling about your capacity to achieve these goals?     Why are they your current priorities? 223  What goals do you currently have to take the next step in your training as a medical student? Did the OSCE help or hinder you from working towards these goals? Were your goals as a medical student modified in any way by your experience in the OSCE?  Anticipated feedback What were you hoping to get from the feedback report? (P1) What are you hoping to learn from the feedback report? (P1) Beyond the feedback report, what other support or resources do you think you need to help you learn from the OSCE? (P1)   Feedback report So last week I think you received your feedback report.  Did the report change your interpretation of what you learned from the OSCE experience? 
Did it lead you to What was surprising on the report?    224  modify, remove, or add any learning goals you’ve had since completing the OSCE in December?  Did the report have any additional impacts how you will prepare the OSCE in the future?   Has the report led/will lead you to you to do anything else differently (e.g., in clinical settings, other classes)? Alignment with program goals Thinking more generally about the December OSCE now, what do you believe the MD program intended to accomplish in conducting the OSCE?  Their formally stated intention is to have students “use the assessment to guide their learning”. Were you aware of that prior to the OSCE?  How did the program’s intended goal influence your own goals?    225  What does the phrase “use the assessment to guide learning” mean to you?    Given your recent experience with the December OSCE, in what ways did the experience guide your learning/practice?    In what ways did it NOT achieve its intended purpose of guiding your learning? (disrupting/misdirecting?)  Ending questions That concludes my formal questioning.  Is there anything else that might have occurred to you during this interview that you think might be important related to learning from OSCE assessments? Is there anything that you would like to ask me?    226  Thanks again for sharing your experiences with me. Would it be ok if I contact you to ask a few follow-up questions if I need to clarify some of your answers?  Misc. What does mean when an assessment is formative?(P1)    Past assessment experiences Finally, I’d like to ask you a few questions about your past assessment experiences in general.  Can you tell me about a time in which you found an assessment experience to be particularity valuable for guiding your learning /practice improvement?   What was it about that experience that made it so?  How might the December OSCE have been made more effective in guiding your learning?   What made that assessment a good opportunity to guide your learning/practice improvement (before, during or after the assessment)?   How did it fit/not fit with other things you were learning/doing?     227  If participant cannot think of an assessment that was valuable for guiding their learning, Can you tell me about why assessments have generally not been valuable for learning/practice improvement?   Is there some way to make them valuable in your perception?  228  Sample interview guide for the SOSCE The questions with participant numbers beside it [e.g., (P3)], refer to questions that were added during the interview period based on insight from gained from participants.   Be intentional with next questions, take notes to think a bit more(P1) Introduction Thank you for agreeing to this interview. As you know, I’m a MD/PhD student at UBC and am hoping to finish my research before returning to clerkship next year(P10). I don’t work with the medical program in any way. My interest is entirely driven by a desire to improve the quality of education for medical students.(P2)  The purpose of this interview is for me to explore your experience with the April OSCE and to understand your learning in the context of that OSCE – before, during, and after the OSCE itself.   There are no right or wrong answers. I am interested in your individual experience as a student. I am not trying to evaluate the quality of the OSCE itself, so please be as frank as you can about your experience.   I will keep your identity confidential. You will only be identified by your participant ID on study materials.  
229  If there is a question that you do not feel comfortable answering, please let me know and we’ll skip over it.  During the interview, I will be taking some notes to help me remember key ideas in our conversation. Is this ok with you?  This is a semi-structured interview, which means that I have some questions to give us some direction. But I want this to be a conversation. So you are welcome to talk about things that seem relevant to you that may not directly address my question. This is about YOUR experience.  To help me fully capture your experience, sometimes the questions may seem a bit repetitive. If you don’t have anything new to add, just say so and we’ll move on.   The interview will take about an hour to complete. Do you have any questions before we begin?  Interview questions Probes: Probes (from FOSCE interview) opening Overall, how did the OSCE go for you? Remember to probe here (P1)  Before I’d like you to go back a bit and talk about your experience preparing for the April OSCE.      230   Before you did the April OSCE, what were your expectations about it?(P1)  What did you think was the purpose of the April OSCE?   What, if any, were your personal goals heading into the OSCE (i.e., what, if anything, were you hoping to accomplish)? How did that influence your preparation for it?   Did you have a plan or strategy for preparing for the OSCE? Can you tell me about that strategy or plan?   Did anything impact your preparation?       What were you hoping to learn by going through the OSCE?  What made you decide on these goals?   How did you decide how you were going to prepare?   What motivated you to prepare for the April OSCE?        Prior to doing in the April OSCE, what made did you feel that it would be a good place to learn about [insert goals]?   Before to the April OSCE, what got in the 231  Did the anticipation of the OSCE lead you to do or study anything that you would not have done otherwise?     What else was happening in school or in your life at the time? How did those things affect your preparation in any way? Feelings Relevance way of you learning about [insert goals]?       During Now I’d like you to tell me about your experience going through the OSCE itself.   Start at the beginning, if you can. Can you recall what were you thinking as it began?   How did the experience match up with what you were expecting?   Were you experiencing any particular feelings that led you        232  Recall, if you can, the thoughts you were having as you went through the OSCE.   Were there any moments when you were surprised? Tell me about those. What did you learn through this?(P1)  Were there moments when you realized something about your preparation was either helpful or not. Tell me about those.  Did you have a sense of how you did immediately after each station?   What did you notice as you monitored your performance through the OSCE? Were some moments more important than others? How so? What did you learn in those moments? What will you do, if anything, as a result of that?  to change your approach or your goals?   Examples? (P1)                       What were you trying to learn through tracking your performance?  233   Did you have any interaction with any of your April OSCE examiners?  If so, did you learn through these interactions?(P2) Were you previously aware of the things brought up in the feedback or were??(P5) Did you learn anything new? (P5)   Did the goals you set prior to the OSCE change during the experience? If so, what led to the change?  
What did you learn about yourself, your ability, or medicine more generally while performing in the OSCE?   Did you learn anything that you didn’t anticipate learning?   Did you walk away from the OSCE believing your goals were [specify] were fulfilled?        How did you determine whether you fulfilled your goal or not?  What did you learn through tracking your performance?   knowing this information help you understand [insert goal]?   Did you experience any difficulty or uncertainty in figuring this out? What made it difficult?  234   What were you left wondering about after the OSCE ended? What did you do about that? Is knowing this information important to you, why?  After I’d like to spend the rest of our time talking about if and how the OSCE influenced your learning goals and activities after it was completed.  Have you spent any time thinking about or reflecting on your OSCE experience?    Do you feel like you gained any insight from the OSCE experience into ways in which you might continue your learning?   How would you describe your life since the OSCE?   If so, tell me about what you’ve thought about?     After the OSCE, was there anything that was happening in your life that disrupted or      What prompted you to think about your performance? What were you hoping to achieve through thinking/reflecting?    235  Compared to other learning experiences you’ve had, how was the OSCE able to help you to gain insight in ways that you could not gain in other clinical learning experiences? P7, stemmed from P1)   Since you completed the OSCE, has your perception regarding its value changed? Did your goals or what were you hoping to get out of OSCE change after the OSCE? What motivated the change?  Did the April OSCE experience impact how you will prepare for an OSCE in the future?   facilitated the impact of the OSCE on your learning?         How are you feeling about your capacity to achieve these goals?    What did you think about and how did you go about it (solo, other people).  Was there anything that prevented you from engaging in thinking about or reflecting on your OSCE performance?  Did you experience any difficulty or uncertainty in figuring 236  Has the OSCE experience led you to do anything else differently or led you to plan to do anything different in the future? (e.g., in clinical settings, other classes)   Did the OSCE help or hinder you from working towards any goals you have as a medical student? Were your goals as a medical student modified in any way by your experience in the OSCE?      Why are they your current priorities? this out? What made it difficult?     Prompt changes in a certain skill, approach to learning,  Anticipated feedback Now I would like to talk to you a bit about the written report you received.  What were you hoping to get from the feedback report? (P1)    Why were you interested in that information? Feedback report   What was surprising on the report? How did the information on the 237  Did the report change your interpretation of what you learned from the OSCE experience? Did it lead you to modify, remove, or add any learning goals you’ve had since completing the OSCE in April?  Did the report have any additional impacts how you will prepare the OSCE in the future?   Has the report led/will lead you to you to do anything else differently (e.g., in clinical settings, other classes)?    reports either help or hinder you from learning from the OSCE? 
Alignment with program goals Thinking more generally about the April OSCE now, what do you believe the MD program intended to accomplish in conducting the April OSCE?    How did the program’s intended goal influence your own goals?   238   Given your recent experience with the April OSCE, in what ways did the experience guide your learning/practice?   Did anything about the April OSCE get in the way/or limit its usefulness in guiding your learning/practice?    Are there other support or resources you think might help you learn from the OSCE?(P1)          Comparison with December OSCE Finally, I’d like to ask you to compare the December and April OSCEs as opportunities for learning – before, during, and after the actual OSCE experience. What comes to mind as you compare those two OSCE experiences?   preparation and feedback (P6) Beliefs about alignment between OSCE and real life   Were there any differences between what you wanted to learn in the December and April OSCEs? If so, what were the differences, and why? 239   Were you aware that the April OSCE was a summative exam?  Were you aware that the December OSCE was formative? What does that word mean to you?   Did and if so, how did whether the exam was summative or formative influence your learning before, during, after?         Ending questions That concludes my formal questioning.  Is there anything else that I haven’t asked about that you think might help me understand how students learn from those OSCE assessments?  Is there anything that you would like to ask me? Thanks again for sharing your experiences with me.   Other What does mean when an assessment is summative? (P1)   
