UBC Theses and Dissertations
The lateral prefrontal cortex supports an integrated representation of task-rules and expected rewards… Dixon, Matt Luke 2011

The lateral prefrontal cortex supports an integrated representation of task-rules and expected rewards: evidence from fMRI-adaptation

by

Matt Luke Dixon

Hon. B.Sc., University of Toronto, 2006

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in The Faculty of Graduate Studies (Psychology)

The University of British Columbia (Vancouver)

August 2011

© Matt Luke Dixon, 2011

ABSTRACT

Our capacity for self-control is supported by the use of behaviour-guiding rules. A fundamental question is how we decide which one out of many potential rules to follow. If different rules were integrated with their expected reward-value, they could be compared, and the one with the highest value selected. However, it currently remains unknown whether any areas of the brain perform this integrative function. To address this question, we took advantage of functional magnetic resonance imaging (fMRI)-adaptation—the ubiquitous finding that repeated as compared to novel stimuli elicit a change in the magnitude of neural activity in areas of the brain that are sensitive to that stimulus. We created a novel fMRI-adaptation paradigm in which instruction cues signaled novel or repeated task-rules and expected rewards. We found that the inferior frontal sulcus (IFS)—a sub-region of the lateral prefrontal cortex—exhibited fMRI-adaptation uniquely when both rule and reward information repeated as compared to when it was novel. fMRI-adaptation was not observed when either factor repeated in isolation, providing strong evidence that the IFS supports an integrated representation of task-rules and rewards. Consistent with an integrative role, the IFS exhibited correlated activity with numerous rule-related and reward-related areas of the brain across the entire experimental time-course.
Additionally, the correlation strength between the IFS and a subset of these regions changed as a function of the novelty of rule and reward information presented during the instruction cue period. Our results provide novel evidence that the IFS integrates rules with their expected reward-value, which in turn can guide complex decision making.

PREFACE

The research conducted in this study was approved by the UBC clinical research ethics board (certificate number: H10-00030).

TABLE OF CONTENTS

ABSTRACT
PREFACE
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGMENTS
1  INTRODUCTION
  1.1  Theoretical accounts of lateral prefrontal cortex function
  1.2  Evidence that the LPFC plays a motivational role
    1.2.1  Anatomical evidence for a motivational role
    1.2.2  Functional evidence for a motivational role
      1.2.2.1  Convergent evidence from diverse methodological approaches
      1.2.2.2  The LPFC integrates actions and rewards
      1.2.2.3  The LPFC is necessary for reward-based decision making
      1.2.2.4  The LPFC may combine rule and reward information
    1.2.3  Questions about the information represented by LPFC neural activity
  1.3  Current research
    1.3.1  Research question and methodological approach
    1.3.2  Task design
    1.3.3  Experimental hypotheses
    1.3.4  Analysis considerations
2  METHODS
  2.1  Subjects
  2.2  Task design
  2.3  fMRI data acquisition
  2.4  fMRI data preprocessing
  2.5  fMRI data analysis
    2.5.1  First-level model
    2.5.2  Second-level random effects analysis
    2.5.3  ROI definition
    2.5.4  Time-course visualization
    2.5.5  Functional connectivity and correlation changes
3  RESULTS
  3.1  Behavioural data: subjects are motivated to earn money
  3.2  fMRI data
    3.2.1  Validation of the fMRI-adaptation paradigm
    3.2.2  fMRI-adaptation for repeated reward information
    3.2.3  fMRI-adaptation for repeated rule information
    3.2.4  fMRI-adaptation evidence that the LPFC integrates rules and rewards
    3.2.5  The IFS interacts with the rule and reward networks
    3.2.6  The IFS exhibits dynamic correlation changes
4  DISCUSSION
  4.1  A new perspective on the role of the LPFC
  4.2  LPFC and model-based decision making
  4.3  Implications for understanding higher "cognitive" processes
  4.4  Interpretational and methodological issues
  4.5  Questions remaining to be addressed by future studies
  4.6  Conclusions
REFERENCES

LIST OF TABLES

Table 1. Region exhibiting fMRI-adaptation for repeated reward information
Table 2. Regions exhibiting fMRI-adaptation for repeated rule information
Table 3. Regions exhibiting fMRI-adaptation for repeated rule and reward information
Table 4. Regions exhibiting significant functional connectivity with the IFS

LIST OF FIGURES

Figure 1. Illustration of the trial structure
Figure 2. Illustration of the four conditions of interest
Figure 3. Accuracy and reaction time (RT) data
Figure 4. Regions exhibiting fMRI-adaptation for repeated reward information
Figure 5. Regions exhibiting fMRI-adaptation for repeated rule information
Figure 6. Regions exhibiting fMRI-adaptation for repeated rule and reward information
Figure 7.
Left IFG time-course time-locked to the presentation of the second instruction cue
Figure 8. Regions exhibiting functional connectivity with the IFS
Figure 9. Interregional correlations between the IFS and the rule and reward networks

ACKNOWLEDGMENTS

I would like to thank my advisor Dr. Kalina Christoff for her support and trust in my thesis project as well as her guidance and feedback. I thank my thesis committee members Dr. Todd Handy and Dr. Jess Tracy for their encouragement. I am indebted to Dr. Alex MacKay, Trudy Harris, Paul Hamill, and Linda James at the UBC MRI Research Centre for making data collection a smooth and enjoyable process. I gratefully appreciate the assistance, critical feedback, and support I received from my lab members Melissa Ellamil, Kieran Fox, and Graeme McCaig. I am most grateful to the participants of this study for their willingness to keep focused in an MRI scanner for 2 hours. The love and support I have continually received from my family and friends has been amazing. Finally, this work would not have been possible without the financial support provided by a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC).

1  INTRODUCTION

The capacity to act intentionally based on internal goals and subtle contextual information often requires the use of if-then rules (e.g., if I want to maintain my New Year's resolution to eat healthier then I should forego dessert). However, a fundamental question is how we decide to act based on one particular rule when there are multiple potential rules to follow, or how we decide to expend the effort to use a rule to override a reflexive habit-based response that conflicts with long-term goals (as in the example above). Such complex decision making would be facilitated if rules were integrated with their expected reward value.
In this way, a comparison could be made between the expected reward-value of different rules, or between the value of a rule and the value of a habitual response, and the option with the highest expected value could be selected. Considerable research has demonstrated that the orbitofrontal cortex (OFC) and anterior cingulate cortex (ACC) integrate expected rewards with stimuli and actions, respectively, thus facilitating simple decision making (Matsumoto, Suzuki, & Tanaka, 2003; Rudebeck et al., 2008; Rushworth, Behrens, Rudebeck, & Walton, 2007; Schoenbaum & Esber, 2010; Wallis, 2007). However, surprisingly little research has directly examined whether any areas of the brain integrate rule and reward information.

1.1  Theoretical accounts of lateral prefrontal cortex function

It is possible that the paucity of research examining potential integrative regions of the brain is related to the fact that prominent theories of executive control and complex decision making posit a strict anatomical segregation between regions that represent rules and regions that represent motivational incentives such as rewards (Botvinick, Braver, Barch, Carter, & Cohen, 2001; Hazy, Frank, & O'Reilly R, 2007; Kouneiher, Charron, & Koechlin, 2009; Ridderinkhof, Ullsperger, Crone, & Nieuwenhuis, 2004). According to these theories, the lateral prefrontal cortex (LPFC) plays a regulative role by representing behaviour-guiding rules, whereas the medial PFC/ACC and basal ganglia play a motivational role by computing the costs and benefits of taking different actions (based on effort expenditure, reward magnitude and delay, probability of making an error, etc.). The representation of motivationally salient events by these latter areas putatively results in a gain signal that engages the rule-based regulatory control supported by the LPFC (Botvinick, et al., 2001; Kouneiher, et al., 2009; Ridderinkhof, et al., 2004).
Although this perspective provides a parsimonious mechanism by which motivational incentives could alter the strength of an already activated rule, it is unclear how a non-specific modulatory gain signal would allow value information to be assigned to different rules, and therefore, how one particular rule would be selected in the first place. On the other hand, perhaps dynamic phase coupling among distributed brain regions (i.e., neural synchrony) could link reward incentives to specific rules. A separate model articulated by Frank and colleagues (Frank & Badre, in press; Hazy, et al., 2007) suggests that the basal ganglia register motivationally salient events and, through trial-and-error reinforcement learning, influence the specific contextual information that is represented by the LPFC in service of action selection. While this model has considerable explanatory power, it is unclear whether it can be extended to novel or rapidly changing environments in which trial-and-error learning may be unsuitable to discern the reward-value of rules.

As an alternative to a segregation perspective, the brain could support complex decision making through dedicated neural structures that integrate information about rules and rewards. In particular, the LPFC is an ideal candidate region that could potentially support an abstract integrative representation via re-mapping rule and reward information initially represented in separate brain regions (Gray, Braver, & Raichle, 2002; Pessoa, 2008; Watanabe, 2007). Although the LPFC's role in representing behaviour-guiding rules receives the most attention (for reviews see Badre & D'Esposito, 2009; Bunge, 2004; Duncan, 2010; Koechlin & Summerfield, 2007; Miller & Cohen, 2001), anatomical and functional data (reviewed below) suggest that this region may also directly represent motivational information (e.g., expected rewards), therefore making the LPFC well-suited to play an integrative role.
Moreover, dedicated integrative neural circuitry would be consistent with current conceptions of simple decision making implemented in other brain regions (e.g., the OFC and ACC).

1.2  Evidence that the LPFC plays a motivational role

1.2.1  Anatomical evidence for a motivational role

It is well known that the LPFC shares reciprocal anatomical connections with key rule-related areas of the brain including the premotor cortex, supplementary motor area (SMA), posterior parietal cortex, and lateral temporal cortex (Petrides & Pandya, 1988, 1999, 2002, 2006; Yeterian, Pandya, Tomaiuolo, & Petrides, 2011). However, there are also robust bidirectional anatomical connections between the LPFC and key emotional/motivational regions of the brain including the ACC (dorsal area BA 32 and ventral, rostral area BA 24), the posterior cingulate cortex (PCC), every sector of the OFC (BA 11, 47/12, 13, 14), and the anterior insula (Barbas & Mesulam, 1985; Morris, Pandya, & Petrides, 1999; Petrides & Pandya, 1999, 2002, 2006; Yeterian, et al., 2011). These regions have been heavily implicated in motivational/reward processing (Craig, 2002; Croxson, Walton, O'Reilly, Behrens, & Rushworth, 2009; Kable & Glimcher, 2007; Knutson, Fong, Bennett, Adams, & Hommer, 2003; Noonan et al., 2010; O'Doherty, 2004; Plassmann, O'Doherty, & Rangel, 2007; Plassmann, O'Doherty, & Rangel, 2010; Preuschoff, Quartz, & Bossaerts, 2008; Rudebeck, Bannerman, & Rushworth, 2008; Rushworth & Behrens, 2008) and it seems likely that direct afferent input from these regions would be incorporated into the representational content supported by the LPFC.

1.2.2  Functional evidence for a motivational role

1.2.2.1  Convergent evidence from diverse methodological approaches

Across numerous studies employing diverse methodological approaches, the findings converge on the conclusion that the LPFC plays a central role in motivational processing.
Electrophysiological and functional neuroimaging studies show increased LPFC activity in response to reward-predicting cues (Matsumoto, et al., 2003; Wallis & Miller, 2003), during delay periods when a specific reward is expected (Hikosaka & Watanabe, 2000; Watanabe, 1996), and during feedback periods when rewards are obtained or an error is signalled (Histed, Pasupathy, & Miller, 2009; Li, Delgado, & Phelps, 2011; Mansouri, Matsumoto, & Tanaka, 2006; Seo, Barraclough, & Lee, 2007). Additionally, LPFC activation related to holding information in mind (e.g., a spatial location) is influenced by the availability of rewards (Leon & Shadlen, 1999; Pochon et al., 2002; Watanabe, 1996) and the emotional state of the subject (Gray, et al., 2002). Complementing these correlational findings, LPFC lesions in monkeys disrupt the ability to estimate the reward-value of stimuli when the magnitude of and delay until the reward are concurrently varied (Simmons, Minamimoto, Murray, & Richmond, 2010) and interfere with the ability to use a previously learned behavioural strategy to obtain rewards in a detour reaching task (Wallis, Dias, Robbins, & Roberts, 2001). Human lesion studies paint a similar picture, showing that LPFC damage is often accompanied by disturbances in mood and motivation, including increased negative affect and irritability, emotional blunting, poverty of speech, a lack of drive and energy, disinterest in the world, and diminished self-initiated action (Fuster, 2008; Gillihan et al., 2011; Levy & Dubois, 2006; Paradiso, Chemerinski, Yazici, Tartaro, & Robinson, 1999). Thus, neuroimaging, electrophysiological, and lesion work converge in demonstrating that the LPFC is responsive to motivationally salient events and is necessary for normal motivated action.
1.2.2.2  The LPFC integrates actions and rewards

Several studies also suggest that motivational information represented by the LPFC may be combined with information about actions (Barraclough, Conroy, & Lee, 2004; Histed, et al., 2009; Seo, et al., 2007; Wallis & Miller, 2003). When monkeys decide between two response options associated with different reward values, a significant proportion of LPFC neurons encode both the forthcoming response and the expected reward magnitude (Wallis & Miller, 2003). Complementing this finding, when monkeys play an economic decision making task in which they must use knowledge of recent actions and reward outcomes in order to determine which one of two current response options will be rewarded, LPFC activity reflects both the monkey's choice and the associated outcome from the previous few trials (Barraclough, et al., 2004; Seo, et al., 2007). Finally, Histed et al. (2009) demonstrated that the capacity for reward feedback to shape behaviour is represented by LPFC activity. They had monkeys learn arbitrary associations between visual cues and a leftward or rightward saccade via trial-and-error feedback. They found that following a correct response, when a juice reward was presented, many LPFC neurons exhibited increased firing that persisted into the next trial, and these same neurons then exhibited greater response selectivity (e.g., greater preference for a right saccade) on the subsequent trial. Importantly, changes in neural selectivity were associated with an increased likelihood of making a correct response. Thus, recent electrophysiological data have conclusively established that information about actions and rewards comes together at the level of single neurons in the LPFC.

1.2.2.3  The LPFC is necessary for reward-based decision making

The role of the LPFC in reward-based decision making extends beyond associating actions and rewards, however.
LPFC activity correlates with important decision variables including the subjective value assigned to stimuli (Plassmann, et al., 2010; Tobler, Christopoulos, O'Doherty, Dolan, & Schultz, 2009), uncertainty in the optimal choice (Huettel, Song, & McCarthy, 2005), an integration of expected value and risk (Tobler, et al., 2009), the temporally discounted reward value of stimuli derived from combining reward magnitude and delay (Kim, Hwang, & Lee, 2008), and the subjective cognitive cost assigned to different decision options (McGuire & Botvinick, 2010). Furthermore, LPFC activity is associated with choosing delayed rewards during financial decision making (McClure, Laibson, Loewenstein, & Cohen, 2004; Tanaka et al., 2004), and disrupting LPFC activity with transcranial magnetic stimulation (TMS) leads to a greater preference for risky options (Knoch et al., 2006) and for smaller, immediate rewards over larger, delayed rewards (Figner et al., 2010). These findings point to a prominent role of the LPFC in many aspects of reward-based decision making, in particular integrating multiple relevant factors and promoting choices associated with long-term rewards. This further supports the idea that the LPFC is not just a high-level cognitive area of the brain, but rather is also involved in key computations pertaining to motivational processing.

1.2.2.4  The LPFC may combine rule and reward information

Finally, a series of elegant studies by Braver and colleagues (Beck, Locke, Savine, Jimura, & Braver, 2010; Jimura, Locke, & Braver, 2010; Locke & Braver, 2008; Savine & Braver, 2010) suggests that the LPFC may have the capacity to integrate rule and reward information.
Using a task-switching paradigm, Savine and Braver (2010) found that left dorsolateral prefrontal cortex (DLPFC) activity was greater when demands on rule processing increased (task-switch versus single-task blocks) and also found that cues signaling the availability of a monetary reward elicited additional activation in this same region. The authors suggested that the left DLPFC may integrate task (i.e., rule) and incentive information during cue periods to optimize behavioural performance. In another study, Jimura et al. (2010) had subjects perform a Sternberg working memory task within separate blocks (or contexts) in which there either was or was not the potential to earn money. In this task, subjects had a simple rule to use: determine whether a probe word was included within an immediately preceding 5-word memory set. LPFC activity reflected the rule-based component of the task and was additionally elevated within the monetary reward context. There was also a shift in the temporal dynamics of LPFC activation: transient activation was greater during the early part of trials in the reward context (indicating proactive rule use), but was greater during the late part of trials in the no-reward context (indicating reactive rule use). This suggests that the reward context may have been incorporated into the LPFC's representation of the task-rules.

1.2.3  Questions about the information represented by LPFC neural activity

The data reviewed above make a strong case that the LPFC is sensitive to motivational incentives such as rewards, and there is clear electrophysiological evidence that this information is integrated with actions and information held in working memory. The neuroimaging data further suggest that the LPFC may be capable of integrating rule and reward information. However, this latter idea remains contentious.
The observation of increased or temporally altered rule-related activity in the LPFC when rewards are available seems most compatible with an integrative account. However, it has been argued that changes in LPFC activity in the presence of rewards actually reflect a "boosted" representation of the task-rules, or a change in the allocation of attention, rather than a representation of the reward per se (Kennerley & Wallis, 2009; Kouneiher, et al., 2009). As such, direct evidence that the LPFC integrates rule and reward information is still lacking.

1.3  Current research

1.3.1  Research question and methodological approach

The current study provided a direct test of the hypothesis that the LPFC integrates rule and reward information by taking advantage of functional magnetic resonance imaging adaptation (fMRI-adaptation)—a technique that is widely used to examine the specific information represented by different brain regions in a manner that is more direct than traditional 'activation-based' paradigms. fMRI-adaptation exploits the fact that during repeated as compared to novel presentation of a specific stimulus, there is a selective change in the magnitude of the blood-oxygen-level dependent (BOLD) response in regions that are sensitive to that particular stimulus. Thus, by manipulating the specific aspect of a stimulus that is being repeated, fMRI-adaptation can more directly reveal the information that is supported by different brain regions.
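The core logic of the technique is a simple contrast between responses to repeated and novel presentations. The following toy sketch, with illustrative numbers only (not data from this study), shows how the sign of that contrast distinguishes the two adaptation effects discussed below:

```python
import numpy as np

# Hypothetical per-trial response amplitudes (e.g., beta estimates, arbitrary
# units) for one region during the second instruction cue. Values are
# illustrative only.
novel_betas = np.array([1.2, 1.0, 1.1, 0.9, 1.3])
repeated_betas = np.array([0.7, 0.6, 0.8, 0.5, 0.7])

def adaptation_effect(novel, repeated):
    """Mean repeated-minus-novel difference: negative values indicate
    repetition suppression, positive values repetition enhancement."""
    return repeated.mean() - novel.mean()

effect = adaptation_effect(novel_betas, repeated_betas)
direction = "suppression" if effect < 0 else "enhancement"
```

With these toy values the repeated mean (0.66) falls below the novel mean (1.1), so the contrast is negative, i.e., repetition suppression; reversing the inequality would correspond to repetition enhancement.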
Although fMRI-adaptation is most commonly used to examine visual processing (Grill-Spector, Henson, & Martin, 2006; Grill-Spector et al., 1999; Grill-Spector, Kushnir, Edelman, Itzchak, & Malach, 1998; Grill-Spector & Malach, 2001; Henson, Shallice, & Dolan, 2000; Turk-Browne, Yi, Leber, & Chun, 2007; Xu, Turk-Browne, & Chun, 2007), it has also been utilized in studies examining stimulus-response learning (Chouinard & Goodale, 2009; Dobbins, Schnyer, Verfaellie, & Schacter, 2004; Salimpoor, Chang, & Menon, 2010), mirror neurons (Chong, Cunnington, Williams, Kanwisher, & Mattingley, 2008; Kilner, Neal, Weiskopf, Friston, & Frith, 2009; Lingnau, Gesierich, & Caramazza, 2009), conceptual/semantic decision making (Buckner et al., 1998; Race, Shanker, & Wagner, 2009), and theory of mind (Jenkins, Macrae, & Mitchell, 2008).

1.3.2  Task design

We created a novel fMRI-adaptation paradigm in which we manipulated subjects' exposure to rule and reward information (see Figures 1 and 2). Subjects (N = 15) performed one of two tasks on each trial—decide if a word has an abstract or concrete meaning, or decide if a face is male or female. These tasks involved simple if-then rules (e.g., if words task and if concrete then press button "1", if abstract then press button "2"). On some trials, subjects could earn a fixed amount of money if they responded accurately and within a 1400 ms time-window. Each trial began with an instruction cue that allowed subjects to mentally represent the current task-rules and expected reward (money or no money). Importantly, on about half of the trials, a second instruction cue was presented after a delay period. Across the two instruction cues, we systematically varied whether there was repetition of the task-rules, the expected monetary reward, or both (see Figure 2).
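The repetition structure across the two cues can be made concrete with a short sketch. As described in the Methods, the second instruction cue always signaled the words task with money available, so a trial's condition depends only on the first cue; the function and label names below are illustrative, not from the study's actual analysis code:

```python
# Second instruction cue: always the words task with money available.
CUE2 = ("words", "money")  # (task-rules, expected reward)

def label_condition(cue1):
    """Assign one of the four adaptation conditions based on what the first
    instruction cue shares with the (fixed) second instruction cue."""
    task1, reward1 = cue1
    task2, reward2 = CUE2
    rule_repeated = (task1 == task2)
    reward_repeated = (reward1 == reward2)
    if rule_repeated and reward_repeated:
        return "rule+reward repeated"
    if rule_repeated:
        return "rule repeated"
    if reward_repeated:
        return "reward repeated"
    return "novel"

# The four possible first cues map onto the four conditions of interest:
assert label_condition(("faces", "no money")) == "novel"
assert label_condition(("faces", "money")) == "reward repeated"
assert label_condition(("words", "no money")) == "rule repeated"
assert label_condition(("words", "money")) == "rule+reward repeated"
```

Because instruction cue 2 is physically identical across all four conditions, any difference in its evoked response can only reflect what was primed by instruction cue 1.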
Crucially, this allowed us to look for fMRI-adaptation: we examined neural activity during the second instruction cue and identified brain regions demonstrating a differential BOLD response for repeated as compared to novel rules, rewards, or their combination. Strong evidence that the LPFC represents an integration of task-rules and expected rewards would be to demonstrate fMRI-adaptation in response to repetition of rule and reward information, but not in response to repetition of rules or rewards in isolation.

1.3.3  Experimental hypotheses

fMRI-adaptation comes in two flavours: repetition suppression and repetition enhancement. Repetition suppression refers to a reduced neural response for repeated relative to novel stimuli and is related to faster, more efficient access to a given representation (Grill-Spector, et al., 2006; Henson, et al., 2000). For example, if a familiar face is presented twice in a row, the process of representing the face may be easier and require fewer neural resources the second time around (Henson, et al., 2000). On the other hand, repetition enhancement refers to an increased neural response for repeated relative to novel stimuli and is often observed when a new perceptual representation or stimulus-response association needs to be constructed (R. Henson & M. D. Rugg, 2003; Henson, et al., 2000; Salimpoor, et al., 2010). For instance, if an unfamiliar face is presented twice in a row, the precise configuration of the features of the face may not be fully encoded during the first exposure, and a process of refining or consolidating this new representation during the second exposure may be required—and would ostensibly require extra neural resources (Henson, et al., 2000). Based on this prior work, we expected to observe: 1) repetition suppression during repetition of the expected reward alone and during repetition of the rules alone, as repeated reward or rule information should be easier to access and represent.
2) repetition enhancement during repetition of both rule and reward information; if this information is being integrated, it should involve the construction of a new representation and may require an extended process of refining and consolidating the integrated representation.

1.3.4  Analysis considerations

Our analysis concentrated on several anatomically defined regions of interest (ROIs) that have been strongly implicated in rule and/or reward processing. Regions consistently related to rule processing (referred to here as the 'rule network' as a heuristic) that we focused on included the lateral prefrontal cortex—in particular, the rostrolateral prefrontal cortex (RLPFC) and the region along the inferior frontal sulcus (IFS) and adjacent middle and inferior frontal gyri (MFG and IFG)—as well as the intraparietal sulcus (IPS), posterior middle temporal gyrus (pMTG), and pre-supplementary motor area (pre-SMA)/mid-cingulate (Badre & D'Esposito, 2009; Buckley et al., 2009; Bunge, 2004; Christoff, Keramatian, Gordon, Smith, & Madler, 2009; Christoff et al., 2001; Donohue, Wendelken, Crone, & Bunge, 2005; Dosenbach et al., 2006; Duncan, 2010; Hampshire, Thompson, Duncan, & Owen, 2011; Koechlin, Ody, & Kouneiher, 2003; Miller & Cohen, 2001; Sakai, 2008; Wallis, Anderson, & Miller, 2001). Regions consistently related to reward processing (referred to here as the 'reward network' as a heuristic) that we focused on included the orbitofrontal cortex/ventromedial prefrontal cortex (OFC/VMPFC), rostral anterior and posterior cingulate cortices (rACC and PCC), anterior insula, and nucleus accumbens (NAcc) (Craig, 2002; Kable & Glimcher, 2007; Knutson & Cooper, 2005; Kringelbach & Rolls, 2004; O'Doherty, 2004; Plassmann, et al., 2007; Preuschoff, et al., 2008; Rushworth & Behrens, 2008; Schoenbaum, Roesch, Stalnaker, & Takahashi, 2009; Walton, Behrens, Buckley, Rudebeck, & Rushworth, 2010).
2 METHODS

2.1 Subjects

Subjects were 15 right-handed healthy adults (M = 27.4 years, SD = 5.51; 8 female) with no history of psychiatric or neurological illness. All subjects provided written informed consent and received payment for their participation. The study was approved by the UBC clinical research ethics board.

2.2 Task design

The software package E-Prime (Psychology Software Tools, Pittsburgh, PA, USA) was used to implement the task. Stimuli were presented using a back-projection system. On each trial, subjects performed one of two tasks: either they decided if a face was male or female, or they decided if a word had a concrete or abstract meaning (see Figure 1). These tasks required simple if-then rules (e.g., for the face task: if male then press button "1", if female then press button "2"). On 75% of the trials, participants could earn money at the end of the experiment by responding accurately and within a 1400 ms time-window. Subjects received $30 for participating in the fMRI scanning session and were told that they could earn an additional $30 if they earned all of the available money. The chance to double their earnings ensured that the task was very motivating for subjects. Each correct answer was worth 25 cents. The remaining 25% of trials served as neutral trials with no money available. Prior to the presentation of a face or a word stimulus, an instruction cue appeared and informed participants of which task to perform on that trial and whether or not money was available to be won. The instruction cues were familiar visual images that subjects learned prior to the experiment and were selected to be easy to represent in mind (see Figures 1 and 2). The instruction cues did not specify a particular response, but rather, only a set of stimulus-response contingencies (i.e., rules). The appropriate response could not be determined until the ensuing stimulus appeared.
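The if-then rule structure just described can be pictured as two small stimulus-response mappings, one per task. A minimal sketch follows; note that the text specifies button assignments only for the face task, so the word-task mapping here is a hypothetical illustration:

```python
# Sketch of the two if-then rule sets signaled by the instruction cues.
# The face-task mapping follows the example in the text; the word-task
# button assignment is an assumption for illustration only.
RULES = {
    "faces": {"male": "1", "female": "2"},        # male/female judgment
    "words": {"concrete": "1", "abstract": "2"},  # concrete/abstract judgment
}

def respond(task, category):
    """Apply the currently cued task-rule to a stimulus category."""
    return RULES[task][category]

print(respond("faces", "male"))    # prints 1
print(respond("words", "abstract"))  # prints 2
```

The point of the sketch is that the cue selects an entire stimulus-response mapping, not a single response: the correct button cannot be determined until the stimulus category is known.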
Subjects were told to use the instruction cue to internally rehearse the task rules and to think about the expected reward (i.e., money or no money). On 40% of trials, a single instruction cue appeared prior to the stimulus. Crucially, on the remaining 60% of trials, the first instruction cue was followed by a delay period and then presentation of a second instruction cue. Subjects were told in this case to forget about the first cue and follow the content of the second cue. These double-instruction cue trials were of primary interest as they allowed us to examine fMRI-adaptation. Across the two instruction cues, we systematically varied whether there was repetition of the rules, expected reward, or both. We examined neural activity during the second instruction cue and looked for brain regions showing fMRI-adaptation (i.e., a change in the magnitude of neural activity) according to the specific information that was being repeated. The first instruction cue signaled one of four combinations of task rules and expected outcome: the faces task and no money available; the faces task and money available; the words task and no money available; or the words task and money available. In contrast, the second instruction cue was always the same event: the words task and money available (see Figure 2). This design allowed us to categorize the identical event—instruction cue 2—into four conditions as a function of the preceding instruction cue 1: (1) the rules and expected monetary reward were completely novel, (2) the expected monetary reward repeated, (3) the rules repeated, or (4) both rules and expected monetary reward repeated. Thus, of critical interest was neural activation during the second instruction cue and whether a difference could be observed depending upon how it was primed by the first instruction cue.

Figure 1. Illustration of the trial structure. Each trial began with a variable duration fixation cross.
An instruction cue then appeared and signaled the currently relevant task-rules and whether or not money was available to be won. This was followed by a variable duration delay period. On some trials, the delay period was immediately followed by presentation of a word or face stimulus and subjects would make a response. However, on some trials, a second instruction cue appeared before the stimulus. In this case, subjects were told to forget the first instruction cue and to follow the content of the second instruction cue. Following the stimulus, a reward screen appeared and informed subjects of their total monetary winnings and whether or not they had earned money on that trial.

Figure 2. Illustration of the four conditions of interest. Across the two instruction cues, we systematically varied whether there was repetition of rule and reward information. The blue vase symbolized that no monetary reward was available on that trial. The money bag or dollar bills were used to symbolize that money was available to be won on that trial. The open book signifies that the abstract/concrete rules are relevant, whereas the profile image of the faces signifies that the male/female rules are relevant on that trial. With respect to the first instruction cue, the second instruction cue signals: A. novel rules and novel monetary reward. B. novel rules and repeated monetary reward. C. repeated rules and novel monetary reward. D. repeated rules and repeated monetary reward.

Subjects performed 162 trials in total. There were 96 double-instruction cue trials, in which each of the four key conditions noted above appeared 24 times. There were 66 single-instruction cue trials: each of the four combinations of rules and expected monetary reward was presented 12 times; the other 18 single-cue trials were additional faces task and no monetary reward trials.
These additional trials ensured that 40% of the trials were single-cue trials, 25% were neutral (no monetary reward), and 25% were the faces task. These constraints served to minimize expectancy of the second instruction cue, maximize the impact of the monetary incentive, and to have enough faces task trials to minimize boredom. Trials were presented pseudorandomly such that double-cue trials never occurred more than twice in a row and no condition appeared more than twice in a row. Trials began with a jittered interstimulus interval (ISI) (mean = 4.9 s, range = 2–7.5 s, increments of 500 ms), followed by presentation of the first instruction cue (2 s). This was followed by a variable length delay (mean = 5 s; range = 4–6 s; increments of 1000 ms). Next, the word or face stimulus appeared (2 s), during which time subjects made their response. Finally, a reward screen (1.5 s) revealed to participants their total current winnings and also whether they earned money on that trial. On some trials, a second instruction cue (2 s) appeared, followed by a delay (4 s) prior to stimulus presentation. The delay length of 4, 5, or 6 s before the key event of interest (i.e., instruction cue 2) allowed us to effectively estimate the BOLD response separately for the first and second instruction cues and also provided a temporal resolution of 1000 ms with respect to sampling the hemodynamic response function. In contrast, the duration of the second delay period was held constant at 4 s because we were not concerned about overlap in the BOLD response for the second instruction cue and the subsequent stimulus presentation. Given that the second instruction cue was the same across the four conditions and was therefore followed by a stimulus with the same perceptual characteristics and response requirements, any overlap in the BOLD response between the cue and stimulus period would be constant across conditions and therefore have no relevant influence.
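As a rough arithmetic check, the trial counts described above can be tallied against the stated design proportions (a sketch; the per-condition breakdown follows the text):

```python
# Tally the trial design and check the stated proportions.
total = 162
double_cue = 4 * 24        # four key conditions x 24 repetitions = 96
single_cue = 4 * 12 + 18   # four cue combinations x 12, plus 18 extra
                           # faces/no-money trials = 66
assert double_cue + single_cue == total

# Double-cue trials always end on the words/money cue, so neutral
# (no-money) and faces-task trials come only from the single-cue set.
neutral_trials = 12 + (12 + 18)  # words/no-money + faces/no-money = 42
faces_trials = 12 + (12 + 18)    # faces/money + faces/no-money = 42

print(round(single_cue / total, 2))      # 0.41, i.e. roughly 40% single-cue
print(round(neutral_trials / total, 2))  # 0.26, roughly 25% neutral
print(round(faces_trials / total, 2))    # 0.26, roughly 25% faces task
```

The tallies land within a percentage point or two of the quoted 40/25/25 figures, which is as close as the integer trial counts allow.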
Given that nearly half of the trials were single-cue trials and subjects were explicitly instructed to avoid expecting a second cue, this ensured that participants paid attention to the first cue. Each task and expected reward (money versus no money) was represented with two different visual images. During repetition of the task-rules, expected reward, or both, the two distinct visual images were used so that there was never repetition of the visual features of the cue, but only its symbolic meaning. Given the demanding nature of the task, we included a rest period of 15 s (filled with a blank screen) in the middle of each session to provide subjects with a brief break. A blank screen also appeared for 10 s at the end of each session to allow the BOLD response to return to baseline.

Practice. One day prior to scanning, subjects came in for a one-hour training session. Subjects learned the correspondence between the instruction cue visual images and their meaning and then received 80 practice trials. This training ensured that during the scanning session, subjects were able to rapidly represent the information contained in the instruction cue.

Stimuli. The words were chosen from the Medical Research Council Psycholinguistic Database (http://www.psy.uwa.edu.au/mrcdatabase/uwa_mrc.htm). The words had a minimum of three letters and a maximum of eight letters, and a minimum written frequency of 30. Words selected for the "concrete" category (e.g., bag) had a concreteness rating above 600 and words selected for the "abstract" category (e.g., advice) had a concreteness rating below 300. The face stimuli were high resolution front-view photographs of neutral expression faces obtained from several image databases (Lundqvist, Flykt, & Ohman, 1998; Martinez & Benavente, 1998; Phillips, Wechsler, Huang, & Rauss, 1998). In total, 42 photographs (21 male, 21 female) were selected.
The faces were cropped to remove hair and other non-facial features, gray-scaled, equated in size, and then we added 10% Gaussian noise to increase the difficulty of the face discrimination. Stimuli subtended 4.5 (width) × 4.7 (height) degrees of visual angle.

2.3 fMRI data acquisition

fMRI data were collected using a 3.0-Tesla Philips Intera MRI scanner (Best, Netherlands) with a standard 8-element 6-channel phased array head coil with parallel imaging capability (SENSE; Pruessmann et al., 1999). Head movement was restricted using foam padding around the head. T2*-weighted functional images were acquired parallel to the anterior commissure/posterior commissure (AC/PC) line using a single shot gradient echo-planar sequence (repetition time, TR = 2 s; echo time, TE = 30 ms; flip angle, FA = 90°; field of view, FOV = 24 × 24 × 14.3 cm; matrix size = 80 × 80; SENSE factor = 1.0). Thirty-six interleaved axial slices covering the whole brain were acquired (3-mm thick with 1-mm skip). Data collected during the first 4 TRs were discarded to allow for equilibration effects. There were six sessions of approximately 9 minutes each, during which 1608 volumes were acquired in total. After functional imaging, in-plane inversion recovery prepared T1-weighted anatomical images were acquired in the same slice locations as the functional images using a fast spin-echo sequence (TR = 2 s; TE = 10 ms; 36 interleaved axial slices covering the whole brain, 3-mm thick with 1-mm skip; FA = 90°; FOV = 22.4 × 22.4 × 14.3 cm; matrix size = 240 × 235; reconstructed matrix size = 480 × 470; inversion delay = 800 ms; spin echo turbo factor = 5). This 2D in-plane structural image was used for normalization. For 10 subjects, we also collected a high-resolution 3D T1 anatomical volume (SPGR: TR = 2 s; TE = 3.53 ms; 175 interleaved axial slices covering the whole brain, 1-mm thick with 0-mm skip; FOV = 25.6 × 25.6 × 17.5 cm; matrix size = 256 × 250; 1 × 1 × 1 mm³ isotropic voxels).
2.4 fMRI data preprocessing

Image preprocessing and analysis were conducted with Statistical Parametric Mapping (SPM5, University College London, London, UK; http://www.fil.ion.ucl.ac.uk/spm/software/spm5). The time series data were slice-time corrected (to the middle slice), realigned to the first volume to correct for between-scan motion (using a 6-parameter rigid body transformation), and coregistered with the T1-weighted structural image. The in-plane T1 image was bias-corrected and segmented using template (ICBM) tissue probability maps for gray/white matter and CSF. Parameters obtained from this step were subsequently applied to the functional and structural data during normalization to MNI space. The data were spatially smoothed using an 8-mm full-width at half-maximum Gaussian kernel to reduce the impact of inter-subject variability in brain anatomy. Finally, a linear detrending procedure (Macey, Macey, Kumar, & Harper, 2004) was applied to remove time-series components that were correlated with global changes in the BOLD signal.

2.5 fMRI data analysis

2.5.1 First-level model

Data were analyzed at the first level with a general linear model. There were 19 key regressors that were convolved with a synthetic hemodynamic response function. Four regressors modeled as delta (stick) functions coded the information contained in the first instruction cue: (1) faces task, (2) words task, (3) monetary reward, (4) no monetary reward. Four regressors modeled as variable-duration (4–6 s) epochs coded the subsequent delay period following each of these events. Four regressors modeled as delta (stick) functions coded the combination of rule and reward information contained in the second instruction cue as a function of the preceding instruction cue: (1) novel rules and novel reward, (2) repeated rules and novel reward, (3) novel rules and repeated reward, and (4) repeated rules and repeated reward.
Four regressors modeled as 4 s fixed-duration epochs coded the subsequent delay period after these events. Additional regressors modeled as delta functions coded presentation of the stimulus and reward screen, and a regressor modeled as a variable-duration (10 or 15 s) epoch coded the rest period at the middle and end of each session. The model also included the six movement parameters estimated during realignment and regressors coding session effects. Serial autocorrelations were modeled using an AR(1) model and the data were high-pass filtered (1/128 Hz) to remove low frequency drift in the BOLD signal. Given that performance was at near ceiling levels, modeling correct and incorrect responses had a negligible effect, so they were left out in order to simplify the model. We created 6 contrast images to capture repetition suppression and repetition enhancement effects for rules, reward, and their combination: (1) novel rules and novel reward > repeated rules and novel reward (repetition suppression for rules), (2) novel rules and novel reward > novel rules and repeated reward (repetition suppression for reward), (3) novel rules and novel reward > repeated rules and repeated reward (repetition suppression for rules and reward), (4) repeated rules and novel reward > novel rules and novel reward (repetition enhancement for rules), (5) novel rules and repeated reward > novel rules and novel reward (repetition enhancement for reward), and (6) repeated rules and repeated reward > novel rules and novel reward (repetition enhancement for rules and reward).

2.5.2 Second-level random effects analysis

The contrasts created for each subject were subsequently submitted to one-sample t-tests at the group level.
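For concreteness, the six first-level contrasts just listed can be written as weight vectors over the four second-cue regressors, in the order given in 2.5.1 (a schematic sketch, not SPM syntax):

```python
# Contrast weight vectors over the four instruction-cue-2 regressors:
# [novel rules/novel reward, repeated rules/novel reward,
#  novel rules/repeated reward, repeated rules/repeated reward].
# Positive weights mark the condition predicted to be larger.
contrasts = {
    "suppression_rules":            [ 1, -1,  0,  0],  # contrast (1)
    "suppression_reward":           [ 1,  0, -1,  0],  # contrast (2)
    "suppression_rules_and_reward": [ 1,  0,  0, -1],  # contrast (3)
    "enhancement_rules":            [-1,  1,  0,  0],  # contrast (4)
    "enhancement_reward":           [-1,  0,  1,  0],  # contrast (5)
    "enhancement_rules_and_reward": [-1,  0,  0,  1],  # contrast (6)
}

# Every pairwise difference contrast sums to zero, as required.
assert all(sum(weights) == 0 for weights in contrasts.values())
```

Each enhancement contrast is simply the sign-flipped counterpart of the corresponding suppression contrast, so the six images test both directions of the same three comparisons.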
To create maps of significant effects, we used a cluster-forming threshold of Z > 2.3 uncorrected, and then, to correct for multiple comparisons, we report and display only those voxels surviving family-wise error (FWE) correction for cluster extent (p < .05) based on random field theory (Worsley, Evans, Marrett, & Neelin, 1992). Correction for multiple comparisons was calculated based on a small-volume correction for a priori anatomically defined regions of interest. To examine if any brain regions showed fMRI-adaptation selectively for rule and reward repetition, we used an exclusive masking analysis. We excluded voxels demonstrating fMRI-adaptation (repetition enhancement or suppression) for repetition of rules alone, or reward alone, at a very lenient threshold (p < .05 uncorrected), and then we looked for regions demonstrating fMRI-adaptation for repetition of rule and reward information. The lenient threshold for the voxels being masked out made this a very conservative analysis with respect to finding regions that exhibit adaptation selectively for rule and reward repetition.

2.5.3 ROI definition

To reduce the number of multiple comparisons, we adopted an anatomical ROI approach based on strong hypotheses regarding the likely regions supporting rule and reward processing. ROIs were generated from the AAL atlas provided by the WFU PickAtlas in SPM5. They included the LPFC (MFG and IFG—pars opercularis, triangularis, and orbitalis combined), posterior parietal cortex (superior and inferior parietal cortex and angular gyrus combined), SMA/pre-SMA and cingulate (supplementary motor area, anterior, mid, and posterior cingulate combined), mid-orbitofrontal cortex/ventromedial prefrontal cortex, insula, and striatum (putamen and caudate combined; this ROI was used to search for activation in the NAcc, given that it is located at the junction of the putamen and caudate).
We also included a more specific ROI for the RLPFC (12-mm sphere centered on x,y,z = -32, 56, 6 and combined with the mirror image in the right hemisphere) based on Sakai and Passingham (2006), and an ROI for the left pMTG (12-mm sphere centered on x,y,z = -62, -46, -10) based on Donohue et al. (2005).

2.5.4 Time-course visualization

To visualize the time-course of regions ostensibly showing an integration effect, we used the MarsBaR toolbox in SPM5 (Brett, Anton, Valabregue, & Poline, 2002) (http://marsbar.sourceforge.net/) to extract average signal change values from 6-mm radius spheres centered on peak voxels. We used 12 finite impulse response (FIR) functions, one for each peristimulus time point within a trial window of 24 s following onset of the second instruction cue. For the anterior IFS, the time-course was extracted from the peak in the upper lateral bank (x,y,z = 48, 39, 12) to avoid inclusion of white matter signal and therefore noise in the ROI time-course.

2.5.5 Functional connectivity and correlation changes

To examine functional connectivity, for each subject we extracted the IFS time-course from a 6-mm sphere centered on the coordinates (x,y,z = 48, 39, 12). The time-course was scaled by the mean global brain signal at each time point to minimize the effect of global drift, and then converted to percent signal change values by subtracting and dividing by the mean value for the ROI for the appropriate session. The data were also high-pass filtered (1/128 Hz). The normalized time-course for each subject was then used as a regressor in a standard first-level GLM analysis that also included the 6 motion parameters obtained from realignment as covariates of no interest. We created contrast images for each subject assessing positive
These contrast images were then brought to a secondlevel random effects analysis and entered into a one-sample t-test to identify voxels across the brain showing a correlation with IFS that differed significantly from zero. To examine correlation changes, we extracted the time-course from 6-mm radius spheres centered on peak voxels within our a priori ROIs that showed fMRI-adaptation for rule or reward information. These ROIs included the left RLPFC (x,y,z = -36, 51, -3), left IFG (-51, 42, 9), left MFG (-42, 39, 33), left anterior and middle IPS (-42, -42, 51; -27, -57, 42), pre-SMA (=3, 6, 54), left pMTG (-54, -54, 6), right ventral insula (33, 18, -6), right NAcc (9, 15, -9), and rACC (-3, 36, 15). Although the fMRI-adaptation effect did not survive correction for multiple comparisons in the PCC (6, -27, 33), this region was included as an additional ROI given the robust functional connectivity with the IFS across the entire time-course. The time-courses from the ROIs were scaled by the mean global brain signal at each time point, converted to percent signal change values, and high-pass filtered (1/128 Hz). To compute the correlations during the second instruction cue period, we sampled activity at TRs 2 and 3 (corresponding to 4–8 s postcue onset) to account for the hemodynamic lag. For each participant we calculated the Pearson correlation between the IFS and each of the ROIs separately for the novel rules and reward condition and the repeated rules and reward condition. The correlations were converted to zvalues using Fisher‘s r to z transformation in order to normalize the distribution; this allowed us to use paired-samples t-tests to compare the correlation strength for novel as compared to repeated rule and reward information. Given strong a priori hypotheses, correlation changes were evaluated using an alpha level of α < .05. Correlation change values exceeding two standard deviations from the mean were removed from the analysis. 
There was never more than one value removed for a given correlation analysis.

3 RESULTS

3.1 Behavioural data: subjects are motivated to earn money

For the double-cue trials, there was no difference in average reaction time [F(3, 42) = 1.232, p = .31] or accuracy [F(3, 42) = 1.196, p = .32] according to whether the second instruction cue signaled novel or repeated rules and reward (see Figure 3A). The absence of a behavioural priming effect was expected due to the delay period interposed between presentation of the instruction cues and the stimuli, and the fact that repetition occurred at the level of instruction cues rather than the stimuli themselves, as in most priming studies. There was, however, an incentive effect suggesting that subjects were motivated to win money (see Figure 3B): on single-cue trials, subjects were faster to respond when money was available (money: M = 857.70 ms, SD = 58.77) as compared to when no money was available (no money: M = 898.83 ms, SD = 92.54) [t(14) = 2.82, p = .014, two-tailed]. There was no difference in accuracy (money: M = 98.13%, SD = 3.66% vs. no money: M = 96.87%, SD = 2.95%) (p > .3). Moreover, self-report data obtained after scanning revealed that most participants were highly motivated to win money, reporting an average rating of 5.46 on a 7-point scale (1 = not motivated, 7 = highly motivated to earn money). This average rating differed significantly from the scale midpoint of 3.5, which would represent neutral motivation [t(14) = 5.02, p < .001].

Figure 3. Accuracy and reaction time (RT) data. A. Behavioural data for double-cue trials as a function of whether the second instruction cue signaled novel or repeated rule and reward information. No difference across conditions was observed in RT or accuracy. B. RT was faster on trials in which there was the opportunity to win money (reward versus no reward trials) and there was no trade-off in accuracy.
3.2 fMRI data

3.2.1 Validation of the fMRI-adaptation paradigm

Before addressing the question of integration, we wanted to establish the validity of our fMRI-adaptation paradigm. Accordingly, we first examined whether we could identify canonical regions of the rule network when rule information alone was repeated and canonical regions of the reward network when reward information alone was repeated. This is indeed what we found.

3.2.2 fMRI-adaptation for repeated reward information

Several key areas of the reward network exhibited repetition suppression (i.e., a reduced neural response) when the expected monetary reward was repeated as compared to when it was novel (Figure 4; Table 1; Z > 2.3, p < .05 cluster-corrected). These regions included the rACC (x,y,z = -3, 36, 15), the right ventral and dorsal anterior insula (x,y,z = 33, 18, -6; x,y,z = 36, 21, 6) extending ventrally into the caudal OFC (x,y,z = 27, 21, -18), and the right and left NAcc (x,y,z = 9, 15, -9; x,y,z = -15, 15, -9). The pre-SMA (x,y,z = 0, 12, 60) and pMTG (x,y,z = -54, -51, -9) also showed this pattern. Repetition suppression was also observed in the PCC (x,y,z = 6, -27, 33) and bilateral VMPFC (x,y,z = 12, 45, -12; x,y,z = -9, 54, -15), but did not survive cluster size correction for multiple comparisons. No regions showed repetition enhancement for reward information. These identified areas are highly consistent with prior work examining reward processing.

Figure 4. Regions exhibiting fMRI-adaptation for repeated reward information. Regions showing repetition suppression (i.e., a smaller neural response for repeated relative to novel reward information) included the rostral anterior cingulate cortex (rACC), pre-supplementary motor area (pre-SMA), right nucleus accumbens (NAcc), and right insula. The colour scale denotes t-values, and the numerical values above the images correspond to MNI coordinates. Activated regions are significant at Z > 2.3, p < .05 FWE cluster corrected.

Table 1.
Regions exhibiting fMRI-adaptation for repeated reward information.

Novel reward > repeated reward (repetition suppression for reward)

Region             Hemisphere   BA       x    y    z   Z-score   Voxels
rACC               Medial       24/32   -3   36   15   4.07      229
Pre-SMA            Medial       6        0   12   60   3.82      173
Insula (ventral)   Right        13      33   18   -6   3.44       49
Insula (dorsal)    Right                33   21    6   3.05
Caudal OFC         Right                27   21  -18   3.39       53
NAcc               Right                 9   15   -9   3.99       20
pMTG               Left         21/37  -54  -51   -9   3.09       38*
NAcc               Left                -15   15   -9   3.10       32*
Insula (dorsal)    Left                -33   15    9   3.04       21*
VMPFC              Left         14      -9   54  -15   3.19       19*
VMPFC              Right        14      12   45  -12   2.88       34*
PCC                Medial       23       6  -27   33   3.15

Number of voxels are reported within a priori anatomical ROIs. Reported regions are significant at Z > 2.3, p < .05 FWE cluster corrected. * Regions exhibited a trend towards significance (p < .12 FWE cluster corrected). BA = Brodmann area. rACC = rostral anterior cingulate cortex; pre-SMA = pre-supplementary motor cortex; OFC = orbitofrontal cortex; NAcc = nucleus accumbens; pMTG = posterior middle temporal gyrus; VMPFC = ventromedial prefrontal cortex; PCC = posterior cingulate cortex.

3.2.3 fMRI-adaptation for repeated rule information

Several key areas of the rule network exhibited repetition suppression for repeated as compared to novel rule information (Figure 5; Table 2; Z > 2.3, p < .05 cluster-corrected). These areas included the left IPS (anterior: x,y,z = -42, -42, 51; middle: x,y,z = -27, -57, 42), pre-SMA/SMA (x,y,z = -3, 6, 54), mid-cingulate (x,y,z = 0, 18, 45), left pMTG (x,y,z = -54, -54, -6) extending into the posterior superior temporal sulcus (pSTS; x,y,z = -54, -36, 6), and NAcc (x,y,z = 9, 9, -12). Moreover, there was robust repetition suppression in the LPFC—specifically the left IFG (x,y,z = -51, 42, 9) and MFG (x,y,z = -42, 39, 33)—at a reduced height threshold (Z > 1.65, p < .05 cluster corrected).
The activations on the lateral surface were left-lateralized, consistent with prior work (Bunge, Kahn, Wallis, Miller, & Wagner, 2003). Additionally, we found increased activation (i.e., repetition enhancement) for repeated task-rules in a small number of regions, most notably, the left ventral RLPFC (x,y,z = -33, 51, -3). These identified regions are highly consistent with prior work examining rule processing.

Figure 5. Regions exhibiting fMRI-adaptation for repeated rule information. Regions showing repetition suppression (i.e., a smaller neural response) for repeated relative to novel rule information included the left middle and inferior frontal gyri (MFG and IFG), the left posterior middle temporal gyrus (pMTG), the supplementary motor area/pre-supplementary motor area (SMA/pre-SMA) extending into the mid-cingulate, and the left anterior and middle intraparietal sulcus (aIPS and mid-IPS). The left rostrolateral prefrontal cortex exhibited repetition enhancement (i.e., a greater neural response for repeated rule information). The colour scale denotes t-values, and the numerical values above the images correspond to MNI coordinates. Activated regions are significant at Z > 2.3, p < .05 FWE cluster corrected.

Table 2. Regions exhibiting fMRI-adaptation for repeated rule information.

Novel rules > repeated rules (repetition suppression for rules)

Region                Hemisphere   BA       x    y    z   Z-score   Voxels
MFG                   Left         46     -42   39   33   3.32      341
IFG                   Left         45     -51   42    9   3.10
SMA/pre-SMA           Medial       6       -3    6   54   3.41      221
Mid-cingulate gyrus   Medial       32       0   18   45   3.37
Mid-IPS               Left         40/7   -27  -57   42   3.51       91
aIPS                  Left         40     -42  -42   51   3.17       92
NAcc                  Right                 9    9  -12   3.99      127
pMTG                  Left         22/37  -54  -54   -6   3.81       56

Repeated rules > novel rules (repetition enhancement for rules)

Region                Hemisphere   BA       x    y    z   Z-score   Voxels
RLPFC                 Left         10     -36   51   -3   3.40       26
MFG                   Left         9      -24   27   33   3.98      109
MFG/PM cortex         Left         8      -36   18   48   3.73

Number of voxels are reported within a priori anatomical ROIs.
Reported regions are significant at Z > 2.3, p < .05 FWE cluster corrected. BA = Brodmann area. MFG = middle frontal gyrus; IFG = inferior frontal gyrus; SMA/pre-SMA = supplementary motor cortex/pre-supplementary motor cortex; Mid-IPS = middle intraparietal sulcus; aIPS = anterior intraparietal sulcus; pMTG = posterior middle temporal gyrus; RLPFC = rostrolateral prefrontal cortex; PM cortex = premotor cortex.

3.2.4 fMRI-adaptation evidence that the LPFC integrates rules and rewards

Having established that our fMRI-adaptation paradigm can effectively identify regions that are sensitive to rule or reward information, we next turned to our central question of whether the LPFC would demonstrate fMRI-adaptation uniquely for repetition of rule and reward information, and thereby demonstrate evidence of integration. To examine this, we used an exclusive masking analysis to exclude voxels showing fMRI-adaptation for repetition of rules alone or reward alone at a liberal threshold (p < .05, uncorrected), making this a very conservative analysis. We then looked for regions demonstrating fMRI-adaptation when both rule and reward information repeated. If this information is being integrated, it would ostensibly require the construction of a new representation, and therefore, we expected to observe fMRI-adaptation in the form of repetition enhancement. Consistent with an integrative role, a large area of the LPFC running along the anterior IFS (lower bank: x,y,z = 36, 39, 3; upper bank: x,y,z = 48, 39, 12) demonstrated repetition enhancement that was selective to repetition of rule and reward information (Figure 6; Table 3; Z > 2.3, p < .05 cluster-corrected). The time-course extracted from this region (Figure 6B) demonstrated that activation increased when rule and reward information was repeated as compared to when it was novel, but there was no increase in activation when either the rules alone repeated, or reward alone repeated.
This provides strong evidence that the right anterior IFS supports an integrated representation of task-rules and expected rewards. A similar repetition enhancement effect was observed in the right posterior IFS/IFG (x,y,z = 42, 6, 36) and along the right IPS (x,y,z = 33, -60, 57), suggesting that a right-lateralized frontoparietal network may integrate rules and rewards (Figure 6A, Table 3).

Figure 6. Regions exhibiting fMRI-adaptation for repeated rule and reward information. A. Lateral view shows that the right inferior frontal sulcus (IFS) and the right intraparietal sulcus (IPS) exhibited a repetition enhancement effect that was selective to repeated rule and reward information. B. IFS activation time-course time-locked to the onset of the second instruction cue. Activation increases when both rules and reward repeated as compared to when they were novel, and there is no increase when either rules alone or reward alone repeated. C. Axial slices showing the repetition enhancement effect in the IFS. The colour scale denotes t-values, and the numerical values above the images correspond to MNI coordinates. Activated regions are significant at Z > 2.3, p < .05 FWE cluster corrected.

Table 3. Regions exhibiting fMRI-adaptation for repeated rule and reward information.

Repeated rules and reward > novel rules and reward (repetition enhancement for rules and reward)

Region              Hemisphere   BA       x    y    z   Z-score   Voxels
IFS (lower bank)    Right        45      36   39    3   4.18      151
IFS (upper bank)    Right        45/46   48   39   12   3.87
Posterior IFS/IFG   Right        9/44    42    6   36   3.79      104
IPS                 Right        7       33  -60   57   3.58      234

Number of voxels are reported within a priori anatomical ROIs. Reported regions are significant at Z > 2.3, p < .05 FWE cluster corrected. BA = Brodmann area. IFS = inferior frontal sulcus; IFG = inferior frontal gyrus; IPS = intraparietal sulcus.
Importantly, the repetition enhancement effect cannot be explained by extraneous variables such as difficulty, effort, attention, or task-switching, because it should be easier to represent repeated rule and reward information. Rather, the repetition enhancement effect is consistent with the idea that it takes extra time and processing resources to consolidate a novel high-level integrative representation in mind. Additionally, an important feature of our experimental design was that we held constant the content of the second instruction cue, allowing us to compare neural activation to the identical event, simply as a function of how it was primed by the first instruction cue (the content of which we systematically varied). This ensures that our findings cannot be explained by differences in visual processing or interpretation of the second instruction cue, or by differences relating to expectation of the ensuing stimulus. Furthermore, given that there was never repetition of visual information across the first and second instruction cues (see Methods for details), we can be sure that fMRI-adaptation was related to repetition of the conceptual representation of the task-rules and expected reward, and not to repetition of the visual symbols used to signal this information.

Within our a priori ROIs, only the left IFG exhibited a repetition suppression effect selective to repeated rule and reward information (BA 45; x,y,z = -54, 27, 9). However, close inspection of the time-course did not support this conclusion. Unlike the GLM model implemented in SPM, the finite impulse response (FIR) analysis used to visualize activation time-courses makes no assumptions about the shape of the hemodynamic response function; the FIR analysis can therefore reveal a different picture than the canonical hemodynamic response function, which may not perfectly capture differences in response profile across regions.
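The distinction between the canonical-HRF model and the FIR model can be made concrete with a minimal FIR design matrix: one regressor per post-onset time bin, so the response shape is estimated freely from the data rather than assumed. This is a hedged sketch; the function name, TR, and bin count are illustrative and not the parameters of the SPM analysis used here.

```python
import numpy as np

def fir_design(onsets, n_scans, tr=2.0, n_bins=12):
    """Finite impulse response design matrix for one condition.

    Unlike a canonical-HRF model, which convolves events with a fixed
    assumed response shape, each FIR column estimates the mean BOLD
    amplitude at one latency (bin * TR seconds) after event onset,
    so the time-course shape is recovered directly from the data.
    """
    X = np.zeros((n_scans, n_bins))
    for onset in onsets:
        start = int(round(onset / tr))         # onset in scan units
        for b in range(n_bins):
            if start + b < n_scans:
                X[start + b, b] = 1.0          # indicator for latency bin b
    return X

# Two events of one condition in a 40-scan run (TR = 2 s)
X = fir_design(onsets=[10.0, 50.0], n_scans=40)
print(X.shape)        # (40, 12)
print(X[:, 0].sum())  # 2.0: each event contributes once to the first bin
```

Fitting such a matrix by ordinary least squares yields one beta per latency bin, which is exactly the per-time-point response plotted in the time-course figures.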
The FIR time-course analysis for the left IFG showed that this region did not exhibit a smaller neural response selectively for repeated rules and reward (see Figure 7). Given that this provides evidence that this region does not integrate rules and rewards, this region is not discussed further.

Figure 7. Left IFG time-course time-locked to the presentation of the second instruction cue. The y-axis represents percent BOLD signal change and the x-axis denotes time in seconds following presentation of the second instruction cue; separate lines plot the novel rules and reward, repeated reward, repeated rules, and repeated rules and reward conditions. The time-course shows that the left IFG did not exhibit a smaller neural response selective to the repeated rule and reward condition. In fact, mean percent signal change values were smallest for the reward-only repetition condition. Thus, in contrast to the GLM analysis, this suggests that this region does not play an integrative role.

3.2.5 The IFS interacts with the rule and reward networks

We found that the integrative role supported by the anterior IFS did not occur in isolation, but rather occurred within the context of functional interactions with both the rule and reward networks. We extracted the mean activation values from a 6-mm sphere centered on the peak anterior IFS coordinates (x,y,z = 48, 39, 12) and converted these values to percent signal change (see Methods for details). We then examined the correlation (or 'functional connectivity') between activity in the IFS and every other voxel in the brain with a GLM regression analysis (see Methods for details).
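As a rough illustration of this kind of seed-based analysis, the sketch below regresses each voxel's time-series on a seed time-series using plain numpy. The function name and synthetic data are our assumptions; the actual analysis was run in SPM and would additionally include nuisance regressors (motion, drift) in the model.

```python
import numpy as np

def seed_connectivity(voxel_ts, seed_ts):
    """Seed-based functional connectivity as a voxelwise regression.

    voxel_ts: (n_timepoints, n_voxels) BOLD time-series.
    seed_ts:  (n_timepoints,) mean time-series of the seed sphere
              (here this would correspond to the 6-mm IFS sphere).
    Returns the regression slope of each voxel on the standardized seed.
    """
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()   # z-score the seed
    vox = voxel_ts - voxel_ts.mean(axis=0)              # demean each voxel
    # Beta for a single regressor: cov(seed, voxel) / var(seed)
    return seed @ vox / (seed @ seed)

rng = np.random.default_rng(0)
seed = rng.standard_normal(200)
# Voxel 0 tracks the seed; voxel 1 is independent noise
voxels = np.column_stack([seed * 0.8 + rng.standard_normal(200) * 0.1,
                          rng.standard_normal(200)])
betas = seed_connectivity(voxels, seed)
print(betas.round(2))  # betas[0] near 0.8, betas[1] near 0
```

Thresholding the resulting voxelwise betas (after converting to z-statistics) yields the connectivity maps reported below.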
Across the entire experimental time-course, we found that neural activation in the IFS was significantly correlated with activation in bilateral frontoparietal and lateral temporal cortices, as well as the pre-SMA/mid-cingulate (Figure 8; Table 4; Z > 2.3, p < .05 cluster-corrected). Many of these activations overlapped with, or were adjacent to, the activation peaks showing fMRI-adaptation for repetition of task-rules. Additionally, neural activation in the IFS was significantly correlated with activity in the rACC extending into the VMPFC, the PCC, the OFC, the right ventral anterior insula, and the bilateral NAcc. Many of these activations overlapped with, or were adjacent to, the peaks showing fMRI-adaptation for repetition of the monetary reward. Thus, signal in the right IFS continuously fluctuated in concert with both the rule network and the reward network, consistent with the idea that it may function as a hub where rule and reward information can be linked together to produce an integrated representation.

Figure 8. Regions exhibiting functional connectivity with the IFS. Across the entire experimental time-course, the inferior frontal sulcus (IFS) exhibited significantly correlated activity with regions of both the rule network (blue box) and the reward network (red box). Correlated rule network regions included the bilateral intraparietal sulcus (IPS), bilateral posterior middle temporal gyrus (pMTG), pre-supplementary motor area (pre-SMA), and several areas of the bilateral lateral prefrontal cortex (LPFC). Correlated reward network regions included the posterior cingulate cortex (PCC), rostral anterior cingulate cortex extending into the ventromedial prefrontal cortex (rACC/VMPFC), right orbitofrontal cortex, right insula, and bilateral nucleus accumbens (NAcc). The colour scale denotes t-values, and the numerical values above the images correspond to MNI coordinates. Activated regions are significant at Z > 2.3, p < .05 FWE cluster corrected.

Table 4.
Regions exhibiting significant functional connectivity with the IFS.

Region                  Hemisphere  BA      MNI coordinates (x, y, z)  Z-score  Voxels
IFS                     Right       46/45   45, 39, 15                 ---      2027
pIFG/ventral premotor   Right       44/6    57, 15, 27                 5.75
MFG                     Right       46      45, 33, 36                 5.46
OFC                     Right       11/13   39, 39, -12                4.70
IFS/IFG                 Left        10/45   -51, 45, 6                 4.95     258
MFG                     Left        9/46    -48, 33, 36                4.45
Insula (ventral)        Right               39, 21, -6                 4.05
Insula (dorsal)         Right               36, 30, 6                  3.15
Caudate/NAcc            Left                -12, 12, 9                 4.98
NAcc                    Right               15, 15, 0                  4.07
RLPFC                   Left        10      -39, 48, 3                 2.98
pMTG                    Left        37      -63, -54, -9               4.34     784
Pre-SMA/mid-cingulate   Medial      6/8/32  9, 24, 39                  5.43     386
rACC                    Medial      24      6, 33, 15                  4.45     465
rACC/VMPFC              Medial      32      9, 45, 3                   4.08     86
PCC                     Medial      23/31   3, -33, 30                 4.96     118
aIPS                    Right       7/40    42, -42, 51                5.41     177
vIPS                    Right       7/19    36, -75, 39                5.26     33
pIPS                    Left        7       -27, -66, 45               3.65     68
IPL                     Left        7/40    -51, -39, 45               3.51     413

The number of voxels is reported within a priori anatomical ROIs. Reported regions are significant at Z > 2.3, p < .05 FWE cluster corrected. BA = Brodmann area. IFS = inferior frontal sulcus; pIFG = posterior inferior frontal gyrus; MFG = middle frontal gyrus; pre-SMA = pre-supplementary motor cortex; vIPS = ventral intraparietal sulcus; aIPS = anterior intraparietal sulcus; pIPS = posterior intraparietal sulcus; IPL = inferior parietal lobule; pMTG = posterior middle temporal gyrus; RLPFC = rostrolateral prefrontal cortex; rACC = rostral anterior cingulate cortex; OFC = orbitofrontal cortex; NAcc = nucleus accumbens; VMPFC = ventromedial prefrontal cortex; PCC = posterior cingulate cortex.

3.2.6 The IFS exhibits dynamic correlation changes

Finally, we examined the strength of correlated activity between the IFS and the rule and reward networks specifically during the second instruction cue period, when subjects were presented with novel or repeated rule and reward information (see Methods for details).
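This analysis rests on computing a per-subject correlation between the seed and each target region in each condition, Fisher r-to-z transforming the correlations so they are approximately normal, and comparing conditions with a paired t-test. Here is a minimal sketch, assuming hypothetical per-subject correlation values; the function name and synthetic data are ours, and scipy's paired t-test stands in for the actual analysis pipeline.

```python
import numpy as np
from scipy import stats

def compare_connectivity(r_cond_a, r_cond_b):
    """Test whether seed-target connectivity differs between two conditions.

    r_cond_a, r_cond_b: per-subject Pearson correlations between the seed
    (e.g., the IFS) and a target region, computed separately for each
    condition. Correlations are Fisher r-to-z transformed (arctanh) to
    normalize their distribution, then compared with a paired t-test.
    """
    z_a = np.arctanh(np.asarray(r_cond_a))
    z_b = np.arctanh(np.asarray(r_cond_b))
    return stats.ttest_rel(z_b, z_a)  # (t statistic, two-tailed p)

# Hypothetical correlations for 15 subjects: repeated condition shifted up
rng = np.random.default_rng(1)
r_novel = np.clip(rng.normal(0.20, 0.15, 15), -0.9, 0.9)
r_repeated = np.clip(r_novel + 0.15, -0.9, 0.9)

t, p = compare_connectivity(r_novel, r_repeated)
print(p < .05)  # True: a consistent within-subject shift is detected
```

With n = 15 subjects this yields df = 14, matching the t(14) statistics reported for the condition comparisons below.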
We found that the correlation strength between the IFS and several rule and reward regions changed depending upon the novelty of the rule and reward information signaled by the second instruction cue (see Figure 9). We found a significant correlation change between the IFS and rule network regions, including the left IFG [t(14) = 2.59, p = .021, two-tailed] and, at a trend level, the mid-IPS [t(14) = 2.04, p = .061, two-tailed], and also found a correlation change between the IFS and reward network regions, including the insula [t(13) = 2.71, p = .018, two-tailed], the NAcc [t(14) = 2.21, p = .044, two-tailed], and, at a trend level, the PCC [t(14) = 1.94, p = .073, two-tailed]. Thus, the IFS dynamically interacted with the rule and reward networks according to the novelty of rule and reward information during the instruction cue period.

To examine the specificity of the observed correlation changes involving the IFS, we also examined whether the left RLPFC region showing fMRI-adaptation for repetition of rule information changed correlation strength with any other regions. The RLPFC exhibited a correlation change with rule network regions, specifically the pre-SMA/mid-cingulate [t(14) = 2.18, p = .047, two-tailed] and, at a trend level, the left IFG [t(14) = 1.93, p = .074, two-tailed], but did not change correlation with any reward network regions (all ps > .20). This finding indicates that correlation changes did not occur globally across the brain due to a non-specific factor, but rather are consistent with an integrative role for the IFS.

Figure 9. Interregional correlations between the IFS and the rule and reward networks.
The histograms represent normalized correlation coefficients (resulting from Fisher's r-to-z transformation) between the IFS and several rule and reward areas (IFG, mid-IPS, insula, NAcc, PCC), calculated during the second instruction cue period. The correlations are plotted separately for the novel rules and reward condition and the repeated rules and reward condition. Error bars represent one within-subject standard error of the mean (SEM).

4 DISCUSSION

Using a novel fMRI-adaptation paradigm and convergent functional connectivity analyses, we provide direct evidence suggesting that the lateral prefrontal cortex, in particular the right IFS, supports an integrated representation of task-rules and expected rewards. The IFS exhibited repetition enhancement uniquely when rule and reward information repeated, but not when either factor repeated in isolation. The repetition enhancement effect is consistent with the idea that extended time and neural resources are needed to establish and consolidate a novel high-level integrative representation in mind. This interpretation is supported by prior work reporting repetition enhancement in situations requiring the construction of novel perceptual representations (Henson, et al., 2000) or stimulus-response associations (Salimpoor, et al., 2010). Additionally, the IFS exhibited significant functional connectivity with numerous rule-related and reward-related regions across the entire time-course, and the strength of the correlation with a subset of these regions changed as a function of the novelty (or salience) of rule and reward information during the instruction cue period. Thus, the integrative role of the IFS may occur within the context of dynamic interactions with the rule and reward networks.

4.1 A new perspective on the role of the LPFC

The LPFC has long been considered the central component in the neural circuitry supporting high-level cognition and self-control.
Although its precise contribution has been a matter of debate, the most prevalent perspective is that the LPFC supports rule-based action selection (Bunge, 2004; Duncan, 2010; Koechlin & Summerfield, 2007; Miller & Cohen, 2001). An overwhelming amount of data supports this position (Badre & D'Esposito, 2007; Bunge, et al., 2003; Christoff, et al., 2001; Donohue, Wendelken, & Bunge, 2008; Dosenbach, et al., 2006; Hampshire, et al., 2011; Koechlin, et al., 2003; Sakai & Passingham, 2006; Wallis, Anderson, et al., 2001; Woolgar, Thompson, Bor, & Duncan, 2011). However, the widespread assumption that rule processing supported by the LPFC occurs independently of motivational processing (Botvinick, et al., 2001; Frank & Badre, in press; Hazy, et al., 2007; Kouneiher, et al., 2009; Ridderinkhof, et al., 2004) is inconsistent with our findings. It is important to note that we are not questioning the proposed role ascribed to the medial PFC and basal ganglia, or their hypothesized influence on the LPFC, as noted in these influential theories. Rather, our findings specifically address the function of the LPFC and suggest that it plays an integrative role, combining information about rules and motivational incentives. Thus, rather than conflicting with these theories, it is more appropriate to view our findings and the possibility of integration in the LPFC as an extension to these theories. The integration perspective is in agreement with prior suggestions (Gray, et al., 2002; Pessoa, 2008; Watanabe, 2007) and is bolstered by a wealth of research showing that the LPFC directly represents motivational information and may play an important role in reward-based decision making.
Studies consistently find that LPFC activity increases when subjects expect or obtain rewards (Hikosaka & Watanabe, 2000; Histed, et al., 2009; Li, et al., 2011; Matsumoto, et al., 2003; McClure, et al., 2004; Seo, et al., 2007; Tanaka, et al., 2004; Wallis & Miller, 2003; Watanabe, 1996), and electrophysiological work has convincingly shown that action and reward information converges at the level of single LPFC neurons (Barraclough, et al., 2004; Histed, et al., 2009; Seo, et al., 2007; Wallis & Miller, 2003). Similar findings have been reported for the ACC (Matsumoto, et al., 2003), a region with a well-accepted motivational role. Neuroimaging studies of reward-based decision making have additionally found that LPFC activity correlates with the subjective value assigned to decision options (Plassmann, et al., 2010; Simon & Daw, 2011; Tobler, et al., 2009) as well as other important decision variables (Huettel, et al., 2005; McGuire & Botvinick, 2010). Convergent evidence that the LPFC plays a role in motivational processing comes from lesion and TMS studies showing that the LPFC is necessary for the normal expression of mood and motivation (Gillihan, et al., 2011; Levy & Dubois, 2006; Paradiso, et al., 1999; Fuster, 2008; Mesulam, 1986), the ability to discern the reward-value of stimuli when multiple attributes (e.g., reward magnitude and delay) need to be integrated (Simmons, et al., 2010), the ability to use a previously learned behavioural strategy to obtain rewards (Wallis, Dias, et al., 2001), and the tendency to select choices that promote long-term as opposed to immediate financial benefits (Figner, et al., 2010; Knoch, et al., 2006). Finally, our findings are especially consistent with recent work by Braver and colleagues (Beck, et al., 2010; Jimura, et al., 2010; Locke & Braver, 2008; Savine & Braver, 2010) that has provided suggestive evidence that LPFC activation reflects an interaction between rule-based action selection and reward incentives.
Together with this past work, our findings suggest that the LPFC plays a motivational role, and specifically, supports an integrated representation of rule and reward information. This idea is highly congruent with anatomical data showing rich connections between the LPFC and areas of both the rule and reward networks (Barbas & Mesulam, 1985; Morris, et al., 1999; Petrides & Pandya, 1999, 2002, 2006; Yeterian, et al., 2011).

4.2 LPFC and model-based decision making

Theoretical accounts and accumulating empirical evidence suggest that the LPFC is important for model-based decision making, i.e., when an explicit model or belief about the environment is used to estimate the value of actions (Daw, Niv, & Dayan, 2005; Glascher, Daw, Dayan, & O'Doherty, 2010; Li, et al., 2011; Samejima & Doya, 2007). This contrasts with simpler trial-and-error learning, in which the value of actions is acquired and adjusted incrementally based on direct feedback from the environment. Model-based decision making is ostensibly crucial in novel or rapidly changing environments in which optimal decisions cannot be made based on past experience. Rather, in such situations, optimal decisions can only be made by using explicit knowledge about the current state of the world to formulate conceptual associations between different actions and their estimated reward value. The long-accepted role of the LPFC in working memory, or the capacity to hold and manipulate information in mind in the absence of input from the environment, could provide the basis for its ability to rapidly construct 'online' associations between actions (or rules) and their predicted reward value. Our task, like many real-life situations, did not provide subjects with a chance to learn by trial-and-error which rule to implement.
Rather, subjects had to use symbolic information contained in the instruction cues to rapidly form causal associations between different rules and their expected reward value in order to guide behaviour effectively. Our results demonstrate that the LPFC is capable of rapidly forming integrated representations of rules and rewards on each trial in the absence of feedback from the environment, and are therefore consistent with the idea that the LPFC implements model-based decision making. Our data build on work by Frank and colleagues (Frank & Badre, in press; Hazy, et al., 2007) that has outlined how basal ganglia-LPFC interactions could allow trial-and-error learning about rewards to engage high-level action-based control by the LPFC. According to this model, discrepancies between experienced and predicted outcomes are signalled by dopaminergic firing in the basal ganglia. These prediction errors are used to learn the expected value of specific rule-related contextual information used by the LPFC to guide action selection. This theoretical model can explain behaviour in a range of situations, but the reliance on experience-driven learning means that it may not be ideal for rapid and flexible decision making. In contrast, our findings and the notion of model-based decision making by the LPFC may offer insight into how we make effective decisions in novel and changing environments. It is noteworthy that our findings provide a natural extension to recent work examining the role of the OFC and ACC in value-based decision making. These areas appear specialized for integrating and establishing causal associations between the sensory properties of stimuli and rewards, or actions and rewards, respectively (Matsumoto, et al., 2003; Rudebeck, Behrens, et al., 2008; Rushworth, et al., 2007; Schoenbaum & Esber, 2010; Wallis, 2007).
This provides a mechanism by which the value of different options can be estimated and compared during simple decision making (e.g., deciding between eating an apple or an orange). Rather than relying on trial-and-error associative learning, these areas appear to operate in a model-based manner similar to the LPFC. They integrate information regarding past learning and the current state of the world on the fly within working memory, and are thus capable of rapidly estimating the reward-value of actions and stimuli (Hampton, Bossaerts, & O'Doherty, 2006; Schoenbaum & Esber, 2010; Wallis, 2007). Thus, similar principles may operate with respect to estimating and comparing the value of stimuli and actions on the one hand, and of more complex representations such as rules on the other.

4.3 Implications for understanding higher "cognitive" processes

The results of the current study have several implications for our understanding of the neurocognitive basis of higher cognitive processes. First, our findings provide new insight into complex decision making. Common "complex decision making" situations often require that we select one particular rule to follow out of multiple potential rules, or require that we select an action based on a rule rather than a reflexive habit that may conflict with long-term goals (e.g., foregoing dessert to maintain a New Year's resolution to eat healthier). Arguably, such decisions first require discerning the reward-value of different rules. Our findings suggest that although rule and reward information may initially be represented in segregated neural systems, this information converges at the level of the LPFC. Thus, the reward-value of rules appears to be computed via dedicated integrative neural circuitry.
By supporting an integrated representation of rules and their expected reward-value, the LPFC could play a central role in complex decision making by facilitating a comparison between the expected value of different rules, or between the value of a rule and the value of a habitual response. Second, our findings also provide a new perspective through which to interpret data obtained from classic measures of "cognitive control" with no conspicuous motivational component. In one of the most widely used measures of cognitive control, the Wisconsin Card Sorting Task (WCST), subjects sort cards according to a particular rule (e.g., based on the shape or colour of the stimuli on the cards) and must use feedback to determine which rule is currently appropriate. This task, which requires subjects to select between multiple potential rules to follow, is viewed as a measure of cognitive flexibility. Damage to the LPFC severely disrupts performance on this task (e.g., causes perseveration), and it has been suggested that this region is critical because it supports "task setting" (i.e., establishing and maintaining relevant stimulus-response associations) (Shallice, Stuss, Picton, Alexander, & Gillingham, 2007) or "inhibitory control over attentional selection and previously relevant task sets" (Aron, Robbins, & Poldrack, 2004; Dias, Robbins, & Roberts, 1996). Our findings, however, offer an alternative interpretation. Rather than supporting just the task-rules (i.e., stimulus-response associations) or an inhibitory process, the LPFC may be representing the reward-value of the different potential rules and dynamically updating the value of the rules based on the continuously presented feedback. By representing the changing value of the different rules, the LPFC would be especially important for determining when to switch between rules.
Thus, patients with LPFC damage may perseverate (i.e., continue to sort based on an initial rule even when it is no longer relevant) not because they have difficulty representing a new sorting rule, but rather because they have difficulty altering the value assigned to the original sorting rule versus a new rule. Another classic measure of cognitive control is the Go/No-Go task. In this task, subjects are presented with a stream of sequentially presented stimuli (e.g., letters) and make a button press response to every stimulus except for the one No-Go stimulus (e.g., the letter "X"). Because the No-Go stimulus is presented infrequently, subjects build up a pre-potent tendency (or habit) to make a response and then are occasionally required to use an arbitrary rule to override this pre-potent tendency (e.g., if the stimulus is an "X", then do not respond). Neuroimaging and lesion studies have demonstrated that the LPFC plays a crucial role in this task, putatively by supporting this rule-based inhibitory control over a pre-potent yet inappropriate response (Aron, Fletcher, Bullmore, Sahakian, & Robbins, 2003; Aron, et al., 2004; Chikazoe, Konishi, Asari, Jimura, & Miyashita, 2007). Again, the overwhelming interpretation is that the LPFC is playing a purely "cognitive" role. However, in order to expend the effort to implement a rule and override an automatic/pre-potent response, some element of motivation would arguably be required. Despite the absence of overt rewards in this task, subjects may be motivated to implement an effortful rule for several reasons: an internal drive to succeed, social compliance (with the instructions of the experimenter), or because correct task performance will ultimately lead to obtaining course credit (if they are an undergraduate student). Thus, it is quite possible that the role of the LPFC in the Go/No-Go task is to combine some element of motivation with the arbitrary rule demanded by the task.
This would provide a means by which subjects are willing to expend the effort necessary to implement the rule at the correct time. The arguments outlined here can be extended to any common measure of cognitive control. Finally, at a broader level, our findings inform current conceptions of self-control. There has been a long tradition in Western philosophical thought of viewing self-control in terms of using reason and social standards for conduct to override an impulsive action triggered by the passions (for a review see Hofmann, Friese, & Strack, 2009). The prevalent modern neuroscientific view suggesting that the LPFC implements rule-based action control, independent of any motivational process, is in accordance with this tradition. It has led some to suggest that self-control is accomplished by a "cognitive control" process supported by the LPFC that is needed to override more automatic reward-based processing supported by ventromedial and subcortical limbic regions (Diekhof & Gruber, 2010; Hare, Camerer, & Rangel, 2009; Heatherton & Wagner, 2011; Kirk, Harvey, & Montague, in press; Peters & Buchel, 2011). However, this idea begs the question posed at the outset of this paper: how does this rational, rule-based cognitive control mechanism become engaged in the first place? Our findings offer insight into this question and lead to a new conception of self-control. We found that the LPFC combines information about rules and rewards. Assuming that the LPFC is centrally involved in self-control, our data suggest that it may be more appropriate to view self-control as based on selecting the "proper" incentive to guide action, in particular an incentive that will motivate an individual to engage in rule processing, rather than as based on cognitive control overcoming the allure of a motivational incentive. Consider an example of self-control: the capacity to forego dessert when on a diet.
One interpretation is that this occurs due to cold, rational reasoning (i.e., cognitive control) overcoming an emotional impulse. An alternative (and perhaps more parsimonious) explanation is that the capacity to forego dessert will be effective to the extent that the individual assigns motivational value to being healthy, and combines this value with rule-based action control (e.g., being healthy is rewarding + if I want to maintain my health then I should not eat dessert). Thus, self-control (foregoing dessert) will occur on occasions in which the individual assigns a higher value to the abstract goal of being healthy as compared to the more concrete subjective experience of eating the dessert (for a discussion see Fujita, 2011). According to this perspective, motivational processing (and the rousing of passion) is not antithetical to self-control, but rather is a central component. In other words, self-control can be seen in terms of deciding what to value, rather than deciding whether to value at all. The LPFC should thus be considered as supporting a valuation system for complex representations (e.g., rules), rather than a cognitive control system that is qualitatively distinct from medial prefrontal and subcortical valuation systems for simpler representations (e.g., stimuli). This perspective leads to the novel prediction that individual differences in self-control may reflect differences in the capacity to represent "proper" motivational incentives (e.g., abstract/delayed/long-term rewards), just as much as they may reflect differences in the capacity to represent behaviour-guiding rules.

4.4 Interpretational and methodological issues

It is important to consider some interpretational and methodological issues pertaining to our study. We interpreted the repetition enhancement effect as most consistent with the extra time and processing resources necessary to establish and consolidate an integrative representation in mind.
However, it is important to consider an alternative interpretation of repetition enhancement effects. It has been suggested that repetition enhancement may reflect the activation of an additional cognitive process that is specific to repeated stimuli (R. Henson & M. Rugg, 2003). For instance, when an unfamiliar face is presented for a second time, it may elicit a recognition response that would be absent the first time around, when an initial representation of the face is being constructed (Henson, et al., 2000). According to this view, the repetition enhancement effect we observed would not reflect an integration process, but rather recognition that the instruction cue signals a particular rule and reward combination for a second time. However, three lines of reasoning make this an unlikely explanation for our findings. First, subjects were familiarized with the instruction cue images prior to scanning, and the same images were repeatedly presented during the experiment. Thus, the cues were familiar, making it unlikely that a recognition process would operate exclusively during the second instruction cue period; recognition should have occurred during each presentation of the instruction cue. Second, if a recognition process did occur during the second instruction cue period, it would be expected to occur whenever any piece of information repeated. However, the repetition enhancement effect we observed in the IFS and IPS was specific to repetition of rule and reward information. Finally, the IFS and IPS are part of the 'multiple demand network' that is engaged during rule use across a wide range of paradigms (Dosenbach, et al., 2006; Duncan, 2010), so it seems likely that these regions would be supporting the same process for the first and second instruction cues (i.e., establishing and maintaining a representation of the rules and reward in mind).
Thus, the most parsimonious account of the repetition enhancement effect we observed is that it corresponded with the additional time and resources necessary for constructing a novel rule-reward integrated representation. Another point to consider is the generality of our findings. In our paradigm, we manipulated the content of the first instruction cue, and hence how it primed the second instruction cue, but held the content of the second instruction cue constant. This allowed us to examine neural activity and the potential for fMRI-adaptation based on the identical event, ensuring that differences in neural activity were due to our priming manipulation and not other extraneous variables. However, the limitation of this approach is that it reduces generalization; our findings only allow us to make conclusions about how one particular rule and reward combination is integrated. Thus, we cannot definitively conclude that the IFS flexibly integrates any rule and reward combination required by current task demands. However, as noted above, the IFS is a central component of the 'multiple demand network' that is activated across a wide range of complex tasks (Dosenbach, et al., 2006; Duncan, 2010), and single neurons in the LPFC show adaptive coding, representing any currently relevant task information (for reviews see Duncan, 2001; Miller & Cohen, 2001). Thus, while future studies are clearly needed, we speculate that the IFS flexibly constructs integrative representations of rule and reward information on the fly to meet current task demands. Moreover, it is likely that the IFS plays a broad integrative role, and is capable of combining not only rule and reward information, but a diverse array of information (e.g., two different rules).
Consistent with this idea, a recent study directly manipulated the amount of perceptual information that needed to be integrated into a rule and found activation running along the right IFS, extending onto the MFG, and also along the right IPS (Hampshire, et al., 2011). The similarity with our findings, despite the fact that no motivational information was involved in that study, suggests that this right-lateralized frontoparietal network is recruited as a function of the amount of information that needs to be integrated, regardless of the specific content of that information. It is interesting to note, however, the considerable evidence suggesting that increasingly abstract rules and concepts are represented by increasingly anterior regions along the LPFC (Badre & D'Esposito, 2007; Christoff, et al., 2009; Koechlin, et al., 2003). Accordingly, although the LPFC may be part of a network that plays a general integrative role, the precise locus of activation along the LPFC may vary across studies according to the complexity/abstractness of the information being represented. Related to the idea that the IFS may play a broad integrative role, it could be argued that the IFS repetition enhancement effect in our study did not reflect an integration of rule and reward information, but rather simply reflected the need to integrate two things in mind. In other words, it could be argued that the representation of the monetary reward was not motivational in nature, but rather could have been a cold, cognitive representation of the symbolic money cue (essentially just another piece of information to hold in mind). However, this seems unlikely for several reasons. First, the IFS was functionally connected to numerous reward regions and changed correlation strength with some of these regions as a function of the salience of the reward information during the instruction cue period.
This is most consistent with the idea that the IFS was acquiring motivationally charged information from these regions. Second, subjects were faster to respond when a monetary reward was expected and self-report data confirmed that subjects were motivated by the monetary reward; this suggests that the representation of the monetary reward was highly motivational in nature. Finally, as noted above, abundant prior work has shown that the LPFC is sensitive to motivational information and activity in this region tightly correlates with the subjective value of decision options, similar to what is found in a classic motivation-related area, the OFC. Thus, the most parsimonious explanation of our findings is that the repetition enhancement effect observed in the LPFC reflected an integration of rule and reward information—with the representation of reward information being motivational in nature.

4.5 Questions remaining to be addressed by future studies

Several issues remain to be addressed by future studies. An important issue will be discerning exactly how the LPFC integrates rule and reward information. Our correlation analysis found that the IFS exhibited dynamic interactions with the rule and reward networks, consistent with the idea that it may re-map the information supplied by these regions into an abstract second-order integrative representation. Thus, it could be that the IFS simply sums afferent input arising from these networks. However, it seems more likely that the IFS may operate somewhat independently and that the relationship between the IFS and the reward network may depend on context. For example, some situations provide conflicting information about the best response option, or may provide conflicting reward options (e.g., an immediate reward versus a long-term reward). In this case, the IFS (or another LPFC sub-region) may not utilize information from the reward network. Consistent with this idea, Li et al.
(2011) found negative functional connectivity between the DLPFC and reward network areas (VMPFC, NAcc) when subjects used explicit instruction rather than feedback from the environment to learn the probability that a particular choice would be rewarded. Our task, in contrast, did not provide conflicting sources of reward information, and we observed positive functional connectivity. Thus, communication between the LPFC and the reward network may vary as a function of the usefulness of reward information provided by the environment versus other sources (e.g., long-term goals, social learning, etc.). Another issue for future work will be to disentangle the precise roles of the IFS and IPS. Similar to the IFS, we also found that a large activation cluster in the parietal cortex running along the IPS exhibited a repetition enhancement effect selective to repeated rule and reward information. The parietal and lateral prefrontal cortices are highly connected anatomically and co-activation of these regions is ubiquitously observed. Similar to the LPFC, the IPS has been implicated in both rule and reward processing (Boorman, Behrens, Woolrich, & Rushworth, 2009; Daw, O'Doherty, Dayan, Seymour, & Dolan, 2006; Dosenbach, et al., 2006; Louie & Glimcher, 2010; Platt & Glimcher, 1999; Seo, Barraclough, & Lee, 2009; Woolgar, et al., 2011). We speculate that the IPS helps to translate high-level goal representations in the LPFC into a more concrete format capable of directing specific actions. However, more work is necessary to evaluate the veracity of this hypothesis. A final point pertains to the paucity of existing studies that have used fMRI-adaptation paradigms to assess high-level cognitive and motivational functions.
While it may not be as straightforward to design these types of studies relative to studies employing fMRI-adaptation to examine visual processing, our results highlight the feasibility and usefulness of this type of paradigm to address questions pertaining to complex processes. An important finding of the current study was that when examining repetition of rule information alone our adaptation analysis identified major nodes of the rule network (Buckley, et al., 2009; Bunge, 2004; Christoff, et al., 2009; Christoff, et al., 2001; Donohue, et al., 2005; Dosenbach, et al., 2006; Duncan, 2010; Hampshire, et al., 2011; Koechlin, et al., 2003; Miller & Cohen, 2001; Sakai, 2008; Wallis, Anderson, et al., 2001) and when examining repetition of reward information alone our adaptation analysis identified major nodes of the reward network (Kable & Glimcher, 2007; Knutson & Cooper, 2005; Kringelbach & Rolls, 2004; O'Doherty, 2004; Plassmann, et al., 2007; Preuschoff, et al., 2008; Rushworth & Behrens, 2008; Schoenbaum, et al., 2009; Walton, et al., 2010). The similarity of our findings with prior work highlights the feasibility of using fMRI-adaptation to examine high-level cognitive and motivational processes. Thus, fMRI-adaptation may offer a valuable method of uncovering the representational content supported by different areas of the prefrontal cortex. In particular, a strength of this approach is that it places the emphasis on the explicit computations performed by different brain areas, rather than promoting interpretations based on vague functions such as "executive control" or "inhibition" that are often attributed to prefrontal regions.

4.6 Conclusions

In sum, we found repetition enhancement in the IFS—a sub-region of the LPFC—uniquely when both rule and reward information repeated, consistent with the idea that this region supports an integrative representation.
Corroborating evidence came from a functional connectivity analysis revealing significantly correlated activity between the IFS and regions of the rule and reward networks across the entire time-course. Moreover, the strength of the correlation with a subset of these regions changed during the instruction cue period depending upon whether novel or repeated rule and reward information was signaled. This suggests that the IFS may incorporate afferent input from distributed rule and reward systems into an integrated representation. Despite the abundant evidence in prior work suggesting that the LPFC may play a motivational role, there has been reluctance to adopt this perspective. Instead, the LPFC is still widely regarded as strictly a high-level "cognitive" area of the brain. Our findings provide direct evidence that this perspective is no longer tenable and suggest a novel view on the role of the LPFC in complex decision making and self-control.

REFERENCES

Aron, A. R., Fletcher, P. C., Bullmore, E. T., Sahakian, B. J., & Robbins, T. W. (2003). Stop-signal inhibition disrupted by damage to right inferior frontal gyrus in humans. Nature neuroscience, 6(2), 115-116.

Aron, A. R., Robbins, T. W., & Poldrack, R. A. (2004). Inhibition and the right inferior frontal cortex. Trends in cognitive sciences, 8(4), 170-177.

Badre, D., & D'Esposito, M. (2007). Functional magnetic resonance imaging evidence for a hierarchical organization of the prefrontal cortex. Journal of cognitive neuroscience, 19(12), 2082-2099.

Badre, D., & D'Esposito, M. (2009). Is the rostro-caudal axis of the frontal lobe hierarchical? Nature reviews. Neuroscience, 10(9), 659-669.

Barbas, H., & Mesulam, M. M. (1985). Cortical afferent input to the principalis region of the rhesus monkey. Neuroscience, 15(3), 619-637.

Barraclough, D. J., Conroy, M. L., & Lee, D. (2004). Prefrontal cortex and decision making in a mixed-strategy game. Nature neuroscience, 7(4), 404-410.

Beck, S. M., Locke, H.
S., Savine, A. C., Jimura, K., & Braver, T. S. (2010). Primary and secondary rewards differentially modulate neural activity dynamics during working memory. PloS one, 5(2), e9251.

Boorman, E. D., Behrens, T. E., Woolrich, M. W., & Rushworth, M. F. (2009). How green is the grass on the other side? Frontopolar cortex and the evidence in favor of alternative courses of action. Neuron, 62(5), 733-743.

Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., & Cohen, J. D. (2001). Conflict monitoring and cognitive control. Psychological review, 108(3), 624-652.

Brett, M., Anton, J.-L., Valabregue, R., & Poline, J.-B. (2002). Region of interest analysis using an SPM toolbox [abstract]. Paper presented at the 8th International Conference on Functional Mapping of the Human Brain, June 2-6, 2002.

Buckley, M. J., Mansouri, F. A., Hoda, H., Mahboubi, M., Browning, P. G., Kwok, S. C., et al. (2009). Dissociable components of rule-guided behavior depend on distinct medial and prefrontal regions. Science, 325(5936), 52-58.

Buckner, R. L., Goodman, J., Burock, M., Rotte, M., Koutstaal, W., Schacter, D., et al. (1998). Functional-anatomic correlates of object priming in humans revealed by rapid presentation event-related fMRI. Neuron, 20(2), 285-296.

Bunge, S. A. (2004). How we use rules to select actions: a review of evidence from cognitive neuroscience. Cognitive, affective & behavioral neuroscience, 4(4), 564-579.

Bunge, S. A., Kahn, I., Wallis, J. D., Miller, E. K., & Wagner, A. D. (2003). Neural circuits subserving the retrieval and maintenance of abstract rules. Journal of neurophysiology, 90(5), 3419-3428.

Charron, S., & Koechlin, E. (2010). Divided representation of concurrent goals in the human frontal lobes. Science, 328(5976), 360-363.

Chikazoe, J., Konishi, S., Asari, T., Jimura, K., & Miyashita, Y. (2007). Activation of right inferior frontal gyrus during response inhibition across response modalities.
Journal of cognitive neuroscience, 19(1), 69-80.

Chong, T. T., Cunnington, R., Williams, M. A., Kanwisher, N., & Mattingley, J. B. (2008). fMRI adaptation reveals mirror neurons in human inferior parietal cortex. Current biology : CB, 18(20), 1576-1580.

Chouinard, P. A., & Goodale, M. A. (2009). FMRI adaptation during performance of learned arbitrary visuomotor conditional associations. NeuroImage, 48(4), 696-706.

Christoff, K., Keramatian, K., Gordon, A. M., Smith, R., & Madler, B. (2009). Prefrontal organization of cognitive control according to levels of abstraction. Brain research, 1286, 94-105.

Christoff, K., Prabhakaran, V., Dorfman, J., Zhao, Z., Kroger, J. K., Holyoak, K. J., et al. (2001). Rostrolateral prefrontal cortex involvement in relational integration during reasoning. NeuroImage, 14(5), 1136-1149.

Craig, A. D. (2002). How do you feel? Interoception: the sense of the physiological condition of the body. Nature reviews. Neuroscience, 3(8), 655-666.

Croxson, P. L., Walton, M. E., O'Reilly, J. X., Behrens, T. E., & Rushworth, M. F. (2009). Effort-based cost-benefit valuation and the human brain. The Journal of neuroscience, 29(14), 4531-4541.

Daw, N. D., Niv, Y., & Dayan, P. (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature neuroscience, 8(12), 1704-1711.

Daw, N. D., O'Doherty, J. P., Dayan, P., Seymour, B., & Dolan, R. J. (2006). Cortical substrates for exploratory decisions in humans. Nature, 441(7095), 876-879.

Dias, R., Robbins, T. W., & Roberts, A. C. (1996). Dissociation in prefrontal cortex of affective and attentional shifts. Nature, 380(6569), 69-72.

Diekhof, E. K., & Gruber, O. (2010). When desire collides with reason: functional interactions between anteroventral prefrontal cortex and nucleus accumbens underlie the human ability to resist impulsive desires. The Journal of neuroscience, 30(4), 1488-1493.

Dobbins, I. G., Schnyer, D.
M., Verfaellie, M., & Schacter, D. L. (2004). Cortical activity reductions during repetition priming can result from rapid response learning. Nature, 428(6980), 316-319.

Donohue, S. E., Wendelken, C., & Bunge, S. A. (2008). Neural correlates of preparation for action selection as a function of specific task demands. Journal of cognitive neuroscience, 20(4), 694-706.

Donohue, S. E., Wendelken, C., Crone, E. A., & Bunge, S. A. (2005). Retrieving rules for behavior from long-term memory. NeuroImage, 26(4), 1140-1149.

Dosenbach, N. U., Visscher, K. M., Palmer, E. D., Miezin, F. M., Wenger, K. K., Kang, H. C., et al. (2006). A core system for the implementation of task sets. Neuron, 50(5), 799-812.

Duncan, J. (2001). An adaptive coding model of neural function in prefrontal cortex. Nature reviews. Neuroscience, 2(11), 820-829.

Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends in cognitive sciences, 14(4), 172-179.

Figner, B., Knoch, D., Johnson, E. J., Krosch, A. R., Lisanby, S. H., Fehr, E., et al. (2010). Lateral prefrontal cortex and self-control in intertemporal choice. Nature neuroscience, 13(5), 538-539.

Frank, M. J., & Badre, D. (in press). Mechanisms of hierarchical reinforcement learning in corticostriatal circuits 1: Computational analysis. Cerebral cortex.

Fujita, K. (2011). On conceptualizing self-control as more than the effortful inhibition of impulses. Personality and Social Psychology Review.

Fuster, J. (2008). The prefrontal cortex (4th ed.). London: Academic Press.

Gillihan, S. J., Xia, C., Padon, A. A., Heberlein, A. S., Farah, M. J., & Fellows, L. K. (2011). Contrasting roles for lateral and ventromedial prefrontal cortex in transient and dispositional affective experience. Social cognitive and affective neuroscience, 6(1), 128-137.

Glascher, J., Daw, N., Dayan, P., & O'Doherty, J. P. (2010).
States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron, 66(4), 585-595.

Gray, J. R., Braver, T. S., & Raichle, M. E. (2002). Integration of emotion and cognition in the lateral prefrontal cortex. Proceedings of the National Academy of Sciences of the United States of America, 99(6), 4115-4120.

Grill-Spector, K., Henson, R., & Martin, A. (2006). Repetition and the brain: neural models of stimulus-specific effects. Trends in cognitive sciences, 10(1), 14-23.

Grill-Spector, K., Kushnir, T., Edelman, S., Avidan, G., Itzchak, Y., & Malach, R. (1999). Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron, 24(1), 187-203.

Grill-Spector, K., Kushnir, T., Edelman, S., Itzchak, Y., & Malach, R. (1998). Cue-invariant activation in object-related areas of the human occipital lobe. Neuron, 21(1), 191-202.

Grill-Spector, K., & Malach, R. (2001). fMR-adaptation: a tool for studying the functional properties of human cortical neurons. Acta psychologica, 107(1-3), 293-321.

Hampshire, A., Thompson, R., Duncan, J., & Owen, A. M. (2011). Lateral prefrontal cortex subregions make dissociable contributions during fluid reasoning. Cerebral cortex, 21(1), 1-10.

Hampton, A. N., Bossaerts, P., & O'Doherty, J. P. (2006). The role of the ventromedial prefrontal cortex in abstract state-based inference during decision making in humans. The Journal of neuroscience, 26(32), 8360-8367.

Hare, T. A., Camerer, C. F., & Rangel, A. (2009). Self-control in decision-making involves modulation of the vmPFC valuation system. Science, 324(5927), 646-648.

Hazy, T. E., Frank, M. J., & O'Reilly, R. C. (2007). Towards an executive without a homunculus: computational models of the prefrontal cortex/basal ganglia system. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 362(1485), 1601-1613.

Heatherton, T. F., & Wagner, D. D. (2011).
Cognitive neuroscience of self-regulation failure. Trends in cognitive sciences, 15(3), 132-139.

Henson, R., & Rugg, M. D. (2003). Neural response suppression, haemodynamic repetition effects, and behavioural priming. Neuropsychologia, 41(3), 263-270.

Henson, R., Shallice, T., & Dolan, R. (2000). Neuroimaging evidence for dissociable forms of repetition priming. Science, 287(5456), 1269-1272.

Hikosaka, K., & Watanabe, M. (2000). Delay activity of orbital and lateral prefrontal neurons of the monkey varying with different rewards. Cerebral cortex, 10(3), 263-271.

Histed, M. H., Pasupathy, A., & Miller, E. K. (2009). Learning substrates in the primate prefrontal cortex and striatum: sustained activity related to successful actions. Neuron, 63(2), 244-253.

Hofmann, W., Friese, M., & Strack, F. (2009). Impulse and self-control from a dual-systems perspective. Perspectives on Psychological Science, 4(2), 162-176.

Huettel, S. A., Song, A. W., & McCarthy, G. (2005). Decisions under uncertainty: probabilistic context influences activation of prefrontal and parietal cortices. The Journal of neuroscience, 25(13), 3304-3311.

Jenkins, A. C., Macrae, C. N., & Mitchell, J. P. (2008). Repetition suppression of ventromedial prefrontal activity during judgments of self and others. Proceedings of the National Academy of Sciences of the United States of America, 105(11), 4507-4512.

Jimura, K., Locke, H. S., & Braver, T. S. (2010). Prefrontal cortex mediation of cognitive enhancement in rewarding motivational contexts. Proceedings of the National Academy of Sciences of the United States of America, 107(19), 8871-8876.

Kable, J. W., & Glimcher, P. W. (2007). The neural correlates of subjective value during intertemporal choice. Nature neuroscience, 10(12), 1625-1633.

Kennerley, S. W., & Wallis, J. D. (2009).
Reward-dependent modulation of working memory in lateral prefrontal cortex. The Journal of neuroscience, 29(10), 3259-3270.

Kilner, J. M., Neal, A., Weiskopf, N., Friston, K. J., & Frith, C. D. (2009). Evidence of mirror neurons in human inferior frontal gyrus. The Journal of neuroscience, 29(32), 10153-10159.

Kim, S., Hwang, J., & Lee, D. (2008). Prefrontal coding of temporally discounted values during intertemporal choice. Neuron, 59(1), 161-172.

Kirk, U., Harvey, A., & Montague, P. R. (in press). Domain expertise insulates against judgment bias by monetary favors through a modulation of ventromedial prefrontal cortex. Proceedings of the National Academy of Sciences of the United States of America.

Knoch, D., Gianotti, L. R., Pascual-Leone, A., Treyer, V., Regard, M., Hohmann, M., et al. (2006). Disruption of right prefrontal cortex by low-frequency repetitive transcranial magnetic stimulation induces risk-taking behavior. The Journal of neuroscience, 26(24), 6469-6472.

Knutson, B., & Cooper, J. C. (2005). Functional magnetic resonance imaging of reward prediction. Current opinion in neurology, 18(4), 411-417.

Knutson, B., Fong, G. W., Bennett, S. M., Adams, C. M., & Hommer, D. (2003). A region of mesial prefrontal cortex tracks monetarily rewarding outcomes: characterization with rapid event-related fMRI. NeuroImage, 18(2), 263-272.

Koechlin, E., Ody, C., & Kouneiher, F. (2003). The architecture of cognitive control in the human prefrontal cortex. Science, 302(5648), 1181-1185.

Koechlin, E., & Summerfield, C. (2007). An information theoretical approach to prefrontal executive function. Trends in cognitive sciences, 11(6), 229-235.

Kouneiher, F., Charron, S., & Koechlin, E. (2009). Motivation and cognitive control in the human prefrontal cortex. Nature neuroscience, 12(7), 939-945.

Kringelbach, M. L., & Rolls, E. T. (2004). The functional neuroanatomy of the human orbitofrontal cortex: evidence from neuroimaging and neuropsychology.
Progress in neurobiology, 72(5), 341-372.

Leon, M. I., & Shadlen, M. N. (1999). Effect of expected reward magnitude on the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron, 24(2), 415-425.

Levy, R., & Dubois, B. (2006). Apathy and the functional anatomy of the prefrontal cortex-basal ganglia circuits. Cerebral cortex, 16(7), 916-928.

Li, J., Delgado, M. R., & Phelps, E. A. (2011). How instructed knowledge modulates the neural systems of reward learning. Proceedings of the National Academy of Sciences of the United States of America, 108(1), 55-60.

Lingnau, A., Gesierich, B., & Caramazza, A. (2009). Asymmetric fMRI adaptation reveals no evidence for mirror neurons in humans. Proceedings of the National Academy of Sciences of the United States of America, 106(24), 9925-9930.

Locke, H. S., & Braver, T. S. (2008). Motivational influences on cognitive control: behavior, brain activation, and individual differences. Cognitive, affective & behavioral neuroscience, 8(1), 99-112.

Louie, K., & Glimcher, P. W. (2010). Separating value from choice: delay discounting activity in the lateral intraparietal area. The Journal of neuroscience, 30(16), 5498-5507.

Lundqvist, D., Flykt, A., & Ohman, A. (1998). Karolinska Directed Emotional Faces [Database of standardized facial images]. Psychology Section, Department of Clinical Neuroscience, Karolinska Hospital, S-171 76 Stockholm, Sweden.

Macey, P. M., Macey, K. E., Kumar, R., & Harper, R. M. (2004). A method for removal of global effects from fMRI time series. NeuroImage, 22(1), 360-366.

Mansouri, F. A., Matsumoto, K., & Tanaka, K. (2006). Prefrontal cell activities related to monkeys' success and failure in adapting to rule changes in a Wisconsin Card Sorting Test analog. The Journal of neuroscience, 26(10), 2745-2756.

Martinez, A. M., & Benavente, R. (1998). The AR Face Database. CVC Technical Report #24.

Matsumoto, K., Suzuki, W., & Tanaka, K. (2003).
Neuronal correlates of goal-based motor selection in the prefrontal cortex. Science, 301(5630), 229-232.

McClure, S. M., Laibson, D. I., Loewenstein, G., & Cohen, J. D. (2004). Separate neural systems value immediate and delayed monetary rewards. Science, 306(5695), 503-507.

McGuire, J. T., & Botvinick, M. M. (2010). Prefrontal cortex, cognitive control, and the registration of decision costs. Proceedings of the National Academy of Sciences of the United States of America, 107(17), 7922-7926.

Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual review of neuroscience, 24, 167-202.

Morris, R., Pandya, D. N., & Petrides, M. (1999). Fiber system linking the mid-dorsolateral frontal cortex with the retrosplenial/presubicular region in the rhesus monkey. The Journal of comparative neurology, 407(2), 183-192.

Noonan, M. P., Walton, M. E., Behrens, T. E., Sallet, J., Buckley, M. J., & Rushworth, M. F. (2010). Separate value comparison and learning mechanisms in macaque medial and lateral orbitofrontal cortex. Proceedings of the National Academy of Sciences of the United States of America, 107(47), 20547-20552.

O'Doherty, J. P. (2004). Reward representations and reward-related learning in the human brain: insights from neuroimaging. Current opinion in neurobiology, 14(6), 769-776.

Paradiso, S., Chemerinski, E., Yazici, K. M., Tartaro, A., & Robinson, R. G. (1999). Frontal lobe syndrome reassessed: comparison of patients with lateral or medial frontal brain damage. Journal of neurology, neurosurgery, and psychiatry, 67(5), 664-667.

Pessoa, L. (2008). On the relationship between emotion and cognition. Nature reviews. Neuroscience, 9(2), 148-158.

Peters, J., & Buchel, C. (2011). The neural mechanisms of inter-temporal decision-making: understanding variability. Trends in cognitive sciences, 15(5), 227-239.

Petrides, M., & Pandya, D. N. (1988).
Association fiber pathways to the frontal cortex from the superior temporal region in the rhesus monkey. The Journal of comparative neurology, 273(1), 52-66.

Petrides, M., & Pandya, D. N. (1999). Dorsolateral prefrontal cortex: comparative cytoarchitectonic analysis in the human and the macaque brain and corticocortical connection patterns. The European journal of neuroscience, 11(3), 1011-1036.

Petrides, M., & Pandya, D. N. (2002). Comparative cytoarchitectonic analysis of the human and the macaque ventrolateral prefrontal cortex and corticocortical connection patterns in the monkey. The European journal of neuroscience, 16(2), 291-310.

Petrides, M., & Pandya, D. N. (2006). Efferent association pathways originating in the caudal prefrontal cortex in the macaque monkey. The Journal of comparative neurology, 498(2), 227-251.

Phillips, P. J., Wechsler, H., Huang, J., & Rauss, P. J. (1998). The FERET database and evaluation procedure for face-recognition algorithms. Image and Vision Computing, 16(5), 295-306.

Plassmann, H., O'Doherty, J., & Rangel, A. (2007). Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. The Journal of neuroscience, 27(37), 9984-9988.

Plassmann, H., O'Doherty, J. P., & Rangel, A. (2010). Appetitive and aversive goal values are encoded in the medial orbitofrontal cortex at the time of decision making. The Journal of neuroscience, 30(32), 10799-10808.

Platt, M. L., & Glimcher, P. W. (1999). Neural correlates of decision variables in parietal cortex. Nature, 400(6741), 233-238.

Pochon, J. B., Levy, R., Fossati, P., Lehericy, S., Poline, J. B., Pillon, B., et al. (2002). The neural system that bridges reward and cognition in humans: an fMRI study. Proceedings of the National Academy of Sciences of the United States of America, 99(8), 5669-5674.

Preuschoff, K., Quartz, S. R., & Bossaerts, P. (2008). Human insula activation reflects risk prediction errors as well as risk.
The Journal of neuroscience, 28(11), 2745-2752.

Race, E. A., Shanker, S., & Wagner, A. D. (2009). Neural priming in human frontal cortex: multiple forms of learning reduce demands on the prefrontal executive system. Journal of cognitive neuroscience, 21(9), 1766-1781.

Ridderinkhof, K. R., Ullsperger, M., Crone, E. A., & Nieuwenhuis, S. (2004). The role of the medial frontal cortex in cognitive control. Science, 306(5695), 443-447.

Rudebeck, P. H., Bannerman, D. M., & Rushworth, M. F. (2008). The contribution of distinct subregions of the ventromedial frontal cortex to emotion, social behavior, and decision making. Cognitive, affective & behavioral neuroscience, 8(4), 485-497.

Rudebeck, P. H., Behrens, T. E., Kennerley, S. W., Baxter, M. G., Buckley, M. J., Walton, M. E., et al. (2008). Frontal cortex subregions play distinct roles in choices between actions and stimuli. The Journal of neuroscience, 28(51), 13775-13785.

Rushworth, M. F., & Behrens, T. E. (2008). Choice, uncertainty and value in prefrontal and cingulate cortex. Nature neuroscience, 11(4), 389-397.

Rushworth, M. F., Behrens, T. E., Rudebeck, P. H., & Walton, M. E. (2007). Contrasting roles for cingulate and orbitofrontal cortex in decisions and social behaviour. Trends in cognitive sciences, 11(4), 168-176.

Sakai, K. (2008). Task set and prefrontal cortex. Annual review of neuroscience, 31, 219-245.

Sakai, K., & Passingham, R. E. (2006). Prefrontal set activity predicts rule-specific neural processing during subsequent cognitive performance. The Journal of neuroscience, 26(4), 1211-1218.

Salimpoor, V. N., Chang, C., & Menon, V. (2010). Neural basis of repetition priming during mathematical cognition: repetition suppression or repetition enhancement? Journal of cognitive neuroscience, 22(4), 790-805.

Samejima, K., & Doya, K. (2007). Multiple representations of belief states and action values in corticobasal ganglia loops. Annals of the New York Academy of Sciences, 1104, 213-228.

Savine, A.
C., & Braver, T. S. (2010). Motivated cognitive control: reward incentives modulate preparatory neural activity during task-switching. The Journal of neuroscience, 30(31), 10294-10305.

Schoenbaum, G., & Esber, G. R. (2010). How do you (estimate you will) like them apples? Integration as a defining trait of orbitofrontal function. Current opinion in neurobiology, 20(2), 205-211.

Schoenbaum, G., Roesch, M. R., Stalnaker, T. A., & Takahashi, Y. K. (2009). A new perspective on the role of the orbitofrontal cortex in adaptive behaviour. Nature reviews. Neuroscience, 10(12), 885-892.

Seo, H., Barraclough, D. J., & Lee, D. (2007). Dynamic signals related to choices and outcomes in the dorsolateral prefrontal cortex. Cerebral cortex, 17 Suppl 1, i110-117.

Seo, H., Barraclough, D. J., & Lee, D. (2009). Lateral intraparietal cortex and reinforcement learning during a mixed-strategy game. The Journal of neuroscience, 29(22), 7278-7289.

Shallice, T., Stuss, D. T., Picton, T. W., Alexander, M. P., & Gillingham, S. (2007). Multiple effects of prefrontal lesions on task-switching. Frontiers in human neuroscience, 1, 2.

Simmons, J. M., Minamimoto, T., Murray, E. A., & Richmond, B. J. (2010). Selective ablations reveal that orbital and lateral prefrontal cortex play different roles in estimating predicted reward value. The Journal of neuroscience, 30(47), 15878-15887.

Simon, D. A., & Daw, N. D. (2011). Neural correlates of forward planning in a spatial decision task in humans. The Journal of neuroscience, 31(14), 5526-5539.

Tanaka, S. C., Doya, K., Okada, G., Ueda, K., Okamoto, Y., & Yamawaki, S. (2004). Prediction of immediate and future rewards differentially recruits cortico-basal ganglia loops. Nature neuroscience, 7(8), 887-893.

Tobler, P. N., Christopoulos, G. I., O'Doherty, J. P., Dolan, R. J., & Schultz, W. (2009). Risk-dependent reward value signal in human prefrontal cortex.
Proceedings of the National Academy of Sciences of the United States of America, 106(17), 7185-7190.

Turk-Browne, N. B., Yi, D. J., Leber, A. B., & Chun, M. M. (2007). Visual quality determines the direction of neural repetition effects. Cerebral cortex, 17(2), 425-433.

Wallis, J. D. (2007). Orbitofrontal cortex and its contribution to decision-making. Annual review of neuroscience, 30, 31-56.

Wallis, J. D., Anderson, K. C., & Miller, E. K. (2001). Single neurons in prefrontal cortex encode abstract rules. Nature, 411(6840), 953-956.

Wallis, J. D., Dias, R., Robbins, T. W., & Roberts, A. C. (2001). Dissociable contributions of the orbitofrontal and lateral prefrontal cortex of the marmoset to performance on a detour reaching task. The European journal of neuroscience, 13(9), 1797-1808.

Wallis, J. D., & Miller, E. K. (2003). Neuronal activity in primate dorsolateral and orbital prefrontal cortex during performance of a reward preference task. The European journal of neuroscience, 18(7), 2069-2081.

Walton, M. E., Behrens, T. E., Buckley, M. J., Rudebeck, P. H., & Rushworth, M. F. (2010). Separable learning systems in the macaque brain and the role of orbitofrontal cortex in contingent learning. Neuron, 65(6), 927-939.

Watanabe, M. (1996). Reward expectancy in primate prefrontal neurons. Nature, 382(6592), 629-632.

Watanabe, M. (2007). Role of anticipated reward in cognitive behavioral control. Current opinion in neurobiology, 17(2), 213-219.

Woolgar, A., Thompson, R., Bor, D., & Duncan, J. (2011). Multi-voxel coding of stimuli, rules, and responses in human frontoparietal cortex. NeuroImage, 56(2), 744-752.

Worsley, K. J., Evans, A. C., Marrett, S., & Neelin, P. (1992). A three-dimensional statistical analysis for CBF activation studies in human brain. Journal of cerebral blood flow and metabolism, 12(6), 900-918.

Xu, Y., Turk-Browne, N. B., & Chun, M. M. (2007). Dissociating task performance from fMRI repetition attenuation in ventral visual cortex.
The Journal of neuroscience, 27(22), 5981-5985.

Yeterian, E. H., Pandya, D. N., Tomaiuolo, F., & Petrides, M. (2011). The cortical connectivity of the prefrontal cortex in the monkey brain. Cortex; a journal devoted to the study of the nervous system and behavior.
