UBC Theses and Dissertations
A dissociable role for dopamine receptors in the basolateral amygdala in risk/reward decision making Larkin, Joshua Daniel 2015

Full Text
A Dissociable Role for Dopamine Receptors in the Basolateral Amygdala in Risk/Reward Decision Making

by

Joshua Daniel Larkin
B.S., University of Washington, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Neuroscience)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April, 2015

© Joshua Daniel Larkin, 2015

ABSTRACT

Different aspects of cost/benefit decision making involving uncertain rewards are facilitated by distributed corticolimbic circuits linking different regions of the prefrontal cortex (PFC), ventral striatum and the basolateral amygdala (BLA). Dopamine (DA) also plays an integral role in promoting choice of larger, uncertain rewards, as manipulations of DA transmission in the PFC or nucleus accumbens alter risky choice. However, considerably less is known about how DA activity within the BLA regulates risk-based decision making. The present study assessed the effects of DA receptor modulation within the BLA on risk-based decision making, using a probabilistic discounting task. Rats were trained to choose between a small/certain lever (1 sugar pellet) and a large/risky lever (4 sugar pellets, delivered in a probabilistic manner). The odds of obtaining the larger reward decreased systematically across 4 blocks of trials (100%, 50%, 25% and 12.5%) during a daily session. Animals received counterbalanced intra-BLA microinfusions of the D1 receptor antagonist SCH23390, the D2 antagonist eticlopride, the D1 agonist SKF81297 or the D2 agonist quinpirole. Blockade of D1 receptors in the BLA caused rats to discount the larger/uncertain reward significantly more than after saline infusions, reducing risky choice most prominently during blocks in which delivery of the larger reward was uncertain.
Further, stimulation of the D1 receptor produced an optimization effect on choice behavior, increasing risky choice when it was more advantageous and decreasing risky choice when it was not. Thus, D1 receptors in the BLA appear to play an important role in facilitating optimal decision making and promoting choice of larger, uncertain rewards. D2 receptor blockade produced a significant reduction in reward sensitivity, whereas stimulation of the D2 receptor did not affect choice behavior. More generally, these findings highlight a key contribution of mesoamygdala DA to regulating certain aspects of cost/benefit decision making, particularly via the D1 receptor, which may be the primary mechanism through which DA exerts its effects in the BLA. These findings may have important implications for understanding the mechanisms underlying disruptions in decision making and reward processing in psychiatric disorders linked to dysfunction of the DA system and the amygdala.

PREFACE:

Research for this thesis was approved by the UBC Animal Care Committee, application number A14-0210.

TABLE OF CONTENTS:

ABSTRACT
PREFACE
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
I. INTRODUCTION
II. MATERIALS AND METHODS
  ANIMALS
  APPARATUS
  LEVER PRESS TRAINING
  SIDE PREFERENCE TRAINING
  PROBABILISTIC DISCOUNTING TASK
  TRAINING PROCEDURE, SURGERY AND MICROINFUSION PROTOCOL
  REWARD MAGNITUDE DISCRIMINATION TASK
  HISTOLOGY
  DATA ANALYSIS
III. RESULTS
  D1 RECEPTOR BLOCKADE
  REWARD MAGNITUDE DISCRIMINATION
  D2 RECEPTOR BLOCKADE
  D1 RECEPTOR STIMULATION
  D2 RECEPTOR STIMULATION
  INACCURATE PLACEMENT
IV. DISCUSSION
V. SUMMARY AND CONCLUSION
BIBLIOGRAPHY

LIST OF TABLES:

TABLE 1: PERFORMANCE MEASURES

LIST OF FIGURES:

FIGURE 1: D1 BLOCKADE EFFECTS ON RISK DISCOUNTING
FIGURE 2: D1 BLOCKADE EFFECTS ON REWARD MAGNITUDE TASK
FIGURE 3: D2 BLOCKADE EFFECTS ON RISK DISCOUNTING
FIGURE 4: D1 STIMULATION EFFECTS ON RISK DISCOUNTING
FIGURE 5: D2 STIMULATION EFFECTS ON RISK DISCOUNTING
FIGURE 6: INACCURATE PLACEMENT ANALYSIS
FIGURE 7: HISTOLOGY – ACCURATE PLACEMENTS
FIGURE 8: HISTOLOGY – INACCURATE PLACEMENTS

ACKNOWLEDGEMENTS:

I would like to thank all those who aided me along the journey of conducting this research. Most prominently, I would like to convey my most sincere appreciation to Dr. Stan Floresco for giving me this wonderful opportunity to conduct research in his lab and grow as a student, scientist, writer and person. I would also like to thank all those who aided with the research, analysis and conceptual development of this project, including Maric Tse, Patrick Piantadosi, Nicole Jenni and Colin Stopper, among many others. Further, I would like to thank my parents, Chris and Sue Larkin, for their unwavering support over the last 3 years; my girlfriend, Amanda Anderson, whose help through this research and writing process was immeasurable; and my siblings, Delaney and Jackson Larkin. I would not be where I am today without the many people who helped me along the way.

I. INTRODUCTION

Economic decision making involves weighing the costs and benefits of different courses of action in order to maximize an intended outcome. These sorts of decisions require judging several aspects of a situation: in the present case, the risk of not receiving the desired outcome, but in other cases the effort required to obtain that outcome or a temporal delay that must be overcome. Cost/benefit decisions involving risk of reward receipt frequently offer a choice between an option that yields a small reward with relatively low risk and an option with a larger possible reward but greater associated risk. For example, buying stock in an up-and-coming company has the chance to be highly financially rewarding (benefit), but could also result in large losses should the company falter (cost). Another example is deciding whether or not to double down while playing blackjack. In humans and other animals, studies on the neural basis of cost/benefit decision making have revealed that these types of choices are mediated by distributed cortico-limbic-striatal circuits that bias choices towards more advantageous outcomes (Floresco et al., 2008). These circuits include the prefrontal cortex (PFC), the nucleus accumbens (NAc) and the basolateral amygdala (BLA). Each brain region within this circuit plays a distinct and complementary role in facilitating advantageous decision making in order to maximize the chance of the best outcome.

Research into the neural basis of cost/benefit decision making in humans has benefitted from observing patients with region-specific brain damage in order to elucidate the roles these regions play in making a choice. The seminal work of Antonio Damasio (Bechara et al., 1994) has provided important insights into the brain regions involved in decision making.
Patients with damage to the ventromedial PFC were tested on the Iowa Gambling Task (IGT) and failed to update their choice behavior even after becoming aware of the most advantageous strategy, implying a direct role for the PFC in cost/benefit decision making. Subsequently, the amygdala (AMG) was also found to play a role in decision making on the IGT (Bechara et al., 1999). Damage to the AMG was associated with suboptimal patterns of decision making similar to those of patients with PFC damage, with the primary difference being that AMG-damaged patients lacked a skin conductance response (SCR) prior to a disadvantageous choice, whereas PFC-damaged patients showed an SCR similar to controls prior to a disadvantageous choice. This indicates that both of these regions are relied on, but that they play distinct roles in guiding optimal decision making.

Additional insights into the neural mechanisms underlying decision making can be obtained from studies with laboratory animals, which allow for greater experimental control. A number of behavioral assays have been developed to study specific components of risk/reward decision making. One approach entails the use of a probabilistic discounting task, a well-established cost/benefit decision making assay used in rodent models (Cardinal & Howes, 2005; St. Onge & Floresco, 2009; Shimp et al., 2014). This task requires an animal to choose between two options, one of which delivers a small yet certain reward (1 pellet), and another that delivers a large reward (4 pellets) on a probabilistic schedule (100%, 50%, 25%, 12.5%; the large/risky option). This assay has previously been used to elucidate the specific roles of brain regions in cost/benefit decision making, as well as some of the neurotransmitter systems involved (Stopper et al., 2011; St. Onge et al., 2009; St. Onge et al., 2010; Ghods-Sharifi et al., 2009).
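The long-run payoff logic of this schedule can be made concrete with simple expected-value arithmetic. The sketch below is purely illustrative (the variable names are ours); the reward sizes and probabilities come from the task description above:

```python
# Expected pellets per choice: the certain lever always pays 1 pellet;
# the risky lever pays 4 pellets with probability p.
CERTAIN_PELLETS = 1
RISKY_PELLETS = 4

for p in (1.0, 0.5, 0.25, 0.125):
    ev_risky = RISKY_PELLETS * p
    if ev_risky > CERTAIN_PELLETS:
        better = "risky"
    elif ev_risky < CERTAIN_PELLETS:
        better = "certain"
    else:
        better = "equal"
    print(f"p={p}: EV(risky)={ev_risky} pellets -> {better}")
```

The arithmetic shows why the schedule spans the interesting range: the risky lever pays more on average at 100% and 50%, the two options are exactly equivalent at 25% (4 × 0.25 = 1), and the certain lever wins at 12.5%.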
Three key regions implicated in mediating probabilistic discounting are the PFC, the NAc and the BLA. The BLA is reciprocally connected with the PFC and sends unidirectional projections to the NAc, and BLA-NAc projections appear to bias choice towards larger, uncertain rewards (St. Onge et al., 2012). Local inactivation of these regions via baclofen/muscimol (GABAB/GABAA agonists) produced differing behavioral phenotypes. BLA and NAc inactivation each produced a significant reduction in choice of larger, uncertain rewards (Ghods-Sharifi et al., 2009; Stopper et al., 2011), indicating that these regions may play a similar role in promoting risky choice. This notion is supported by the finding that functional disconnection of the BLA and NAc reduced risky choice (St. Onge et al., 2012), indicating that when this circuit is functioning normally, projections from the BLA to the NAc bias animals towards riskier choices. Inactivation of the prelimbic region of the medial PFC, in contrast, prevents animals from updating their choice bias over the course of a daily session (St. Onge et al., 2011). When probabilities decrease (100%→12.5%) during the probabilistic discounting task, animals normally display a strong bias for the large/risky reward at the start of the session and then choose it less often as reward probabilities decrease, yet inactivation of the PFC causes rats to choose the risky option more often over the session. Conversely, in the ascending probability condition (12.5%→100%), the initial bias is towards the small/certain option, and animals increase their choice of the risky option as reward probabilities become more favorable; here, PFC inactivation causes animals to choose the risky option significantly less often than controls. This pattern of deficits suggests that the effect of PFC inactivation is attributable to an inability to update choice bias as the probability of receiving the large reward changes.
Additional experiments using selective inactivation of top-down (PFC to BLA) and bottom-up (BLA to PFC) pathways have shown that even though the BLA and PFC are reciprocally connected, it is the PFC that exerts top-down control over the BLA to influence risky choice bias (St. Onge et al., 2012). Inactivation of the top-down projections increased risky choice, indicating that the PFC may play a role in dampening the urge to continue choosing the risky option even when it is no longer beneficial, whereas inactivation of the bottom-up projections had no effect on risky choice. This indicates that the influence of the PFC over the BLA guides animals towards the most optimal choice.

Behavioral and electrophysiological studies have implicated a strong role for the BLA in reward-seeking and goal-directed behavior. The BLA facilitates cue-related reward seeking, as inactivation of the BLA impaired cue responding (Ishikawa et al., 2008), and in vivo recordings from freely moving rats performing an 8-arm radial maze task showed that anticipatory firing of BLA neurons differentiated between large and small rewards (Pratt and Mizumori, 1998). Further, neural correlates of changes in reward amount have been observed in the BLA (Roesch et al., 2010): changes in the amount of reward received, regardless of whether it increased or decreased relative to previous experience, produced increases in neuronal firing, and BLA neurons responded more strongly to dynamic rewards than to stable ones. This indicates that the BLA responds to changing reward, potentially tracking reward magnitude during goal-directed behavior and updating expectations based on previous experience. The PFC is one input to the BLA that is believed to play a key role in modifying this activity.
The PFC can exert inhibitory control over the output of the BLA (Grace and Rosenkranz, 2002) via excitation of intrinsic GABAergic interneurons, which inhibit the firing of BLA principal neurons. As noted above, disconnection of the PFC and BLA resulted in a significant increase in risky choice on the probabilistic discounting task (St. Onge et al., 2012), indicating that the PFC may act to dampen biases towards larger, uncertain rewards when risky choices are less optimal. These electrophysiological data are consistent with the pharmacological inactivation data, suggesting that the BLA plays an active role in reward assessment and in guiding goal-directed behavior generally, and probabilistic discounting specifically (Ghods-Sharifi et al., 2009), where it appears to promote risky choice even when that is not the most beneficial option.

Mesolimbic dopamine (DA) activity is critical for normal functioning of these decision making circuits, and performance of the probabilistic discounting task is highly dependent on DA (St. Onge and Floresco, 2009; St. Onge et al., 2012; Stopper et al., 2013). Systemic injections of flupenthixol (a D1 and D2 antagonist) reliably reduced choice of the risky lever (St. Onge et al., 2010); endogenous activation of DA receptors at a systems level therefore biases animals towards choosing larger/risky rewards. Additionally, excessive increases in DA appear to disrupt the ability to shift decision biases in response to changes in reward probabilities. Systemic administration of amphetamine (AMPH) increased risky choice when probabilities were 'descending' (100%→12.5%) but decreased risky choice when probabilities were 'ascending' (12.5%→100%), similar to the effects of medial PFC inactivation (St. Onge and Floresco, 2009). This suggests that these animals exhibit an inability to update choice bias as probabilities change dynamically.
Manipulations of DA receptor activity using receptor-specific drugs have produced dissociable effects on choice bias during the probabilistic discounting task. Systemic blockade of D1 or D2 receptors, via SCH23390 and eticlopride respectively, each produced a significant reduction in risky choice (St. Onge et al., 2009). Together, these results show that DA plays a role in modulating bias for the risky option on the probabilistic discounting task.

These systemic findings are further complemented by region-specific administration of dopaminergic drugs within the PFC and NAc (St. Onge et al., 2011; Stopper et al., 2013). Following D1 blockade in either region, animals tended to choose the risky option less and displayed an increase in negative feedback sensitivity, being more likely to shift to the smaller/certain reward after a risky loss (i.e., increased lose-shift behavior). Conversely, administration of D1 agonists exerted differential effects depending on the terminal region: a D1 agonist administered into the PFC had no significant effect on risky choice (St. Onge et al., 2011), whereas D1 stimulation in the NAc optimized decision making (Stopper et al., 2013), increasing bias for the risky lever when it was more beneficial and decreasing choice of the risky lever when it was less advantageous.

Manipulation of D2 receptors has also produced diverse effects on probabilistic discounting. In contrast to the effects of systemic administration, blockade of D2 receptors within the PFC increased risky choice (St. Onge et al., 2011). A similar manipulation in the NAc did not alter risky choice relative to saline, although it did increase response latencies and reduce locomotor activity (Stopper et al., 2013). Stimulation of D2 receptors in the PFC profoundly blunted the ability of animals to discount, such that animals chose the risky option less when it was more beneficial and more when it was less advantageous (St. Onge et al., 2011). Stimulation of D2 receptors in the NAc had no effect on risky choice (Stopper et al., 2013). Although the contribution of DA transmission within the PFC and NAc to reward-related behaviors has been fairly well characterized, considerably less attention has been paid to how mesoamygdala DA transmission may facilitate these processes.

The BLA has a well-established role in the acquisition of aversive and appetitive conditioned responses, yet surprisingly only a handful of studies have examined whether DA plays an important role in modulating goal-directed behavior in the BLA. The BLA receives dopaminergic projections from the ventral tegmental area (VTA) and the substantia nigra (SN) (Brinley-Reed & McDonald, 1999). Affective and aversive stimuli cause DA levels in the BLA to increase (Coco et al., 1992; Hori et al., 1993; Harmer and Phillips, 1999; Inglis and Moghaddam, 1999), and DA is known to play a critical role in modulating the firing of both projection neurons and GABAergic inhibitory interneurons in the BLA (Kroner et al., 2005). Intra-BLA infusions of DA-releasing agents such as AMPH increase goal-directed behavior (Ledford et al., 2003), supporting a direct role for DA within the BLA in reward-seeking behavior. Additionally, reducing DA activity in the BLA via infusions of flupenthixol produced a profound reduction in lever pressing for cocaine (Di Ciano & Everitt, 2004). Further, the formation of affective memories that guide behavior is dependent on mesoamygdala DA. Co-activation of D1 and N-methyl-D-aspartate (NMDA) receptors appears to be one of the primary mechanisms through which affective memories are formed (Touzani et al., 2013), and DA receptors are co-localized with NMDA receptors on the dendritic arbors of pyramidal neurons (Pickel et al., 2006). This strongly suggests that DA is a primary modulator of the output of the BLA.
Electrophysiological studies corroborate this, showing that the facilitatory effect of DA on BLA firing is mediated by the D1 receptor (Ohshiro et al., 2011). D1 stimulation in the BLA also appears to be integral to the acquisition of instrumental learning (Andrzejewski et al., 2005), but blocking D1 receptors after a set of task-related rules had been learned did not affect performance, implying that the BLA may play a more prominent role in learning associations than in acting on previously formed ones (Andrzejewski et al., 2005). Subsequently, it has been shown that blocking D1 receptors prevents the acquisition of a reward memory, whereas blockade of D2 receptors does not (Lintas et al., 2011). Thus, DA in the BLA appears to play an active role in reward-seeking behaviors and in the formation of affective memories that guide future reward seeking. Yet how DA in the BLA affects reward seeking once animals have become familiar with the reward contingencies associated with different actions is considerably less clear.

With regard to the probabilistic discounting task, dissociable roles for the D1 and D2 receptors have been firmly established in the PFC and NAc, yet the roles of these receptors during a dynamic decision making task remain unexplored within the BLA. The present experiment examines the impact of these receptors on the choice biases of animals performing the probabilistic discounting task via intra-BLA infusions of DA-modulating drugs, and could lend important evidence to understanding the role that DA receptors play in maintaining behavioral course during a task on which animals are well trained. Understanding this circuit is important because, while the PFC conveys important cognitive information to the NAc, the BLA provides separate affective information.
Additionally, the BLA is an important node within the broader brain circuits that mediate decision making, and understanding how DA modulates behavior on the probabilistic discounting task could shed light on the role of DA receptors in other goal-directed behaviors.

II. MATERIALS AND METHODS

Animals: Male Long-Evans rats (Charles River Laboratory) weighing 225-250 g on arrival at our facility were used. Initially, they were group housed to become familiar with the colony space. After 1 week of acclimatization in the colony, animals were singly housed and food restricted to 85-90% of their free-feeding weight prior to the onset of behavioral training. Feeding occurred in the rats' home cages at the end of an experimental day, and the weight of each animal was monitored daily. All testing was in accordance with the Canadian Council on Animal Care and the Animal Care Committee of the University of British Columbia.

Apparatus: Behavioral testing was conducted in 16 operant chambers (30.5 x 24 x 21 cm; Med Associates, St Albans, VT, United States of America) enclosed in sound-attenuating boxes. The boxes were equipped with a fan that masked external noise and provided ventilation. Each chamber was fitted with 2 retractable levers, 1 located on each side of a central food receptacle where food reinforcement (45 mg; Bioserv, Frenchtown, NJ, United States of America) was delivered via a pellet dispenser. The chambers were illuminated by a single 100-mA house light located at the top-center of the wall opposite the levers. Four infrared photobeams were mounted on the sides of each chamber, and another photobeam was located in the food receptacle; locomotor activity was recorded and indexed by the number of beam breaks that occurred during a session. All experimental data were recorded by a personal computer connected to the chambers through an interface.
Lever-press training: Initial lever training used protocols established previously (Ghods-Sharifi et al., 2009; St. Onge and Floresco, 2009). On the day before their first exposure to the operant chamber, rats were given approximately 25 food reward pellets in their home cage. On the first day of training, crushed sugar pellets were placed on a lever in the operant chamber before the rats were introduced, and 2-3 sugar pellets were placed in the food receptacle. Rats were initially trained under a fixed-ratio 1 schedule to a criterion of 60 presses in 30 minutes, first for 1 lever and then for the other (counterbalanced left/right between subjects). They were then trained on a simplified version of the task in which both levers delivered a single reward pellet with 50% probability, to familiarize the animals with the probabilistic nature of the task. These 90-trial sessions began with the levers retracted and the operant chamber in darkness. Every 40 s, a trial was initiated with the illumination of the houselight and the insertion of 1 of the 2 levers into the chamber. If the rat failed to press the lever within 10 s, the lever was retracted, the houselight turned off, and the trial was counted as an omission. If the rat responded within 10 s, the lever was retracted and 1 sugar pellet was delivered with 50% probability. In every pair of trials, the left and right levers were each presented once, and the order within the pair was random. Rats were trained for ~5 d to a criterion of 80 or more successful trials (i.e., ≤10 omissions), after which they were trained on the full version of the risk discounting task, 6-7 days per week.

Side preference testing: Immediately after the last day of retractable lever training, rats that were to be trained on the discounting task were tested for their side bias, using procedures we have described elsewhere (Floresco et al., 2008; Haluk and Floresco, 2009).
This procedure was instituted because previous studies in our laboratory revealed that accounting for rats' innate side bias when designating the lever associated with the larger reward considerably reduced the number of training sessions required to observe prominent discounting by groups of rats. This session resembled pretraining, except that both levers were inserted into the chamber simultaneously. On the first trial, a food pellet was delivered after a response on either lever. Upon subsequent lever insertion, food was delivered only if the rat responded on the lever opposite to the one chosen initially. If the rat chose the same lever as its initial choice, no food was delivered and the house light was extinguished. This continued until the rat chose the lever opposite to the one chosen initially; after the rat had chosen both levers, a new trial commenced. Thus, a single trial of the side bias procedure consisted of at least one response on each lever. Rats received 7 such trials, and typically required 13–15 responses to complete side bias testing. The lever (right or left) that a rat responded on first during the initial choice of a trial was recorded and counted toward its side bias. If the total numbers of responses on the left and right levers were comparable, the lever that a rat chose first on 4 or more of the 7 trials was considered its side bias. However, if a rat made a disproportionate number of responses on one lever over the entire session (i.e., a 2:1 ratio for the total number of presses), that lever was considered its side bias. On the following day, rats commenced training on the decision making task.

Probabilistic discounting: This task was originally modified from Cardinal and Howes (2005) and has been employed by our laboratory to assess the contribution of DA transmission to risk-based decision making (St. Onge and Floresco, 2009). Rats received daily sessions consisting of 72 trials over 48 minutes.
These 72 trials were divided into 4 blocks of 18 trials, across which the probability of receiving the large reward either decreased (100%, 50%, 25%, 12.5%) or increased (12.5%→100%). A session began in darkness with both levers retracted (the intertrial state). A trial began every 40 s with the illumination of the houselight and the insertion of one or both levers into the chamber. One lever was designated the large/risky lever and the other the small/certain lever; these assignments remained consistent throughout training (counterbalanced left/right based on the side preference task). If the rat did not respond within 10 s of lever presentation, the chamber was reset to the intertrial state until the next trial (an omission). A response on either lever caused both to be retracted. Choice of the small/certain lever always delivered 1 pellet with 100% probability; choice of the large/risky lever delivered 4 pellets in a probabilistic manner that varied systematically across the session. After a response was made and food was delivered, the house light remained on for another 4 s, after which the chamber reverted to the intertrial state until the next trial. Multiple pellets were delivered 0.5 s apart.

Each of the 4 blocks began with 8 forced-choice trials in which only 1 lever was presented (4 trials for each lever, randomized), permitting animals to learn the amount of food associated with each lever press and the probability of reinforcement in effect during that block. These were followed by 10 free-choice trials in which both levers were presented, allowing the animal to choose.

The probability of obtaining 4 pellets after pressing the large/risky lever was varied systematically over the course of a daily session. Previous research in our laboratory has shown that inactivation of the prelimbic region of the medial PFC, or systemic treatment with amphetamine, induces differential effects on probabilistic discounting.
When the probability of obtaining the larger reward decreased over the session, these manipulations increased risky choice, whereas when reward probabilities were initially low and subsequently increased, they exerted the opposite effect (St. Onge and Floresco, 2010). In light of these findings, separate groups of rats were trained on variants of the task in which the probability of obtaining the large/risky reward either systematically descended (100%, 50%, 25%, and 12.5%) or ascended (12.5%, 25%, 50%, and 100%) across blocks. Thus, when the probability of obtaining the 4-pellet reward was 100% or 50%, this option was more advantageous. At 25%, it was arbitrary which lever the animal chose (the expected payoffs of the two levers were equal), and at 12.5%, the small/certain lever was the more advantageous option in the long term.

For each session and trial block, the probability of receiving the large reward was drawn from a set probability distribution. Therefore, on any given day, the probabilities experienced in each block may have varied, but averaged across many training days, the actual probability experienced by the rat approximated the set value. Previous behavioral studies from our laboratory have shown that even though different rats may experience better or worse "luck" following choice of the large/risky lever during earlier phases of training on this task, these outcomes do not substantially affect the final discounting rates exhibited once animals display stable discounting behavior at the end of training (St. Onge et al., 2011).

Training procedure, surgery and microinfusion protocol: Rats were trained on their respective task until, as a group, they demonstrated stable baseline levels of discounting for 3 consecutive days. Stability was assessed using statistical procedures similar to those described by Winstanley et al. (2004) and St. Onge and Floresco (2009).
In brief, data from three consecutive sessions were analyzed with a repeated-measures ANOVA with two within-subjects factors (day and trial block). If the effect of block was significant at the p < .05 level, but there was no main effect of day and no day × trial block interaction (at the p > .01 level), animals were judged to have achieved stable baseline levels of choice behavior. After the stability criterion was achieved, rats were given food ad libitum and 1-2 days later underwent surgery.

Rats were anesthetized with 100 mg/kg ketamine hydrochloride and 7 mg/kg xylazine and implanted with bilateral 23-gauge stainless steel guide cannulae aimed at the BLA (flat skull: anteroposterior = -3.1 mm and mediolateral = ±5.2 mm from bregma; dorsoventral = -6.5 mm from dura) using standard stereotaxic techniques. The guide cannulae were held in place with stainless steel screws and dental acrylic. Thirty-gauge obturators flush with the ends of the guide cannulae remained in place until the infusions were made. Rats were given at least 7 days to recover from surgery before testing. During this period, they were handled for at least 5 minutes each day and were food restricted to 85% of their free-feeding weight. Rats were subsequently retrained on their respective task for at least 5 days, until the group displayed stable levels of choice behavior for 3 consecutive days. One to two days before the first microinfusion test day, the obturators were removed and a mock infusion procedure was conducted: stainless steel injectors were placed in the guide cannulae for 2 minutes without infusion, so as to reduce stress on subsequent test days. The day after displaying stable discounting, the group received its first microinfusion test day.

A within-subjects design was used for all experiments, with animals receiving a saline infusion in addition to two doses of a particular drug, in counterbalanced fashion. Modulation of DA receptors in the BLA was achieved by microinfusion of one of four different drugs.
Separate groups of animals received the D1 receptor antagonist SCH 23390 (0.1 and 1.0 µg), the D2 antagonist eticlopride (0.1 and 1.0 µg), the D1 agonist SKF 81297 (0.1 and 1.0 µg), or the D2 agonist quinpirole (1 and 10 µg). All drugs were dissolved in physiological saline. Drugs or saline were infused in a volume of 0.5 µl. Infusions were administered bilaterally directly into the BLA via 30-gauge injection cannulae that protruded 0.8 mm past the ends of the guide cannulae, at a rate of 0.5 µl/75 s driven by a microsyringe pump. Injection cannulae were left in place for an additional 1 minute to allow for diffusion. Each rat remained in its home cage for an additional 10 minutes before behavioral testing.

On the first infusion test day, half of the rats in each group received saline infusions, and the other half received one of the two doses of a single drug. The next day, they received a baseline training day (no infusion). If, for any individual rat, choice of the large/risky lever deviated by >15% from its preinfusion baseline, it received an additional day of training before the second infusion test. On the following day, rats received drug or saline in a counterbalanced fashion depending on what they had received on the first test day. Following the second test day, animals again received a baseline training day, followed by the third and final infusion of drug or saline.

Reward magnitude discrimination: A priori, we determined that if a treatment induced a decrease in risky choice on the probabilistic discounting task, we would test the effect of this treatment on a reward magnitude discrimination task in a separate group of rats. This procedure involved four blocks of 12 trials (2 forced-choice, 10 free-choice). Here, a single response on one lever immediately delivered four pellets with 100% probability, whereas one press of the other lever always delivered one pellet immediately.
For each rat, the lever associated with the four-pellet reward was the animal's non-preferred lever, as determined by a side preference test following basic lever training. After 7 days of training, rats received a sequence of drug or saline infusions in the same fashion as animals performing the probabilistic discounting task.

Histology: After completion of behavioral testing, rats were euthanized in a carbon dioxide chamber. Brains were removed and fixed in a 4% formalin solution. The brains were frozen and sliced in 50 µm sections before being mounted and stained with cresyl violet. Placements were verified with reference to the neuroanatomical atlas of Paxinos and Watson (1998). Data from rats whose placements were outside the borders of the BLA were removed from the analysis. In general, animals with inaccurate placements did not display prominent changes in choice behavior following drug treatment relative to saline infusions. The locations of acceptable placements are presented in Figure 7.

Data analysis: The primary dependent measure of interest was the proportion of choices directed toward the large/risky lever for each block of free-choice trials, factoring in trial omissions. This was calculated by dividing the number of choices of the large/risky lever by the total number of successful trials for each block. For the probabilistic discounting experiments, choice data were analyzed using three-way, between/within-subjects ANOVAs, with treatment and probability block as two within-subjects factors and task variant (i.e., reward probabilities descending or ascending over blocks) as a between-subjects factor. Thus, in this analysis, the proportion of choices of the large/risky option across the 4 levels of the trial block factor was analyzed irrespective of the order in which the blocks were presented. The main effect of block was significant for all analyses and will not be reported further in the results.
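As an illustration, the primary dependent measure described above amounts to a simple ratio per block. The following sketch is not part of the thesis; the function name and trial encoding are hypothetical, and omitted trials are excluded from the denominator as described:

```python
# Illustrative sketch (not from the thesis): the proportion of large/risky
# choices in one block of free-choice trials, excluding omitted trials
# from the denominator.
def risky_choice_proportion(block_choices):
    """block_choices: list of 'risky', 'certain', or 'omit' for one block."""
    completed = [c for c in block_choices if c != "omit"]
    if not completed:
        return None  # no successful trials in this block
    return completed.count("risky") / len(completed)

# Hypothetical 10-trial free-choice block with one omission:
block = ["risky", "risky", "certain", "risky", "omit",
         "risky", "certain", "risky", "risky", "certain"]
print(risky_choice_proportion(block))  # 6 risky / 9 completed trials
```

A block with no completed trials returns None rather than a proportion, mirroring the need to handle missing cells in the choice analysis.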
Response latencies (the time elapsed between lever insertion and the subsequent press), locomotor activity (i.e., photobeam breaks), and the number of trial omissions were analyzed with one-way repeated-measures ANOVAs (the between-subjects factor of task and baseline values were not incorporated in these analyses). Supplementary analyses were conducted to further clarify whether manipulations of BLA DA transmission altered sensitivity to reward (win-stay performance) or to negative feedback (lose-shift performance) (Bari et al., 2009; Stopper and Floresco, 2011; St. Onge et al., 2011, 2012). Animals' choices during the task were analyzed according to the outcome of each preceding trial (reward or non-reward) and expressed as a ratio. The proportion of win-stay trials was calculated from the number of times a rat chose the large/risky lever after choosing the risky option on the preceding trial and obtaining the large reward (a win), divided by the total number of free-choice trials on which the rat obtained the larger reward. Conversely, lose-shift performance was calculated from the number of times a rat shifted choice to the small/certain lever after choosing the risky option on the preceding trial and not being rewarded (a loss), divided by the total number of free-choice trials resulting in a loss. This analysis was conducted over all trials across the 4 blocks. It was not possible to conduct a block-by-block analysis of these data because there were many instances where rats either did not select the large/risky lever or did not obtain the large reward at all during the latter blocks. Win-stay and lose-shift ratios observed after saline and the highest dose of each drug were analyzed separately with two-way ANOVAs, with task variant as a between-subjects factor and treatment (saline or drug dose) as a within-subjects factor.
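The win-stay and lose-shift ratios described above can be sketched as follows. This is a minimal illustration with hypothetical trial data, not code from the thesis; each trial is encoded as a (choice, rewarded) pair, and both ratios are conditioned on a preceding risky choice:

```python
# Illustrative sketch (not from the thesis): win-stay and lose-shift ratios
# from a sequence of free-choice trials, each a (choice, rewarded) pair.
def win_stay_lose_shift(trials):
    wins = stays = losses = shifts = 0
    for (prev_choice, prev_rewarded), (cur_choice, _) in zip(trials, trials[1:]):
        if prev_choice != "risky":
            continue  # both ratios are conditioned on a preceding risky choice
        if prev_rewarded:          # a "win": the large reward was obtained
            wins += 1
            stays += cur_choice == "risky"
        else:                      # a "loss": the risky choice went unrewarded
            losses += 1
            shifts += cur_choice == "certain"
    win_stay = stays / wins if wins else None
    lose_shift = shifts / losses if losses else None
    return win_stay, lose_shift

# Hypothetical trial sequence:
trials = [("risky", True), ("risky", False), ("certain", True),
          ("risky", True), ("risky", True), ("risky", False), ("risky", False)]
print(win_stay_lose_shift(trials))  # (1.0, 0.5)
```

Note that in this simplified version the final trial cannot contribute to either denominator, since there is no subsequent choice to classify; returning None when a denominator is empty reflects the blocks in which rats never obtained the large reward.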
Changes in win-stay performance were used as an index of reward sensitivity, whereas changes in lose-shift performance served as an index of negative feedback sensitivity.

III. RESULTS

Blockade of BLA D1 and D2 Receptors

D1 receptor blockade: Animals in this group were trained for an average of 23 days before implantation of cannulae into the BLA. They were then retrained until they displayed stable patterns of choice and subsequently received counterbalanced infusions of SCH 23390 and saline. The analysis included 16 animals that had acceptable placements in the BLA (Figure 7; 8 in the descending condition, 8 in the ascending condition). The analysis of the choice data showed a significant main effect of treatment (F(2,28)=7.56, p<0.01), as shown in Figure 1A. This main effect of treatment was accompanied by a treatment × task interaction (F(2,28)=6.03, p<0.01). Simple main effects analyses revealed that this interaction was driven by a peculiar difference in which dose of SCH 23390 was most effective across the two task conditions. In the ascending condition (Figure 1B, top right), the 0.1 µg dose was more effective at reducing risky choice, whereas in the descending condition (Figure 1B, bottom right) the 1.0 µg dose was more effective than the low dose. Nevertheless, under both conditions, blockade of D1 receptors in the BLA reduced risky choice.

The reduction in risky choice induced by D1 receptor antagonism was accompanied by a significant reduction in reward sensitivity. As displayed in Figure 1C (left), infusions of the high dose of SCH 23390 reduced win-stay behavior (F(1,15)=4.44, p=.05), suggesting a disruption in reward sensitivity. In contrast, no significant effect on negative feedback sensitivity, as indexed by changes in lose-shift behavior, was observed (F(1,15)=1.31, n.s.; Figure 1C, right).
This indicates that the reductions in risky choice induced by BLA D1 antagonism were driven by a reduced tendency for animals to follow a risky win with another risky choice. D1 receptor blockade in the BLA caused a slight but non-significant increase in choice latency (F(2,30)=3.01, n.s.) and had no effect on trial omissions or overall locomotor activity (all Fs<1.67, n.s.) (Table 1).

Figure 1. Effects of the D1 antagonist SCH 23390 during the probabilistic discounting task. A significant main effect of treatment was found for both drug doses (saline vs. 0.1 µg dose, saline vs. 1.0 µg dose; p<.05). Symbols represent mean and SEM. A. All animals analyzed together. B. Animals that performed the task under differing contingencies analyzed separately; the 0.1 µg dose was more effective in reducing risky choice in the ascending condition, whereas the 1.0 µg dose was more effective in the descending condition. C. A significant effect was found on the win-stay measure, whereby animals were less likely to choose the risky option again after receiving the 4-pellet large reward.
Table 1: Performance Measures

Drug Treatment        Choice Latency (SEM)   Omissions (SEM)   Locomotion (SEM)
SCH 23390
  Saline              0.97 s (0.19)          4.4 (2.5)         1,552 (205)
  0.1 µg Dose         0.92 s (0.11)          3.8 (1.9)         1,505 (193)
  1.0 µg Dose         1.26 s (0.21)          3.9 (2.1)         1,398 (153)
SKF 81297
  Saline              0.72 s (0.10)          0.4 (0.2)         1,260 (128)
  0.1 µg Dose         0.73 s (0.09)          0.4 (0.18)        1,211 (105)
  1.0 µg Dose         0.72 s (0.08)          1.2 (0.64)        1,299 (14)
Eticlopride
  Saline              0.94 s (0.13)          2.9 (1.8)         1,201 (97)
  0.1 µg Dose         0.99 s (0.15)          2.3 (0.98)        1,229 (107)
  1.0 µg Dose         0.91 s (0.13)          2.5 (1.3)         1,193 (97)
Quinpirole
  Saline              0.66 s (0.05)          0.3 (0.21)        1,091 (80)
  1.0 µg Dose         0.71 s (0.06)          0.7 (0.28)        1,022 (72)
  10.0 µg Dose        1.01** s (0.13)        2.4* (0.97)       870* (78)
Reward Magnitude Discrimination Task
SCH 23390
  Saline              0.97 s (0.16)          0.67 (0.42)       733 (142)
  1.0 µg Dose         1.32* s (0.29)         1.5 (0.96)        847 (79)

Table 1: Performance measures for the different DA manipulations and doses. Values displayed are mean and SEM. Locomotor counts were measured as number of photobeam breaks. *p<.05; **p<.01 between vehicle and treatment.

Reward Magnitude Discrimination

D1 blockade: The observation of a significant reduction in risky choice following intra-BLA infusions of a D1 antagonist led us to conduct an additional control experiment. This assay is similar to the probabilistic discounting task, except that the varying probability of reward is removed and animals are required to choose between a certain large reward and a certain small reward. Six animals with acceptable placements were included in the analysis, which found no significant difference in preference for the large reward following infusion of SCH 23390 (F(1,5)=2.49, n.s.), as shown in Figure 2.
This indicates that the reduction in risky choice seen following administration of the D1 antagonist cannot be easily attributed to a reduced preference for larger versus smaller rewards, or to other motivational or spatial discrimination deficits. When performance measures were analyzed, an effect on choice latency was revealed (F(1,5)=6.42, p=.05). Other performance measures, such as omission rate and overall locomotion, did not differ significantly from saline (all Fs<2.36, n.s.). These findings suggest that the reductions in risky choice induced by reducing BLA D1 receptor activity did not reflect a more general reduction in preference for a large reward over a small one.

Figure 2. Effects of SCH 23390 on the reward magnitude discrimination task. Markers indicate mean and SEM. Animals chose between levers associated with either a 4-pellet or a 1-pellet reward, both delivered with 100% probability. The 1.0 µg dose had no effect on animals' preference for the large reward.

D2 receptor blockade: Animals in this group were trained for an average of 23 days before implantation of cannulae into the BLA. Following recovery, they were retrained until their behavior was statistically stable before receiving counterbalanced infusions of eticlopride and saline. The analysis included 19 animals (10 in the descending condition; 9 in the ascending condition) that had acceptable placements in the BLA. In contrast to the effects of D1 receptor antagonism, analysis of these choice data showed no significant effect of treatment (F(2,34)=0.79, n.s.; Figure 3A) and no significant interactions with the treatment factor (all Fs<1.58, n.s.; Table 1). However, even though blockade of the D2 receptor did not change the overall proportion of risky choices, these manipulations did lead to more subtle changes in choice patterns with respect to how animals responded after a rewarded risky choice (Figure 3C, bottom).
Specifically, the high dose of eticlopride caused a significant reduction in reward sensitivity, as indexed by a decrease in win-stay behavior (F(1,18)=4.74, p<.05), similar to the effects of SCH 23390. In comparison, lose-shift behavior was unaffected by these treatments (F(1,18)=0.196, n.s.). No other performance measures (choice latencies, locomotion, trial omissions) were affected by intra-BLA eticlopride relative to saline treatments (all Fs<1.6, n.s.; Table 1). These results indicate that even though blocking the D2 receptor did not produce an overall change in risky choice biases, reducing activity at these receptors did change how a rewarded choice influenced subsequent decision making on a trial-by-trial basis.

Figure 3. Animals' behavior on the probabilistic discounting task following bilateral infusions of eticlopride. Symbols represent mean and SEM. A. No significant difference was found following D2 blockade. B. Animals that performed the task under differing contingencies analyzed separately; the null effect was consistent across conditions. C. A significant decrease in win-stay behavior was found following infusion of the 1.0 µg dose of eticlopride.

Stimulation of D1 and D2 Receptors

D1 receptor stimulation: Animals in the D1 receptor agonist group were trained for an average of 27 days prior to undergoing cannulation surgery. They were subsequently retrained and received counterbalanced infusions of SKF 81297. Twenty animals with acceptable placements in the BLA (10 in the descending condition; 10 in the ascending condition) were included in this analysis. In this experiment, the analysis revealed a significant task × block interaction (F(3,54)=4.48, p<.01), indicative of slightly different patterns of discounting displayed by rats trained on the two task variants across all treatment conditions (Figure 4A, top).
Specifically, rats trained on the descending variant chose the risky option more in the 12.5% block and less in the 25% block compared with those trained on the ascending version of the task (Figure 4A, bottom). However, the analysis did not reveal any interactions between the task variant and treatment factors (all Fs<1.1, n.s.), indicating that SKF 81297 induced comparable effects irrespective of whether reward probabilities decreased or increased over a session. The analysis did not yield a main effect of treatment on the overall proportion of risky choices (F(2,36)=0.197, n.s.). Analysis of the data from all three treatment conditions also did not reveal a significant treatment × block interaction (F(6,108)=1.35, n.s.). However, closer inspection of the data revealed that the 0.1 µg dose of SKF 81297 appeared to induce differential effects on choice across the different probability blocks. Specifically, this dose increased risky choice in the 50% block and decreased risky choice in the 12.5% block compared with saline, with 17 out of 20 rats displaying this alteration in choice patterns after treatment with the 0.1 µg dose relative to saline (Figure 4B). In comparison, the effects of the 1.0 µg dose were more variable, with only 14 out of 20 rats showing this effect (Figure 4C). This variability limited our ability to statistically detect a treatment × block interaction in the overall ANOVA. However, separate exploratory analyses that compared the effects of each dose of SKF 81297 to saline did reveal a significant treatment × block interaction when comparing the 0.1 µg dose to saline (F(3,54)=2.78, p<0.05), but not when comparing the 1.0 µg dose to saline (F(3,54)=1.19, n.s.). The results of these targeted analyses suggest that moderate increases in D1 stimulation optimize risky choice, biasing animals to choose the risky option more when it is more beneficial and reducing their choice of the risky lever when it is less beneficial.
There were no significant differences in win-stay (saline = 0.91 ± 0.02; SKF 0.1 µg = 0.87 ± 0.02; SKF 1.0 µg = 0.90 ± 0.02; F(1,19)=1.38, n.s.) or lose-shift behavior (saline = 0.32 ± 0.03; SKF 0.1 µg = 0.29 ± 0.04; SKF 1.0 µg = 0.34 ± 0.03; F(1,19)=0.22, n.s.). It is possible that potential effects on reward sensitivity or negative feedback sensitivity were washed out in the D1 agonist group because these data were analyzed as a whole across all free-choice trials for a given day. In the case of the D1 agonist, where risky choice increased in one block and decreased in another, increases or decreases in reward or negative feedback sensitivity in one trial block may have been cancelled out by the opposite effect in a different block. In addition, infusions of either dose of the D1 agonist into the BLA had no significant effect on other performance measures (all Fs<1.50, n.s.).

Figure 4. Effects of the D1 agonist SKF 81297 during the probabilistic discounting task. Symbols represent mean and SEM. A. Choice of the large/risky lever following saline and D1 agonist infusions into the BLA for rats trained on the descending (top) and ascending (bottom) variants of the task. No main effects of task or task × treatment interactions were observed. B. Treatment with the lower, 0.1 µg dose of SKF 81297 optimized discounting, in that rats chose the risky option more during the 50% probability block and less during the 12.5% block, when that option had lower utility. Stars denote significance in a particular trial block. C. In comparison, treatment with the 1.0 µg dose did not induce a significant effect on choice relative to saline (same curve as in B).

D2 receptor stimulation: The D2 receptor agonist group included 19 animals with acceptable placements. These animals were trained for an average of 25 days on the probabilistic discounting task before undergoing bilateral cannulation surgery of the BLA.
Following retraining, animals received counterbalanced infusions of quinpirole. There was no main effect of treatment on risky choice (F(2,34)=1.60, n.s.; Figure 5A). Additionally, there were no significant effects on win-stay or lose-shift behavior (Figure 5C, bottom). However, infusions of the 10 µg dose of quinpirole were behaviorally active, affecting other performance measures: an increased choice latency (F(2,36)=8.45, p=.001; Dunnett's, p<0.05), reduced locomotion (F(2,36)=4.96, p<.05; Dunnett's, p<0.05), and an increase in omissions (F(2,36)=5.53, p<.01; Dunnett's, p<0.05) (Table 1). These results suggest that while D2 stimulation does not significantly modulate risky choice bias, increasing D2 receptor activity in the BLA may dampen motivational processes that affect task performance.

Figure 5. Animals' behavior on the probabilistic discounting task following bilateral infusions of quinpirole (D2 agonist). Symbols represent mean and SEM. A. A non-significant decrease in risky choice was found following D2 stimulation. B. Animals that performed the task under differing contingencies analyzed separately; the null effect was consistent across conditions. C. There were no significant effects on reward sensitivity or negative feedback sensitivity.

Inaccurate placements: Animals with infusion placements that were not bilaterally within the anatomically defined borders of the BLA (Figure 8) were removed from the initial analyses. Enough animals with inaccurate placements were obtained in each drug group to permit an analysis of choice behavior for animals that received infusions in regions adjacent to the BLA. For each of these analyses, no significant effects of treatment or treatment × block interactions were observed (all Fs<1.26) (Figure 6).
While some of these placements were within the BLA in one hemisphere, the contralateral infusion site resided outside of the target area; other placements fell bilaterally outside of the target area. The lack of effect observed in these groups indicates that the significant alterations in choice observed in animals with accurate placements were likely due to alterations of dopaminergic transmission bilaterally within the BLA and not in adjacent regions.

Figure 6. Behavior of animals with placements that fell outside of the target area. Markers represent mean and SEM. No significant differences were found when cannulae fell outside of the target area. A. Animals in the SCH 23390 group. B. Animals in the eticlopride group. C. Animals in the SKF 81297 group. D. Animals in the quinpirole group.

Figure 7. Histology. Coronal sections of rat brain showing all acceptable placements for all DA manipulations.

Figure 8. Histology. Coronal sections of rat brain showing all unacceptable placements for all DA manipulations. Where a placement was within the BLA in one hemisphere, the contralateral infusion landed outside of the target area.

IV. DISCUSSION

D1 Manipulations: The present data have shown that selectively modulating the activity of DA receptors within the BLA significantly alters risky choice bias and other behavioral measures during probabilistic discounting. These effects were dissociable depending on whether D1 or D2 receptors were targeted. The results suggest that D1 receptors in the BLA play a direct role in influencing risky decision making, promoting a bias toward choosing larger, uncertain rewards. D2 receptors in the BLA appear to play a more subtle role, not directly influencing risky choice but influencing how sensitive animals are to receipt of larger rewards on a trial-by-trial basis.
These results are consistent with previous evidence (Lintas et al., 2011; Ohshiro et al., 2011; Fremeau et al., 1991) suggesting that the D1 receptor is the primary mechanism through which goal-directed behaviors are modulated within the BLA, and extend it by demonstrating a significant change in behavior on a task in which animals are well trained and the probability of receiving a large reward is dynamic.

D1 Blockade: Infusions of SCH 23390 caused an overall reduction in choice of the large/risky lever. This reduction in risky choice was consistent regardless of the direction in which the probability changed (12.5%→100% or 100%→12.5%). This indicates that endogenous D1 receptor activation biases animals toward more risky choices and maintains this pattern of choice behavior even when choosing the risky option is not advantageous. One potential interpretation of this finding is that D1 receptor activity within the BLA facilitates the ability to overcome the risk associated with the large reward and maintain a bias for the risky lever even as it becomes less profitable. Alternatively, the reduction in risky choice observed in this experiment may have been due to a reduction in the impact the larger reward had on subsequent choices. Analysis of win-stay/lose-shift data from this experiment suggests that BLA D1 receptor blockade induced the latter effect. These treatments selectively reduced win-stay behavior, a measure of reward sensitivity, indicating that when animals chose the risky option and received a large reward, they were less likely to choose the risky lever on a subsequent trial, suggesting that the large reward had less motivational impact on subsequent choice behavior. Thus, these findings imply that D1 receptor activity within the BLA serves to enhance reward sensitivity during risk/reward decision making.
It is possible that BLA D1 receptor antagonism rendered animals unable to distinguish between the different reward magnitudes or to discriminate between the two levers, and that this led to the reduction in risky choice. To control for this possibility, we employed the reward magnitude discrimination control task. The results of this experiment showed that animals could in fact differentiate between the large and small rewards following infusion of SCH 23390, continuing to prefer 4 reward pellets over 1 when both were delivered with 100% certainty. Thus, the reduced preference for the risky lever does not appear to be due to an inability to distinguish between levers or a dislike of the larger reward, but rather to an overall reduced bias toward the risky option.

A role for D1 receptors in the BLA has been established in affective behaviors: activation of the D1 receptor within the BLA is required for acquisition of instrumental conditioning (Andrzejewski et al., 2005), and blocking these receptors impairs appetitive associative learning (Touzani et al., 2009). Given the critical role the D1 receptor plays in facilitating acquisition of affective memories and goal-directed behavior, the present results expand on this by showing that D1 receptors are important not only for acquisition of conditioned behavior but also play an active role in guiding ongoing reward seeking. This idea may seem to contradict previous research reporting that D1 activation is important only for the acquisition of an affective memory and not for performance of the task once the memory is formed (Andrzejewski et al., 2005). Taken together, these results indicate that the D1 receptor is critical not only for the formation of associations, but also for biasing the direction of behavior when the reward contingencies associated with different actions can vary.
One potential mechanism through which choice bias could be influenced is the balancing of sensory and PFC inputs to the BLA, with the D1 receptor modulating these signals and thereby changing the overall output of the BLA to the NAc and other downstream regions. The reduction in risky choice induced by intra-BLA blockade of D1 receptors complements previous findings from studies of the effects of D1 blockade within the PFC and NAc (St. Onge et al., 2011; Stopper et al., 2013). In those studies, infusions of SCH 23390 also reduced overall risky choice on the probabilistic discounting task. The consistency of these results implies that D1 receptor activity in distributed cortico-limbic-striatal circuits promotes choice of options that may yield larger rewards despite their potential uncertainty. However, it is interesting to note that D1 receptor activity within the PFC and NAc appears to play a different yet complementary role in promoting risky choice compared with D1 effects within the BLA. The reduction in risky choice induced by blockade of D1 receptors in the PFC or NAc is accompanied by an increase in lose-shift behavior, making rats more sensitive to non-rewarded choices. In comparison, intra-BLA infusions of a D1 antagonist did not affect sensitivity to losses, but instead reduced the impact that larger rewards had on subsequent choices. This suggests that while D1 receptor blockade within different nodes of these cortico-limbic-striatal circuits can induce a similar overall change in risky choice, the underlying manner in which animals reduce their choice of the large/uncertain option differs. D1 receptors in the PFC and NAc aid in overcoming reward omissions to promote choice of larger rewards, whereas in the BLA, D1 receptors seem to strengthen the impact that larger rewards have on subsequent behavior. This could be due in part to differences in how the risk and the reward are perceived.
It is possible that D1 receptor activity in the BLA enhances some form of memory of the receipt of larger, uncertain rewards, thereby increasing the tendency to choose the risky option again. Alternatively, the reduction in risky choice bias induced by D1 receptor blockade in the BLA could be due to the reward having less motivational salience, though this seems unlikely given that the reward magnitude discrimination task indicated that a strong preference for the large reward remained following D1 blockade. Thus, D1 receptor activity appears to play complementary yet distinct roles in modulating risky choice within the different regions of the cortico-limbic-striatal circuits.

Disconnection of the BLA and the NAc significantly reduces risky choice during probabilistic discounting, indicating that the projection from the BLA to the NAc is essential for biasing animals toward choosing the risky lever (St. Onge et al., 2012). The present results, along with previous findings (Stopper et al., 2013), reveal that reducing D1 activity in either of these regions causes animals to choose the risky lever significantly less, indicating that D1 activation may be one of the mechanisms by which the BLA-NAc connection biases animals toward risky choice. Given that excitatory input from the BLA can drive the firing of NAc neurons (Floresco et al., 2001) and modulates DA release in the NAc (Jones et al., 2010), we can infer that changes in BLA neuronal firing have a downstream modulatory effect in the NAc, contributing to changes in behavior. Previous evidence has shown that activity at D1 receptors in the BLA can potentiate the appetitive effects of opiate rewards (Lintas et al., 2012), an effect that can be prevented by blockade of NMDA receptors in the NAc.
BLA projections to the NAc are critical for modulating cue-related motivational behavior (Stuber et al., 2013), and a neuropharmacological disconnection of the BLA and NAc via intra-BLA DA antagonism and intra-NAc glutamate blockade produced a similar effect by reducing cocaine seeking (Di Ciano & Everitt, 2004). These findings add to a growing literature implicating the BLA→NAc circuit as critical for reward seeking and other goal-directed behaviors. Therefore, it can be concluded that D1 receptors have region-specific roles in modulating different components of risky choice within the cortico-limbic-striatal circuit, and that each component of the circuit must function normally for optimal decision making.

D1 Stimulation: Stimulation of the D1 receptor produced a markedly different behavioral phenotype than infusions of the D1 antagonist. Infusions of SKF 81297 tended to optimize choice behavior, an effect most prominent following infusions of the lower, 0.1 µg dose of the D1 agonist. Thus, during the 50% block, when the large/risky option was more advantageous than the small/certain lever, SKF 81297 infusions increased risky choice. In the 12.5% probability block, when it was statistically more advantageous to choose the small/certain lever, these treatments caused a decrease in risky choice. This indicates that when the D1 receptor is stimulated, animals optimize their risky choice and perform the task better. Interestingly, the 0.1 µg (low) dose of the D1 agonist had the more profound effect on risky choice bias. This type of dose-response function induced by D1 agonists has been observed previously with administration in the PFC (St. Onge et al., 2011). In that study, a 0.1 µg dose of SKF 81297 produced an increase in risky choice that was greater in magnitude than that produced by a higher, 0.4 µg dose.
Taken together, these findings suggest that modulation of either PFC- or BLA-related cognitive functions by D1 receptor agonists may occur within a relatively narrow and biphasic dose-response range, with lower doses of these compounds being more effective at modifying behavior and optimizing decision making than higher ones. Despite the biphasic dose-response effects on risky choice induced by BLA D1 receptor stimulation, the finding that these treatments could optimize decision making is in keeping with previous studies of the effects of these treatments within the NAc (Stopper et al., 2013). In that study, infusions of a 1.0 µg dose of the D1 agonist optimized decision making in a manner similar to intra-BLA administration. The similar effect of D1 stimulation in the NAc and the BLA, together with the knowledge that projections from the BLA to the NAc influence risky choice (St. Onge et al., 2011), indicates that the optimizing effects seen in these regions could be causally related. Therefore, it appears that the D1 receptor in the BLA may refine information about the relative value of different options, which in turn can influence the direction of choice behavior via the NAc. In support of this idea, neurophysiological studies have revealed that D1 receptors can modulate neuronal activity in the BLA: D1 receptor stimulation in the BLA increases the firing rate of BLA pyramidal cells (Kroner et al., 2005), which can drive NAc neuronal firing (Floresco et al., 2001). Translated to the current study, a moderate increase in the excitability of BLA pyramidal cells following intra-BLA infusions of a low dose of SKF 81297 may have enhanced processing of information related to the relative utility of different options, allowing for the optimized decision making displayed during the probabilistic discounting task.
D2 manipulations: In contrast to the notable effects of D1 manipulations on risk/reward decision making, intra-BLA infusions of D2 receptor agonists or antagonists did not alter overall choice patterns during probabilistic discounting. Animals showed no change in risky choice when D2 receptors were blocked (Figure 3A), and although infusions of a D2 agonist caused a slight reduction in risky choice, these differences were not statistically significant versus control infusions (Figure 5A). However, even though blockade of BLA D2 receptors did not affect overall choice patterns, these treatments did induce more subtle variations in choice preferences following a rewarded risky choice. Specifically, infusions of eticlopride caused a slight yet reliable decrease in win-stay behavior, in that rats were less likely to follow a rewarded risky choice with another risky choice (Figure 3C). The fact that D1 receptor blockade induced a similar effect on reward sensitivity suggests that both types of DA receptors within the BLA may work cooperatively to modulate risk/reward decision making, with D2 receptors playing a permissive role in enhancing the impact that previously rewarded choices exert over subsequent decision biases. This is consistent with evidence that D2 receptors play a facilitatory role in modulating the neurophysiological actions of D1 receptors, thereby contributing to the overall output of the BLA (Kröner et al., 2005). There is considerable evidence that D2 receptor activity within the BLA modulates sensory inputs to the BLA (de Oliveira et al., 2011; Rosenkranz and Grace, 2001), in addition to a role in modulating cue-evoked goal-directed behavior (Berglind et al., 2006). In light of these findings, one potential explanation for the reduction in reward sensitivity following intra-BLA D2 antagonism may be that these treatments disrupted processing of sensory information related to the receipt of rewards.
This in turn may have reduced the salience of the larger reward or may have diminished the memory for the particular action-outcome association, so that when faced with another decision opportunity on the next trial (40 s later), rats were less likely to choose the risky option again. This contrasts with the results of previous D2 manipulations in the PFC, where an increase in win-stay behavior was observed (St. Onge et al., 2011). This indicates that the reciprocal connections between the BLA and the PFC may be important for influencing how animals interpret reward, and that DA within these brain regions may play a key role in modulating reward sensitivity. Infusions of a D2 agonist into the BLA had no reliable effect on risky choice behavior. However, it is notable that these treatments did induce significant effects on other performance measures, including increased choice latencies, decreased locomotion and an increase in trial omissions (Table 1). These effects were primarily observed with the higher, 10 µg dose of quinpirole. Thus, excessive activation of D2 receptors in the BLA may slow decision latencies and reduce the motivation to engage in the task. Yet, despite these effects, these treatments did not cause any significant changes in overall choice bias, highlighting dissociations between the mechanisms that mediate motivational versus decisional aspects of task performance. Furthermore, these results suggest that excessive D2 receptor activation may serve to suppress certain aspects of motivation. While activation of D2 receptors in the BLA has been shown to be important for modulating fundamental bodily processes such as respiration (Sugita et al., 2014), there have been relatively few studies investigating how these treatments may alter goal-directed behavior.
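The win-stay measure discussed above is a simple trial-to-trial conditional probability: the likelihood of choosing the risky lever again given that the previous risky choice was rewarded. A minimal sketch of how it could be computed from a session's choice and outcome records (function and variable names here are illustrative, not those of the thesis's actual analysis code):

```python
# Win-stay: P(risky choice on trial t+1 | risky choice rewarded on trial t).
# `trials` is a list of (chose_risky, rewarded) tuples; names are illustrative.
def win_stay(trials):
    wins = stays = 0
    for (chose_risky, rewarded), (next_risky, _) in zip(trials, trials[1:]):
        if chose_risky and rewarded:      # a rewarded risky choice ("win")
            wins += 1
            if next_risky:                # followed by another risky choice ("stay")
                stays += 1
    return stays / wins if wins else float("nan")

# Example session: a win followed by a stay, then a win followed by a shift.
session = [(True, True), (True, False), (True, True), (False, False)]
print(win_stay(session))  # -> 0.5
```

A reduction in this ratio after eticlopride, with overall choice proportions unchanged, is exactly the kind of subtle effect that would be invisible in block-level choice averages but visible in a trial-by-trial analysis.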
Previous research using D2 receptor antagonists has shown that D2 blockade within the BLA has no effect on conditioned place preference for opiate rewards (Lintas et al., 2011), indicating that D2 receptors may not play an active role in biasing animals towards rewarding stimuli, consistent with the results presented here. On the other hand, D2 receptor activity in the BLA may play a more prominent role in mediating aversively motivated behavior. Co-administration of a D2 antagonist blocked the anxiolytic effect produced by DA agonists alone (Bananej et al., 2012), indicating that D2 receptors may be more important for regulating reactions to aversive rather than appetitive stimuli. In addition, another study showed that blockade of D2 receptors in the BLA with sulpiride reduced conditioned freezing (Souza de Caetano et al., 2013). These findings suggest that D2 receptors in the BLA may influence behaviors that are predominantly fear-related rather than appetitive, which may explain the lack of effect of BLA D2 receptor manipulations on reward-related decision making reported here. While D1 receptors seem to have a relatively consistent role in modulating risky choice bias across cortico-limbic-striatal circuitry, the role D2 receptors play appears to be more diverse. Blockade of D2 receptors in the PFC caused a significant increase in risky choice (St. Onge et al., 2011), whereas infusions of a D2 agonist into the PFC blunted discounting, reducing risky choice when it was more advantageous (100% block) and increasing it when it was less advantageous (12.5% block). Compared to D2 manipulations in the BLA, it therefore seems that the D2 receptor plays a more direct role in modulating risky choice within the PFC, while its role in the BLA is more subtle.
The lack of effect of manipulations of D2 receptors in the BLA on probabilistic discounting parallels similar findings from studies of the NAc (Stopper et al., 2013), where intra-NAc infusions of the D2 antagonist eticlopride also failed to alter risky choice. Likewise, stimulation of NAc D2 receptors with quinpirole did not alter choice biases during probabilistic discounting. Taken together, these results suggest that, in cortical circuitry, both D1 and D2 receptors have a prominent role in modulating risky choice bias (St. Onge et al., 2011), whereas in subcortical regions such as the BLA and NAc, D1 receptors maintain a prominent role (Stopper et al., 2013) while D2 receptors play a more subsidiary one, modulating how animals respond to rewards in subtler ways. A dissociable role for D1 and D2 receptors has previously been shown in the BLA, where blockade of D1 but not D2 receptors prevented acquisition of a reward memory (Lintas et al., 2011). While DA receptor density within the BLA is sparse compared to neighboring regions such as the intercalated cell masses (ICM) and the central amygdala (CeA), both D1 and D2 receptors have been reliably shown to be present in the BLA (Pinto & Sesack, 2008; Perez de la Mora et al., 2012). D1 receptor distribution in the amygdaloid complex is heaviest in the ICM and BLA (Fremeau et al., 1991), whereas D2 receptors are densest in the ICM and CeA and only sparsely distributed in the BLA (Perez de la Mora et al., 2012). Despite these relatively low levels, both receptor subtypes reliably modulate behavior. DA receptor localization within the BLA also differs by receptor subtype (Pinto & Sesack, 2008). Evidence suggests that D1 receptors are distributed both presynaptically in axon terminals and postsynaptically on dendrites (Muly et al., 2009), whereas D2 receptors have been localized presynaptically within PFC input terminals.
Excitatory transmission onto projection neurons of the BLA via NMDA receptors appears to be highly dependent on postsynaptic DA receptor activation, particularly of the D1 receptor, though D2 receptors are implicated as well (Pickel et al., 2006; Chang and Grace, 2015). The differential localization of these receptors has led previous researchers to propose that the D1 receptor is the primary mechanism through which DA exerts modulatory control over the BLA, a hypothesis supported by the present results. However, recent electrophysiological data indicate that D1 and D2 receptors likely work in concert, determining which input drives BLA projection neurons depending on relative activity (Chang and Grace, 2015). These data showed that while D1 receptors modulate both cortical and thalamic inputs to the BLA, D2 receptors seem to bias BLA projection neurons towards responding to thalamic inputs. This is in keeping with the present results, in which D2 receptor blockade altered how animals responded to the larger rewards.

V. SUMMARY AND CONCLUSION

To summarize, the present findings reveal previously uncharacterized and dissociable effects of D1 and D2 receptors in the BLA on cost/benefit decision making involving reward uncertainty. D1 receptors within the BLA seem to play a more direct role in influencing decision making, while D2 receptors may modulate more discrete aspects of cost/benefit analysis. Although these receptors were tested and analyzed separately, it is important to note that D1 and D2 receptors have synergistic roles in the BLA, where D1 effects are enhanced by low levels of D2 activation (Kröner et al., 2005). An active role for DA in modulating this highly stable behavior suggests a function in integrating inputs to the BLA: contingency-related information from the PFC (St. Onge et al., 2012), sensory information from thalamic inputs (Chang and Grace, 2015), as well as affective information, ultimately influencing whether or not an animal will choose the risky option. Given the role of DA in addiction as well as in disorders such as schizophrenia, these results provide some insight into the role DA signaling in the BLA may play during decision making. This research could shed new light on the pathology of diseases that are typified by disrupted or disorganized amygdala activity and disturbed decision making, such as schizophrenia.

Bibliography

Andrzejewski, M. E., Spencer, R. C., & Kelley, A. E. (2005). Instrumental learning, but not performance, requires dopamine D1-receptor activation in the amygdala. Neuroscience, 135(2), 335–345.

Bananej, M., Karimi-Sori, A., Zarrindast, M. R., & Ahmadi, S. (2012). D1 and D2 dopaminergic systems in the rat basolateral amygdala are involved in anxiogenic-like effects induced by histamine. Journal of Psychopharmacology, 26(4), 564–574.

Bari, A., Eagle, D. M., Mar, A. C., Robinson, E. S. J., & Robbins, T. W. (2009). Dissociable effects of noradrenaline, dopamine, and serotonin uptake blockade on stop task performance in rats. Psychopharmacology, 205(2), 273–283.

Bechara, A., Damasio, A. R., Damasio, H., & Anderson, S. W. (1994). Insensitivity to future consequences following damage to human prefrontal cortex. Cognition, 50(1-3), 7–15.

Bechara, A., Damasio, H., Damasio, A. R., & Lee, G. P. (1999). Different contributions of the human amygdala and ventromedial prefrontal cortex to decision-making. The Journal of Neuroscience, 19(13), 5473–5481.

Berglind, W. J., Case, J. M., Parker, M. P., Fuchs, R. A., & See, R. E. (2006).
Dopamine D1 or D2 receptor antagonism within the basolateral amygdala differentially alters the acquisition of cocaine-cue associations necessary for cue-induced reinstatement of cocaine-seeking. Neuroscience, 137(2), 699–706.

Brinley-Reed, M., & McDonald, A. J. (1999). Evidence that dopaminergic axons provide a dense innervation of specific neuronal subpopulations in the rat basolateral amygdala. Brain Research, 850(1-2), 127–135.

Cardinal, R. N., & Howes, N. J. (2005). Effects of lesions of the nucleus accumbens core on choice between small certain rewards and large uncertain rewards in rats. BMC Neuroscience, 6, 37.

Chang, C., & Grace, A. A. (2015). Dopaminergic modulation of lateral amygdala neuronal activity: differential D1 and D2 receptor effects on thalamic and cortical afferent inputs. The International Journal of Neuropsychopharmacology, 1–8.

Coco, M. L., Kuhn, C. M., Ely, T. D., & Kilts, C. D. (1992). Selective activation of mesoamygdaloid dopamine neurons by conditioned stress: attenuation by diazepam. Brain Research, 590(1-2), 39–47.

De Oliveira, A. R., Reimer, A. E., de Macedo, C. E. A., de Carvalho, M. C., Silva, M. A. D. S., & Brandão, M. L. (2011). Conditioned fear is modulated by D2 receptor pathway connecting the ventral tegmental area and basolateral amygdala. Neurobiology of Learning and Memory, 95(1), 37–45.

Di Ciano, P., & Everitt, B. J. (2004). Direct interactions between the basolateral amygdala and nucleus accumbens core underlie cocaine-seeking behavior by rats. The Journal of Neuroscience, 24(32), 7167–7173.

Floresco, S. B., Blaha, C. D., Yang, C. R., & Phillips, A. G. (2001). Dopamine D1 and NMDA receptors mediate potentiation of basolateral amygdala-evoked firing of nucleus accumbens neurons. The Journal of Neuroscience, 21(16), 6370–6376.

Floresco, S. B., St Onge, J. R., Ghods-Sharifi, S., & Winstanley, C. A. (2008). Cortico-limbic-striatal circuits subserving different forms of cost-benefit decision making.
Cognitive, Affective & Behavioral Neuroscience, 8(4), 375–389.

Fremeau, R. T., Duncan, G. E., Fornaretto, M., Dearry, A., Gingrich, J. A., Breese, G. R., & Caron, M. G. (1991). Localization of D1 dopamine receptor mRNA in brain supports a role in cognitive, affective, and neuroendocrine aspects of dopaminergic neurotransmission. Proceedings of the National Academy of Sciences, 88, 3772–3776.

Ghods-Sharifi, S., St Onge, J. R., & Floresco, S. B. (2009). Fundamental contribution by the basolateral amygdala to different forms of decision making. The Journal of Neuroscience, 29(16), 5251–5259.

Grace, A. A., & Rosenkranz, J. A. (2002). Regulation of conditioned responses of basolateral amygdala neurons. Physiology & Behavior, 77(4-5), 489–493.

Haluk, D. M., & Floresco, S. B. (2009). Ventral striatal dopamine modulation of different forms of behavioral flexibility. Neuropsychopharmacology, 34(8), 2041–2052.

Harmer, C. J., & Phillips, G. D. (1999). Enhanced dopamine efflux in the amygdala by a predictive, but not a non-predictive, stimulus: facilitation by prior repeated D-amphetamine. Neuroscience, 90(1), 119–130.

Hori, K., Tanaka, J., & Nomura, M. (1993). Effects of discrimination learning on the rat amygdala dopamine release: a microdialysis study. Brain Research, 621(2), 296–300.

Inglis, F. M., & Moghaddam, B. (1999). Dopaminergic innervation of the amygdala is highly responsive to stress. Journal of Neurochemistry, 72(3), 1088–1094.

Ishikawa, A., Ambroggi, F., Nicola, S. M., & Fields, H. L. (2008). Contributions of the amygdala and medial prefrontal cortex to incentive cue responding. Neuroscience, 155(3), 573–584.

Jones, J. L., Day, J. J., Aragona, B. J., Wheeler, R. A., Wightman, R. M., & Carelli, R. M. (2010). Basolateral amygdala modulates terminal dopamine release in the nucleus accumbens and conditioned responding. Biological Psychiatry, 67(8), 737–744.

Kröner, S., Rosenkranz, J. A., Grace, A. A., & Barrionuevo, G. (2005).
Dopamine modulates excitability of basolateral amygdala neurons in vitro. Journal of Neurophysiology, 93(3), 1598–1610.

Ledford, C. C., Fuchs, R. A., & See, R. E. (2003). Potentiated reinstatement of cocaine-seeking behavior following D-amphetamine infusion into the basolateral amygdala. Neuropsychopharmacology, 28(10), 1721–1729.

Lintas, A., Chi, N., Lauzon, N. M., Bishop, S. F., Gholizadeh, S., Sun, N., Tan, H., & Laviolette, S. R. (2011). Identification of a dopamine receptor-mediated opiate reward memory switch in the basolateral amygdala-nucleus accumbens circuit. The Journal of Neuroscience, 31(31), 11172–11183.

Lintas, A., Chi, N., Lauzon, N. M., Bishop, S. F., Sun, N., Tan, H., & Laviolette, S. R. (2012). Inputs from the basolateral amygdala to the nucleus accumbens shell control opiate reward magnitude via differential dopamine D1 or D2 receptor transmission. The European Journal of Neuroscience, 35(2), 279–290.

Muly, E. C., Senyuz, M., Khan, Z. U., Guo, J.-D., Hazra, R., & Rainnie, D. G. (2009). Distribution of D1 and D5 dopamine receptors in the primate and rat basolateral amygdala. Brain Structure & Function, 213(4-5), 375–393.

Ohshiro, H., Kubota, S., & Murakoshi, T. (2011). Dopaminergic modulation of oscillatory network inhibition in the rat basolateral amygdala depends on initial activity state. Neuropharmacology, 61(4), 857–866.

Paxinos, G., & Watson, C. (1998). The rat brain in stereotaxic coordinates (4th ed.). San Diego: Academic Press.

Pickel, V. M., Colago, E. E., Mania, I., Molosh, A. I., & Rainnie, D. G. (2006). Dopamine D1 receptors co-distribute with N-methyl-D-aspartic acid type-1 subunits and modulate synaptically-evoked N-methyl-D-aspartic acid currents in rat basolateral amygdala. Neuroscience, 142(3), 671–690.

Pinto, A., & Sesack, S. R. (2008). Ultrastructural analysis of prefrontal cortical inputs to the rat amygdala: spatial relationships to presumed dopamine axons and D1 and D2 receptors. Brain Structure & Function, 213(1-2), 159–175.
Perez de la Mora, M., Gallegos-Cari, A., Crespo-Ramirez, M., Marcellino, D., Hansson, A. C., & Fuxe, K. (2012). Distribution of dopamine D2-like receptors in the rat amygdala and their role in the modulation of unconditioned fear and anxiety. Neuroscience, 201, 252–266.

Pratt, W. E., & Mizumori, S. J. (1998). Characteristics of basolateral amygdala neuronal firing on a spatial memory task involving differential reward. Behavioral Neuroscience, 112(3), 554–570.

Roesch, M. R., Calu, D. J., Esber, G. R., & Schoenbaum, G. (2010). Neural correlates of variations in event processing during learning in basolateral amygdala. The Journal of Neuroscience, 30(7), 2464–2471.

Rosenkranz, J. A., & Grace, A. A. (2001). Dopamine attenuates prefrontal cortical suppression of sensory inputs to the basolateral amygdala of rats. The Journal of Neuroscience, 21(11), 4090–4103.

Shimp, K. G., Mitchell, M. R., Beas, B. S., Bizon, J. L., & Setlow, B. (2014). Affective and cognitive mechanisms of risky decision making. Neurobiology of Learning and Memory, 117, 60–70.

St Onge, J. R., Chiu, Y. C., & Floresco, S. B. (2010). Differential effects of dopaminergic manipulations on risky choice. Psychopharmacology, 211(2), 209–221.

St Onge, J. R., & Floresco, S. B. (2009). Dopaminergic modulation of risk-based decision making. Neuropsychopharmacology, 34(3), 681–697.

St Onge, J. R., & Floresco, S. B. (2010). Prefrontal cortical contribution to risk-based decision making. Cerebral Cortex, 20(8), 1816–1828.

St Onge, J. R., Stopper, C. M., Zahm, D. S., & Floresco, S. B. (2012). Separate prefrontal-subcortical circuits mediate different components of risk-based decision making. The Journal of Neuroscience, 32(8), 2886–2899.

St Onge, J. R., Abhari, H., & Floresco, S. B. (2011). Dissociable contributions by prefrontal D1 and D2 receptors to risk-based decision making. The Journal of Neuroscience, 31(23), 8625–8633.

Stopper, C. M., & Floresco, S. B. (2011).
Contributions of the nucleus accumbens and its subregions to different aspects of risk-based decision making. Cognitive, Affective & Behavioral Neuroscience, 11(1), 97–112.

Stopper, C. M., Khayambashi, S., & Floresco, S. B. (2013). Receptor-specific modulation of risk-based decision making by nucleus accumbens dopamine. Neuropsychopharmacology, 38(5), 715–728.

Souza de Caetano, K. A., de Oliveira, A. R., & Brandão, M. L. (2013). Dopamine D2 receptors modulate the expression of contextual conditioned fear: role of the ventral tegmental area and the basolateral amygdala. Behavioural Pharmacology, 24(4), 264–274.

Stuber, G. D., Sparta, D. R., Stamatakis, A. M., van Leeuwen, W. A., Hardjoprajitno, J. E., Cho, S., Tye, K. M., Kempadoo, K. A., Zhang, F., Deisseroth, K., & Bonci, A. (2011). Excitatory transmission from the amygdala to nucleus accumbens facilitates reward seeking. Nature, 475(7356), 377–380.

Sugita, T., Kanamaru, M., Iizuka, M., Sato, K., Tsukada, S., Kawamura, M., Homma, I., & Izumizaki, M. (2014). Breathing is affected by dopamine D2-like receptors in the basolateral amygdala. Respiratory Physiology & Neurobiology, 1–5.

Touzani, K., Bodnar, R. J., & Sclafani, A. (2009). Dopamine D1-like receptor antagonism in amygdala impairs the acquisition of glucose-conditioned flavor preference in rats. The European Journal of Neuroscience, 30(2), 289–298.

Touzani, K., Bodnar, R. J., & Sclafani, A. (2013). Glucose-conditioned flavor preference learning requires co-activation of NMDA and dopamine D1-like receptors within the amygdala. Neurobiology of Learning and Memory, 106, 95–101.

Winstanley, C. A., Theobald, D. E. H., Cardinal, R. N., & Robbins, T. W. (2004). Contrasting roles of basolateral amygdala and orbitofrontal cortex in impulsive choice. The Journal of Neuroscience, 24(20), 4718–4722.

