Open Collections

UBC Theses and Dissertations


fMRI comparison of response to familiar and unfamiliar music in AD Yang, Lillian 2015


Full Text


fMRI Comparison of Response to Familiar and Unfamiliar Music in AD

by

Lillian Yang

Bachelor of Arts, The University of British Columbia, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Neuroscience)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2015

© Lillian Yang 2015

Abstract

Our study investigates whether neural activity corroborates existing behavioral evidence that familiar music is beneficial to Alzheimer's Disease (AD) patients in music therapy (MT). We hypothesized that AD patients would show different patterns of blood-oxygen-level dependent (BOLD) activity, compared to healthy elderly controls, when listening to familiar versus unfamiliar music. Ten subjects with mild to moderate AD and ten healthy elderly controls underwent functional magnetic resonance imaging (fMRI) while listening to blocks of differing auditory stimuli, interleaved with blocks of static noise. The stimuli were a) familiar music, b) unfamiliar music, c) scrambled familiar music, and d) scrambled unfamiliar music. Each subject was exposed to each stimulus category twice, in randomized order. We found that different patterns of activation emerged for AD patients versus control subjects when listening to familiar music and when listening to unfamiliar music. For familiar music, AD patients had more activated areas than control subjects. For unfamiliar music, control subjects had more activated areas than AD subjects. However, only one investigated area, the temporopolar area, was significantly differently activated between the two groups: it was significantly more active in controls than in AD subjects during unfamiliar music. In this paper, we will first discuss the known neurological substrates of music processing and then the possible functions of the areas found significantly active in response to our musical stimuli.
We begin by providing background information on music and the brain.

Preface

This thesis paper is an original, unpublished work by the author, L. Yang. The research design was created by Dr. C. Jacova, Dr. L. Boyd and Dr. R. Hsiung with support from P. Slack. Imaging analysis and reporting was conducted by E. Shahinfard. Our group consulted with K. Kirkland in choosing stimulus material. The research presented here was approved by the University of British Columbia's Clinical Research Ethics Board (Project Title: "Studies on Music Processing in Persons with Alzheimer Disease", H09-00722).

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
Dedication
Introduction
Background: Music and the Brain
  How Does Sound Become Music?
  What Makes Something "Music"
  Music Processing is Robust
  Dissociation Studies on Music Perception
  Beyond Music Perception
  Our Study is Applicable to Music Therapy
Background: Alzheimer's Disease
  Overview
  Diagnosis and Treatment
Background: fMRI
  fMRI: How it Works
  fMRI and Music Cognition
  What Have We Learned About Music Cognition Through fMRI
Tying It All Together
  Inferences from Atrophy Progression and Music Processing
Hypothesis
Methods
  Recruitment
  fMRI Setup
  Stimuli
  Study Design
  Imaging Data Analysis
Results
  Subjects
  Unfamiliar Music vs Static Noise
  Familiar Music vs Static Noise
Discussion
  Discussing the Results for Unfamiliar Music
  Discussing the Results for Familiar Music
  Discussing Familiar Music vs Unfamiliar Music Results
  Conclusion
  Issues
  Future Directions
References

List of Tables

Table 1. Subject Background
Table 2. Detailed Subject Background
Table 3. Detailed Control Background
Table 4. Unfamiliar Music vs Static Noise Results (control group)
Table 5. Unfamiliar Music vs Static Noise Results (AD group)
Table 6. Familiar Music vs Static Noise Results (control group)
Table 7. Familiar Music vs Static Noise Results (AD group)

List of Figures

Figure 1. Auditory Cortex
Figure 2. Block Design
Figure 3. Pre-processing Steps
Figure 4. Unfamiliar Music Whole Brain Results
Figure 5. Familiar Music Whole Brain Results

List of Abbreviations

A1: primary auditory cortex
AC: anterior commissure
ACC: anterior cingulate cortex
AD: Alzheimer's Disease
BA: Brodmann Area
BOLD: blood oxygenation level dependent
CBF: cerebral blood flow
CN: cochlear nuclei
dB: decibels
dHB: deoxyhemoglobin
DSM: Diagnostic and Statistical Manual of Mental Disorders
ERP: event-related potential
fMRI: functional magnetic resonance imaging
FTD: frontotemporal dementia
FTLD: frontotemporal lobar degeneration
FWHM: full width at half maximum
HG: Heschl's gyrus
HRF: hemodynamic response function
IMIA: Individualized Music Intervention for Agitation
MCI: mild cognitive impairment
MEG: magnetoencephalography
MGN: medial geniculate nucleus
MIP: maximum intensity projection
MNI: Montreal Neurological Institute
MPFC: medial prefrontal cortex
MT: music therapy
MTG: middle temporal gyrus
NINCDS-ADRDA: National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer Disease and Related Disorders Association
OfC: orbitofrontal cortex
PAC: primary auditory cortex
PC: posterior commissure
PET: positron emission tomography
PP: planum polare
PT: planum temporale
ReML: restricted maximum likelihood
TE: echo time
SMA: supplementary motor area
SNR: signal-to-noise ratio
SPM: statistical parametric mapping
STG: superior temporal gyrus
UBCH-CARD: UBC Hospital Clinic for Alzheimer Disease and Related Dementias
VBM: voxel-based morphometry

Acknowledgements

I would like to thank my first supervisor, Dr. Claudia Jacova, for giving me a chance to explore the field of Neuroscience. I would also like to thank my current supervisor, Dr. Robin Hsiung, for his continued support. Thank you to Dr. Lara Boyd for acting as Chair of my supervisory committee and thank you to Dr. Debbie Giaschi for joining my supervisory committee. A final thanks to Elham Shahinfard for performing the imaging analysis and thank you to Dr. Teresa Liu-Ambrose for agreeing to act as external examiner for my defense.

Dedication

I dedicate this thesis to my dad. For you, I will always keep trying my best.

Introduction

This paper begins by providing the background information necessary to better interpret the results of the study. The first section details what we currently know about how music activates the brain. The second section is a review of how Alzheimer's diminishes the structures of the brain, with a focus on areas relevant to music listening. Finally, the third section provides some background on our imaging technique, fMRI.

The wealth of research on music and the brain comes from healthy young adults and patients with surgical lesions. There are few neuroimaging studies on music involving patients with Alzheimer's. We therefore combine the knowledge from these sections to get a picture of the music processing abilities of patients with Alzheimer's. Thus, before we begin interpreting our data, we can understand what areas of the brain are normally used to process music and which of these areas are preserved and still able to process music in AD patients.

Background: Music and the Brain

Music is a form of expression that can range from background music in an elevator to chills-inducing solos in a sold-out stadium.
Evidence of music has been discovered in instruments dated to over 35,000 years ago (Conard et al., 2009). The term "prehistoric music" describes music from preliterate times and allows us to better appreciate the age and persistence of this activity. The presence of music throughout human existence suggests adaptive uses. Synchronization and movement to musical beats facilitate social behavior and cooperation (Grahn and McAuley, 2009). Some anthropologists believe it was an early form of communication. Whether music was a byproduct of language or a precursor to language continues to be debated (Clark et al., 2014). In humans, hearing is the most developed sense at the time of birth (Carter, 2009). Infants of only 3 months have event-related potential (ERP) responses to unexpected pitch changes that are similar to adult patterns (He et al., 2007, Trainor, 2010) and, as they age, they are able to flexibly adapt to the music of their culture (Hannon and Trainor, 2007). The processing of sound as music begins several steps after sound waves enter the ears.

How Does Sound Become Music?

When sound waves enter our ears, the mechanical disturbance of the inner ear structures is transduced into electrical signals, which are passed on to the brain. The following is a simplified description of the path the signals take to reach the brain. Each brain hemisphere receives inputs from both ears through the cochlear nuclei (CN) of the brainstem, and the signal is passed a short way up the brainstem to the superior olivary nuclei. From there, the signals reach the midbrain at the inferior colliculi and continue on through the medial geniculate nucleus (MGN) of the thalamus, then travel laterally to the temporal lobes, which include the primary, secondary, and tertiary auditory cortices (Kandel et al., 1991).

The primary auditory cortex (A1) has a tonotopic organization and processes frequencies.
It is located in the superior portion of the bilateral superior temporal gyri (STG), referred to as Heschl's gyri (HG) or transverse temporal gyri (Da Costa et al., 2011, Javad et al., 2014) (see Figure 1). The primary auditory cortex is also known as the core. It is active in response to any sound, meaningful or not, and it differs from the surrounding belt (secondary auditory cortex) and parabelt (tertiary auditory cortex) areas partly because it receives a large amount of direct input from the MGN (Pickles, 2012). The secondary auditory cortex is involved in processing melody, harmony and rhythm (Liegeois-Chauvel et al., 1998, Carter, 2009). It is more selectively active than the primary auditory cortex, responding more to meaningful sounds, such as melodic or rhythmic sounds. Its boundaries are ambiguously defined as the area around the core, but researchers often include the planum temporale (PT). The tertiary auditory cortex handles further integration and appreciation (Carter, 2009). This parabelt area wraps around the belt area, extending particularly in the lateral and posterior directions.

Up until the signal reaches the cortex, research with mice has shown that the brain processes mainly sound localization and the initiation of reflexive responses important for surviving predatory attacks (Willott et al., 1979). The auditory cortex creates our perception of what we are hearing. Sound becomes music when it is perceived as a coherent package, held together by some form of Gestalt grouping feature (any feature for which similar examples are considered part of a group), such as rhythm or melody (Levitin and Tirovolas, 2009). Within the three levels of the auditory cortex, where the features of incoming sound waves are extracted and integrated, the perception of music is achieved.

Fig. 1. General location of the core (primary auditory cortex), belt (secondary auditory cortex), and parabelt (tertiary auditory cortex).
What Makes Something "Music"

The dimensions of music are "pitch, rhythm, timbre, tempo, meter, contour, loudness, and spatial location" (Levitin and Tirovolas, 2009, p.213).

1. Pitch corresponds to the frequency at which sound waves vibrate. The faster the frequency, the higher the pitch. Each note in music is of a specific frequency. For example, the A above middle C is at a frequency of 440 Hz. Pitch can be further analyzed by pitch chroma and pitch height. Pitch chroma describes the note, for example, A or C. Pitch height describes which octave the note is in. It has been shown using fMRI that pitch height and chroma are processed in separate areas of the brain (Warren et al., 2003). Changes in pitch chroma activate the area anterior to the primary auditory cortex (A1), while changes in pitch height activate the area posterior to A1 in the secondary auditory cortex.

2. Rhythm is the pattern of notes in time. The duration of each note in the pattern is also important as part of the construction of rhythm.

3. Timbre describes what makes a note sound different, even when its pitch and loudness are the same. When a note is created from different sources, for example a piano versus a trombone, the note is comprised of what is called a "fundamental frequency," which we hear as the pitch, as well as a series of higher frequencies, which are multiples of the fundamental frequency (Pantev et al., 2003). It is the differences in these higher frequencies (also called "overtones") that distinguish sound emitted from different sources.

4. Tempo is the speed of a piece of music. It can be measured in beats per minute (BPM).

5. Meter is how music is broken down into equal pieces. Because the brain processes stimuli by grouping features, meter is imposed on musical pieces even when no meter is implied in the construction of the piece (Palmer and Krumhansl, 1990, Grahn and McAuley, 2009).

6. Contour is the shape of a melody. If you treat sheet music as a game of connect-the-dots and draw a line from one musical notation to the next, you get the contour of the song. Contour does not consider the starting point; it simply concerns the up and down movement of the notes.

7. Loudness can be described as the perceived pressure on the ears. Because it is a perceptual dimension, a single auditory stimulus can elicit different perceptions of loudness in different individuals in the same testing environment. The actual pressure on the ears is measured in decibels (dB) and is related to loudness through the equal-loudness curve.

8. Spatial location is the identification of where sounds are coming from. An auditory experience can be very different when the multiple layers of a song come from different locations, as in a live band performance or stereo recording, than when all the elements of the song come from one location, as in a mono recording. Whether a surround-sound experience versus a flattened sound experience makes a therapeutically significant difference in the brain would be interesting to investigate, as this would inform the selection of sound delivery systems during music therapy, but it is beyond the scope of this paper.

Music can persist in the disturbance, absence or perceptual absence of most of its variable dimensions. Differences in the shapes of our ears and the development of our tonotopic maps, as well as culturally ingrained expectations of musical structure, lead to subtly unique listening experiences for each person. This would explain why "world music", an umbrella term for music from around the world, sounds strange to many. Yet hearing the same song differently than other people doesn't necessarily make it sound unpleasant or even wrong, and it certainly is still perceived as music.
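Several of the dimensions above reduce to simple arithmetic on frequencies. The sketch below illustrates pitch height (octave doubling), the overtone series behind timbre, and how small a fraction of a semitone is in hertz. It is an illustration only, using standard equal-temperament conventions (A4 = 440 Hz, adjacent notes separated by a factor of 2^(1/12)); the variable names are ours and nothing here is drawn from the study itself:

```python
A4 = 440.0                 # concert pitch: the A above middle C, in Hz
SEMITONE = 2 ** (1 / 12)   # equal-temperament frequency ratio between adjacent notes

def note_frequency(semitones_from_a4):
    """Frequency of the note a given number of semitones above (or below) A4."""
    return A4 * SEMITONE ** semitones_from_a4

# Pitch height: the same pitch chroma (A) one octave up doubles in frequency.
a5 = note_frequency(12)                      # 880 Hz

# Timbre: overtones are integer multiples of the fundamental frequency.
overtones = [A4 * n for n in (1, 2, 3, 4)]   # 440, 880, 1320, 1760 Hz

# Fine pitch differences: 1/16 of a semitone near C6 (15 semitones above A4)
# amounts to only a few hertz.
c6 = note_frequency(15)                      # ~1046.5 Hz
sixteenth_semitone_hz = c6 * (SEMITONE ** (1 / 16) - 1)   # ~3.8 Hz
```

At C6 a full semitone spans roughly 62 Hz, so a sixteenth of a semitone is a difference of under 4 Hz; this conveys how fine the semitone-fraction pitch discrimination discussed in the dissociation studies below really is.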
In the case of metre and tempo, people impose perceptual accents on different beats and can perceive a different metre and tempo from one another when given the same beat pattern stimulus (Grahn and McAuley, 2009). Perception of musical timing can also be manipulated, as shown in a study where infants were bounced on either the second beat or the third beat of a rhythm pattern with no accented beats (Phillips-Silver and Trainor, 2005). Following the bounce training, infants preferred listening to the rhythm with accents added to the beats they were bounced on.

These findings show that music is perceptually malleable: its dimensions can be heard differently without detrimental effects on holistic perception. Indeed, cover artists transpose entire melodies up and down, use different instruments, manipulate rhythms, change tempos and even alter the contour of songs, which remain recognizable as the original songs. The use of music therapy for dementia assumes that, as these perceptual differences become more extreme with the degradation of music processing centers in the brain, the perception of music is maintained.

Music Processing is Robust

Fortunately, the brain is robust in its ability to process music. Music activates both hemispheres and several systems, such as the motor, visual, emotion and reward systems. Damage to the processing of specific musical features often leaves the processing of other features intact. For example, you may be unable to process rhythm but still be able to process contour. Lesion studies provide many examples of dissociations in the processing of music features.

Dissociation Studies on Music Perception

1. Metre and Rhythm

In a lesion study which matched lesion locations to musical skill deficits, a lesion of the posterior section of the STG was detrimental to rhythm processing, while a lesion of the anterior section of the STG damaged metre processing (Liegeois-Chauvel et al., 1998).
This study shows a double dissociation of metre and rhythm with no apparent lateralization of either. This dissociation was also shown in an earlier lesion study evaluating musical skills following left hemisphere versus right hemisphere damage (Peretz, 1990). "Same" or "different" forced-choice trials were used to test processing of rhythm and metre. Rhythm processing was impaired by damage to either hemisphere, without concurrent metre processing disruption.

2. Contour and Interval

A lesion study involving 65 patients with unilateral temporal cortical lesions showed that impaired use of interval information in melody discrimination tasks could occur without impaired use of contour information (Liegeois-Chauvel et al., 1998). This was the case for patients with left hemisphere lesions. Patients with right hemisphere lesions had difficulty in the melody discrimination task using contour information as well as interval information. However, it should be clarified that an intact right hemisphere is only involved in the interval computation process due to interhemispheric dependence (Baeck, 2002). Contour has already been defined earlier. Interval is the distance between two notes, which can be played either simultaneously or sequentially.

Indeed, the computation of intervals, defined as the pitch distance between neighboring notes, certainly needs the basic pitch encoding abilities of the right hemisphere. While both the left and right hemispheres have functioning tonotopic arrangements in the auditory cortex, it has been shown that the right hemisphere has finer pitch resolution than the left. In an fMRI study where subjects listened to pure tone sequences with steady increments from C6, the right PT responded with increasing blood flow as pitch distance increased (Hyde et al., 2008). The right planum temporale could detect pitch differences of 1/16th of a semitone.
In the left PT, the BOLD response remained constant and only increased when the pitches rose by increments of at least 2 semitones.

One magnetoencephalography (MEG) study showed coordinated activity between hemispheres, which the authors suggest may represent integration of interval (referred to as "local pitch pattern") and contour (referred to as "global pitch pattern") information (Patel and Balaban, 2000). Temporal synchronization between recordings of activity over the left posterior hemisphere and recordings over the rest of the brain was tighter for melodic pitch sequences than for random pitch sequences. This would suggest separate processing of contour and interval, which converge as stimuli become recognized as melodic.

3. Melody and Rhythm

A case study of a 20-year-old non-professional musician with a focal lesion in the middle and posterior third of his left STG presents a case of melody and rhythm dissociation (Piccirilli et al., 2000). The subject was revealed to have normal processing of vocal and environmental sounds, as well as recognition and production of rhythm patterns, but was impaired in both recognition and production of melodies. This study further evinces the importance of the posterior STG in melody processing, previously noted in cases of unilateral lesions of its right-side homologue (Liegeois-Chauvel et al., 1998). Due to its proximity to the speech comprehension centre, Wernicke's area, surgeons do not operate on the posterior STG, and thus this case presents a rare example of the effects of a left-side focal lesion of this area.

Combined, the two studies discussed here also suggest some rhythm processing lateralized to the right posterior STG, where damage leads to impairments in rhythm processing. However, other studies conclude with uncertainty on hemispheric lateralization of rhythm functions (Peretz, 1990).
More recently, rhythm processing lateralization was considered in more depth in a case study of a patient with left hemisphere damage to the STG, posterior middle temporal gyrus and areas of the inferior parietal lobule (Di Pietro et al., 2004). This former musician exhibited perceptive arrhythmia with no accompanying impairment of melody processing or rhythm production. It was shown that the patient could discriminate note duration, thus distinguishing the impairment from an issue of low-level processing upstream of rhythm processing. Rhythm processing lateralization to the right or left hemisphere remains inconclusive, but it is possible that the varying lateralization seen across patients has to do with lifetime changes in lateralization resulting from musical training.

Right-to-left lateralization of melody processing is a well documented effect of musical training, now understood to serve analytical and labelling functions (Bever and Chiarello, 2009). Training effects, exclusive of maturational effects, have been shown in the left hemisphere posterior STG (Ellis et al., 2012). In this area, bilaterally considered important for melody processing, training correlates positively with BOLD activity. There is now some evidence that rhythm and metre processing also become left lateralized with musical training. MEG measurements of event-related potential (ERP) magnitude during rhythm incongruency processing reported lateralization to the left hemisphere and greater response amplitude for musicians (Vuust et al., 2009). Nonmusicians showed right hemisphere lateralization.

The patient exhibiting functioning rhythm processing with a left hemisphere lesion was 20 years old and a nonprofessional musician, while the patient exhibiting damaged rhythm processing with a left hemisphere lesion was 48 years old and a professional musician (Piccirilli et al., 2000, Di Pietro et al., 2004).
Perhaps the latter patient, with far greater musical experience and formal training, relied on left-lateralized rhythm processing. This would explain the pattern of results reported for rhythm processing lateralization, but it is currently speculation.

In summary, since music can exist in the absence of some musical features and the brain is robust to the loss of music processing areas, losing the brain area for processing a certain feature of music most likely still leaves one able to perceive music. This is why music is considered a useful tool for reaching patients with continuing brain atrophy.

Beyond Music Perception

Thus far, we have only discussed the creation of our music perception. From the neuroscientific perspective, we've stayed in the auditory cortex. This is similar to discussing poetry while only talking about syllables and grammar. Music is relevant because it affects us emotionally. It also has unique ties to memory and movement.

It is often said that "music is the language of emotion". Indeed, there are sounds that convey specific emotions, just as words convey specific meaning. From 2 months old, infants prefer consonant chords over dissonant chords, showing an early-developing emotional response to music (Trainor et al., 2002). Consonant sounds are associated with pleasant emotions, and dissonant sounds make us feel uncomfortable (Fishman et al., 2001, Masataka, 2006). Within the musical context, these consonant and dissonant sounds can be used to evoke many more sensations than the simple pleasant/unpleasant sensations they create outside of the musical context (Trainor et al., 2002). Music has its own syntax and grammar, and creators follow and violate the rules to evoke predictable emotions (Koelsch et al., 2000). The brain perceives music in an organized fashion, through physiological and cultural filters.
It hears what it is tuned to hear, groups patterns of musical dimensions and relies on culturally defined holistic structuring, such as musical scales, to predict outcomes as a song unfolds.

Music may stimulate emotional responses through two very different pathways. The first is the more direct pathway of triggering emotion through the esthetics of a song. The second is triggering memories with either a strong emotional component or a powerful significance to the present day.

A. Emotion From Song Esthetics

The ability to understand the emotional language of music has been studied in frontotemporal lobar degeneration (FTLD) patients (Omar et al., 2011). This study used MRI to determine associations between gray matter volume and performance on an emotion recognition task. The authors concluded that a network (including the insula, orbitofrontal cortex, anterior cingulate, medial prefrontal cortex, amygdala, subcortical mesolimbic system, and anterior temporal as well as more posterior temporal and parietal cortices) was important in recognizing emotions in music. These areas showed significant association with music emotion recognition, using "emotion recognition from faces task score" and "emotion recognition from voices task score" as covariates to ensure the association was specific to music.

When music causes intense, pleasurable feelings, we may describe the sensation as "getting chills". As chills reliably occur in the same part of a song for each individual, they are considered a useful event to study when investigating the neural activity behind emotional responses to music (Blood and Zatorre, 2001). Using positron emission tomography (PET), research has shown that cerebral blood flow increases to the left ventral striatum, dorsomedial midbrain, bilateral insula, right orbitofrontal cortex (OfC), thalamus, anterior cingulate cortex (ACC) and supplementary motor area (SMA) as the intensity of chills increases (Blood and Zatorre, 2001).
The same study showed that chill intensity also has a negative correlation with blood flow to the bilateral amygdala, left hippocampus, ventromedial prefrontal cortex and bilateral regions of the posterior neocortex.

B. Emotion From Memories

We've seen that music we have never heard before can elicit emotions, even intense ones. Music that we are familiar with can also bring memories and attached emotions to the surface. In the discussion of memory, it is important to distinguish between episodic and semantic memory. Episodic memory is the autobiographical memory of life events, while semantic memory is the memory of facts or knowledge for which one can't recall the spatial or temporal context.

In a study on memory for music, Platel et al. define semantic music memory as memory for well-known songs for which one has no memories of any specific encounters (Platel et al., 2003). They define episodic music memory as the memory of musical pieces for which one can recall details surrounding an encounter with the piece. For example, for an episodic music memory, one would have answers to questions such as: "Who was I with when I heard it?" "What was I doing when I was listening to it?"

Emotional responses to familiar songs most often come from episodic memories of one's youth. Janata reports that around 30% of songs chosen from "the Billboard Pop and R&B Top 100 charts" from when people were 7-19 years old were rated as autobiographically significant (Janata, 2009a). Janata has suggested that the medial prefrontal cortex (MPFC) is important for emotional responses to familiar songs (Janata, 2009a). As discussed in the next section on music and memories, research has shown that the middle frontal regions are active when assessing the familiarity of songs using semantic memory.
Janata’s support for this idea comes from observations of affective responses to music in late stages of Alzheimer’s, correlated with findings that the MPFC is also relatively preserved until later stages of the disease. However, more research is needed to support or refute this suggestion.   Music and memory is another important area of study in music processing. Functional neuroimaging shows distinct neural activity for semantic and episodic musical memory retrieval. Researchers have found right hemisphere dominant activity for episodic music memory and left lateralization for semantic music memory (Groussard et al., 2009). In a study on the neural substrates of semantic and episodic music, episodic music memory elicited activation from bilateral middle frontal gyri, as well as the precuneus, and expressed right hemisphere dominance typical of memory retrieval tasks across sensory domains (Platel et al., 2003, Platel, 2005). The authors noted that activity in the planum temporale is not specific to musical memories and interpreted the activation as representing the successful retrieval of episodic memory (Platel et al., 2003). It is important to note that the study’s lab-created episodic memories were formed and retrieved in the same day and likely has significantly different neural substrates from real-life long-term episodic music memory.  The same study showed that semantic music memory activates bilateral medial frontal cortex, bilateral orbitofrontal cortex, left angular gyrus and left hemisphere anterior middle and anterior superior temporal gyri (Platel et al., 2003, Platel, 2005). 
The authors note the nonspecificity of the activations in the medial frontal cortex (which is involved in categorizing semantic data that may or may not be musical) and left angular gyrus (which may be involved in categorizing via verbal labeling), leaving the left hemisphere temporal gyri and the orbitofrontal cortex activation to be considered as specific to musical semantic memory. A follow-up study aimed to limit verbal processing of words associated with the musical stimuli (Groussard et al., 2009). The results support the distinction of the left hemisphere temporal gyri (specifically the anterior portion of the superior temporal gyri) and the inferior frontal areas as being specific to music semantic memory. The nonspecificity of most of the areas involved in processing musical semantic memory, which is probed through familiarity judgment and rating tasks, suggests that feelings of familiarity arise from a multimodal network in the brain (Groussard et al., 2010). Semantic and episodic memory are both part of explicit memory (memories that can be verbalized). Another form of memory is implicit memory (memories that cannot be verbalized). Because we see evidence of implicit musical memory in AD patients (even those with severe AD) who are able to perform pieces of music from memory (Polk and Kertesz, 1993), understanding the neural correlates of implicit musical memory is important to the study of music processing in AD. Performing a musical piece from memory is a form of implicit memory called “procedural memory”. Unfortunately, neuroscientific research on implicit musical memory is scarce. Aside from procedural memory, it is commonly agreed that the presence of the “mere exposure effect” (a preference for stimuli one has previously been exposed to) is also evidence of implicit memory. Studies on whether AD patients exhibit the mere exposure effect are inconclusive (Halpern and O'Connor, 2000, Quoniam et al., 2003).
In short, implicit musical memory remains poorly understood. Our Study is Applicable to Music Therapy  What Is Music Therapy? Music therapy is the use of music to improve a patient or client’s physical, emotional and/or mental state. As it is noninvasive, has no known negative effects, has few contraindications and can continue to be used as patients progress to severe cognitive and physical decline, it is an attractive therapeutic tool. Given music’s ability to elicit movement and physical signs of enjoyment, humans have been aware of its therapeutic uses since at least as early as the days of our hunter-gatherer ancestors (Fukui and Toyoshima, 2008). The development of music as a recognized form of therapy, administered by accredited professionals, began in the aftermath of the world wars. Volunteer musicians performed for injured veterans and it was clear that music had the effect of instantly lifting spirits. This led to systems of formal training for musicians to work with special populations (AMTA, 1998). Gerdner’s heuristic model of Individualized Music Intervention for Agitation (IMIA) suggests the use of individualized music (familiar music which the patient is likely to enjoy) to reduce the agitation symptom of Alzheimer’s (Gerdner, 1997). Several studies provide support for the use of individualized music for music therapy with dementia patients (Gerdner and Swanson, 1993, Sung and Chang, 2005, Park and Pringle Specht, 2009). While several studies have shown the merits of an individualized music intervention, few have directly compared the therapeutic effects of individualized music against those of non-individualized music. One study that did test the assumption that individualized music is more therapeutically effective than non-individualized music compared the agitation-reducing effects of individualized music versus relaxing classical music in patients with ADRD (Gerdner, 2000).
The Modified Hartsock Music Preference Questionnaire was used to find the preferred music for the 39 subjects in this pretest-posttest crossover study. The study found that the positive effect of individualized music was significantly greater than that of classical music, providing further support for IMIA. Our study may provide a neuroscientific avenue of support for music therapists’ use of familiar music in treating the AD population. Background: Alzheimer’s Disease Overview Alzheimer’s Disease is the most prevalent form of dementia. Globally, it is projected that 42.3 million people will suffer from dementia in 2020, with over half of those cases being AD (Ferri et al., 2005). The disease has a slow onset, beginning with episodic memory problems, and then progresses to general cognitive dysfunction (Dubois et al., 2007). Signs of the disease can present as cognitive (such as poor planning, judgment, problem solving and memory), behavioral (such as difficulties with familiar tasks and withdrawal from social environments), or emotional (such as negative changes in mood); a more detailed breakdown of the early signs of the disease is available through the Alzheimer’s Association publication, “Know the 10 Signs” (2014). AD typically occurs late in life, after the age of 65. While the disease appears sporadically (with the exception of early-onset familial AD), lifestyle has an impact on the likelihood of developing AD, as healthy lifestyle choices can delay onset (Ballard et al., 2011). A systematic review confirmed the benefit of exercise in risk reduction for dementia (Hamer and Chida, 2009). Health problems and poor health choices including obesity (Beydoun et al., 2008), smoking and excessive alcohol consumption (Anstey et al., 2009) have been implicated as risk-escalating factors for dementia.
Beyond lifestyle choices, risk of typical AD comes from a combination of alleles from different genes, making the search for relevant genes difficult (Bertram and Tanzi, 2008). Early-onset AD is diagnosed before the age of 65 and represents a very small percentage of AD cases (Bertram and Tanzi, 2008). Unlike typical late-onset AD, early-onset AD has known genetic causes. Three confirmed genetic mutations lead to early-onset AD. These mutations occur on chromosomes 1, 14 and 21 (Hardy, 1997, Dubois et al., 2007). On chromosome 1 is the presenilin 2 (PS2) encoding gene (Levy-Lahad et al., 1995), on chromosome 14 is the presenilin 1 (PS1) encoding gene (Sherrington et al., 1995) and on chromosome 21 is the APP gene (Goate et al., 1991). These three autosomal dominant mutations all affect the quantity of amyloid-β (Aβ) in the brain (Bertram and Tanzi, 2008). Significantly, the Aβ peptide is the main component of the Aβ plaques implicated in neuronal cell dysfunction and death in AD. Diagnosis and Treatment Presently, AD can only be confirmed through postmortem histopathology of patients with a clinical diagnosis of AD (Ferri et al., 2005). The only exception is a clinical diagnosis of AD for someone whose family line includes the rare familial form of AD (Dubois et al., 2007). Histological examination looks for two main signs of AD: amyloid plaques and neurofibrillary tangles. The dominant hypothesis explaining AD pathology is the amyloid cascade hypothesis. This hypothesis describes the errant processing of amyloid precursor protein (APP), resulting in a skewed ratio of the Aβ peptide isoforms being produced. The production ratio is skewed in favor of the longer peptide, Aβ1-42(43), which aggregates into plaques more readily than the shorter peptide, Aβ1-40 (Lee et al., 2004). Since there is currently no cure for AD, treatment is targeted at ameliorating symptoms.
Pharmacological treatment with cholinesterase inhibitors has moderate efficacy in improving mood, cognition and behavior (Ballard et al., 2011). Cholinesterase inhibitors work by blocking the breakdown of the neurotransmitter acetylcholine. A Cochrane Collaboration review of the cholinesterase inhibitor donepezil showed global improvements in clinical assessment (Birks and Harvey, 2006). However, there are unpleasant side effects including nausea and diarrhea. As discussed previously, non-pharmacological treatments, such as music therapy, are available as complementary or alternative choices that offer positive outcomes without negative physical side effects (Koger et al., 1999, Sherratt et al., 2004). The earlier the AD diagnosis can be made, the earlier treatment can begin. Brief cognitive tests are available to screen for mild cognitive impairment (MCI), a cognitive state between normal and dementia (Jacova et al., 2007). 10-15% of clinically-referred patients progress from a diagnosis of MCI to dementia in the subsequent year (Feldman and Jacova, 2005). The most commonly used brief cognitive test is the Mini Mental State Examination (MMSE) and many newer assessment tools (such as the MoCA and the DemTect) are available with similar levels of sensitivity (Jacova et al., 2007). While brief cognitive testing is useful in detecting early signs of dementia, it is not sufficient for a diagnosis of probable AD. The standard criteria for AD diagnosis in research studies come from the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer Disease and Related Disorders Association (NINCDS-ADRDA) and the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM), which is currently DSM-IV-TR (Dubois et al., 2007).
The NINCDS-ADRDA criteria require time-consuming neuropsychological testing, which assesses memory, logical reasoning and praxis, personality and behavior, visuospatial ability and language skills, as well as consideration of medical history (McKhann et al., 2011). Unfortunately, even with extensive neuropsychological testing, AD can be confused with other forms of dementia, such as frontotemporal dementia (FTD) (Varma et al., 1999, Jacova et al., 2007). Significant advances in the understanding of AD biomarkers have prompted the suggestion of updating the NINCDS-ADRDA and DSM-IV-TR diagnostic criteria to include at least one AD biomarker (Dubois et al., 2007). Various methods of neuroimaging have been able to uncover useful biomarkers, which improve diagnostic abilities in three ways: 1. reducing the number of AD cases mistaken for normal aging, 2. providing earlier diagnosis of AD and 3. improving differential diagnosis to distinguish AD from other forms of dementia (Small, 2002). One study reported that assessing whole-brain volume allowed for diagnosis of AD one year before clinical diagnosis and that assessing hippocampal volume allowed for diagnosis of AD three years before clinical diagnosis (Ridha et al., 2006). Our study is interested in using neuroimaging data for assessing treatment response, rather than for diagnosis. We are using fMRI to provide evidence-based support for treating dementia with music therapy. The following section provides background information on fMRI and how it has been used to study music cognition. Background: fMRI fMRI – How it Works Functional magnetic resonance imaging is a non-invasive method of collecting images of the brain. With fMRI, we are able to collect time-series scans depicting changes in cerebral blood flow (CBF) in response to stimuli. CBF changes are related to increased or decreased oxygen and glucose metabolism, which indicates brain activity.
Because changes in CBF are greater than changes in oxygen metabolism in areas of activity, those areas experience increased oxygenation in the supplying blood vessels. The increased oxygen binds to deoxyhemoglobin (dHb), the oxygen-free form of the oxygen-transport protein hemoglobin, converting it into oxyhemoglobin and changing its magnetic properties: dHb is paramagnetic, whereas oxyhemoglobin is diamagnetic. Thus changes in dHb concentration change signal intensity, which allows us to create functional brain images using BOLD imaging. fMRI is spatially and temporally sensitive enough to allow researchers to study focal brain responses to changing stimuli. fMRI and Music Cognition  In studying music, fMRI studies fall into two broad categories: those that study music performance and those that study music listening. For the purposes of our study, we will look at studies on music listening, including studies on music cognition. fMRI studies on music cognition can be either passive or task-based. The issues with each of these two approaches are pointed out in Schmithorst’s paper, which attempts to parse out separate modules of music processing (Schmithorst, 2005). The problem with passive music listening fMRI studies is that the various brain regions processing different components of music (such as timbre and harmony) are likely activated very closely together in time. Since we don’t know the hemodynamic characteristics of these processes a priori, we don’t know which activated regions are processing which parts of music. Task-based studies are needed to target a specific music processing area. The issue with task-based studies is that the tasks require attentional demand, which creates activity in the brain extraneous to music processing. This activity may be difficult to disentangle from the activity of interest.
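The hemodynamic blurring described above can be illustrated with a short sketch. This is not from the study: the double-gamma shape is a common SPM-style approximation of the canonical haemodynamic response function (HRF), and the shape parameters and undershoot weight below are illustrative assumptions, not fitted values.

```python
import math
import numpy as np

# Illustrative sketch: a canonical HRF approximated as the difference of two
# gamma densities. The shape parameters (6 and 16) and undershoot weight
# (0.35) are common illustrative choices, not fitted to any data.

def gamma_pdf(x, k):
    """Gamma density with shape k and unit scale."""
    return x ** (k - 1) * np.exp(-x) / math.gamma(k)

t = np.arange(0.0, 30.0, 0.1)                  # seconds after a neural event
hrf = gamma_pdf(t, 6) - 0.35 * gamma_pdf(t, 16)
hrf /= hrf.max()                               # normalize peak to 1

peak_latency = t[np.argmax(hrf)]               # peaks roughly 5-6 s after the event

# Two brief neural events only 1 s apart produce BOLD responses that overlap
# almost completely, because each response is spread over tens of seconds.
delayed = np.interp(t - 1.0, t, hrf, left=0.0)
combined = hrf + delayed
print(peak_latency)
```

The peak latency prints at about 5 s, and the summed response to the two events still looks like a single smooth bump, which is why, without knowing the hemodynamics a priori, closely timed activations are hard to attribute to specific processing stages.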
Aside from the problem of irrelevant and possibly misleading brain activity resulting from task performance, other issues have been raised by researchers concerning task-based fMRI studies on music: 1. studies often use different stimuli for different tasks and 2. some studies use trained musicians whereas other studies use non-musicians (Seung et al., 2005). Therefore, studies of music cognition through neuroimaging during musical task performance have produced inconsistent conclusions about the localization and lateralization of music processing modules. Our study uses the passive listening approach, because it is an exploratory study. The data we present represent the brain’s activity as it processes music holistically, without any heightened attention to a particular component of music. However, we consulted the literature of both passive-listening and task-based music studies. What Have We Learned About Music Cognition Through fMRI  1. Pitch In a study delineating brain regions that process pitch height (octave) versus pitch chroma (note), neuroimaging showed that the most basic sound processing area is medial HG, which is activated by any noise, including pitch (Warren et al., 2003). The same study showed that, moving outwards, lateral HG is more specific to pitch processing. Moving further outwards anteriorly, we reach the planum polare (PP), which is able to process changes in pitch chroma (note). If we move further outwards posteriorly from the HG, we reach PT, which processes changes in pitch height (octave). Note that the activity in the brain is not as orderly as the above simplification may make it seem. For example, noise does not only activate the medial HG – the study’s contrast of “broadband noise minus silence” shows extensive bilateral activation of areas within the STG.
Outside of medial HG, areas are more specific to processing certain sound features, but there is still overlapping activity in these other areas. When considering the real-world implications of damage to any of these areas, it is good to keep in mind the likelihood of overlapping functions that exist in other areas of the brain. The findings from neuroimaging have also suggested separate processing of “what” and “where” streams. Music processing appears to begin in HG and then continues anterolaterally away from HG, with some processing in the posterior direction from HG. Whether a sound has no pitch (noise), a fixed pitch (a single note) or a sequence of pitches (melody), it will activate HG and PT (Patterson et al., 2002), so this can be considered the centre before the streams split. If a sound comprises a sequence of pitches, it is more likely to be meaningful and is therefore further processed along the what and where streams. When researchers presented a fixed pitch stimulus with varying spatial sources (where), they activated posteromedial PT, and when they presented a fixed location stimulus with varying pitches (what), they activated anterolateral PT (as well as HG, PP, and STG anterior to HG) (Warren and Griffiths, 2003). More support for PT as part of the “where” stream comes from a study on brain activity during the processing of multiple auditory streams (Janata et al., 2002). In this study, the left supramarginal gyrus (an area superior and adjacent to PT) was active when participants were focusing on a single auditory stream, out of several streams. Separation of auditory streams is important in separating auditory sources. Another fMRI study found finer pitch resolution in the right versus the left hemisphere PT by analyzing BOLD activity changes corresponding to incremental pitch changes (Hyde et al., 2008). However, it is important to note that the subjects in this study had no formal training in music.
In studies with trained musicians, we consistently see a lateralization from the right to the left hemisphere for musical processes in the secondary auditory cortex (defined as the area outside of HG) (Ohnishi et al., 2001, Bever and Chiarello, 2009, Schlaug, 2015). The PT, which is part of the secondary auditory cortex, is the most notable area concerning the right-to-left lateralization of music processes. It is larger in the left hemisphere than the right hemisphere on average, but this size difference is even greater in musicians (Schlaug et al., 1995). Using cross-sectional fMRI data, partialling out age effects, music training correlated with greater activity in PT during both rhythm and melody discrimination tasks (Ellis et al., 2012). Whether musical training develops pitch resolution in the left hemisphere PT is unknown, and studies of musicians versus nonmusicians listening to incremental pitch shifts should be conducted to see whether the lateralization affects only music processing strategies (such as note-labelling) or actual music processing abilities (such as finer pitch resolution). 2. Rhythm and Beat  Neuroimaging has shown that rhythm perception involves a network including the pre-SMA, SMA, insula, premotor area (dorsal), basal ganglia, cerebellum, STG, and prefrontal cortex (ventrolateral) (Grahn and Brett, 2007). These motor areas activate bilaterally to rhythm contrasted with rest. It is important to note that, quite astoundingly, this network activates in the absence of movement. The researchers not only instructed participants not to move during stimulus presentation, but also observed no activation of the primary motor cortex, so these motor areas are responding purely to rhythm perception. Part of what makes a rhythm is the duration of the notes in the rhythm pattern. Some of the motor areas which are so crucial to rhythm perception have been shown to be involved in discriminating time durations.
This was shown in an fMRI study using a visual stimulus which appeared for different durations (Lewis and Miall, 2003). Interestingly, slightly different networks were recruited for the shorter (sub-second) duration tasks versus the longer (supra-second) duration tasks. The pre-SMA was active during both sub- and supra-second duration discrimination and the left hemisphere cerebellum was active during sub-second duration discrimination. The temporal processing capabilities of the motor areas are likely non-specific as to stimulus domain (for example, visual versus auditory), as similar findings have come from auditory temporal discrimination studies as well (Schall et al., 2003). Within this rhythm processing network are the structures which may underlie beat perception. Grahn and Brett noted an increased activation of the basal ganglia and SMA when participants listened to rhythms with a simpler (and therefore more easily and noticeably perceived) meter (Grahn and Brett, 2007). Furthermore, the use of both retrograde and anterograde tract tracers in macaque monkeys has revealed that these two structures have strong connections (Inase and Tanji, 1994), so their joint involvement in beat perception is anatomically feasible. 3. Music and Memory As mentioned earlier, neuroimaging has revealed separate brain processing of semantic versus episodic music memory (Groussard et al., 2009). Semantic music memory activates the frontal and orbitofrontal cortices, and left hemisphere middle temporal gyrus (MTG) and STG, while episodic music memory right-hemisphere-dominantly activates frontal gyri, precuneus and PT (Platel et al., 2003, Platel, 2005). Of those areas, the frontal cortex, angular gyrus and PT are nonspecific to music memory.
In a review of memory for music, with a focus on determining the extent of spared musical memory in Alzheimer’s, the importance of the temporal lobes in music memory was evident in neuroimaging studies (as well as lesion studies) (Baird and Samson, 2009). Tying It All Together Inferences from Atrophy Progression and Music Processing  Early neural changes attributable to AD include significant atrophy of the hippocampal formation as well as the temporal lobe as a whole (Killiany et al., 1993, Ridha et al., 2006), significant loss of neurons in the entorhinal cortex, which lies between the hippocampal formation and the neocortex (Gómez-Isla et al., 1996), and in the amygdala (Cuénod, 1993). The disease begins in the medial temporal lobes deep in the brain, affecting the transentorhinal then entorhinal cortices (rhinal stage), followed by the hippocampal formation, insula and STG (limbic stage), then finally the striate area of the occipital lobe (neocortical stage) (Braak et al., 2006, Henry-Feugeas, 2007). As the temporal lobes are affected early on in Alzheimer’s progression, the apparent recognition of familiar songs by patients with varying degrees of AD is not explained by the literature on musical memory, which instead suggests that such recognition would be lost, given the importance of the temporal lobes in music memory. More encouragingly, the previously cited review of music memory (Baird and Samson, 2009) supports the dissociation of implicit music memory (such as the ability to play the piano) from explicit music memory (which encompasses the semantic and episodic memory we have discussed) and reports evidence of spared implicit music memory. The neural structures involved in preserving implicit musical memories are therefore important to our discussion of intact music processing areas, but as mentioned previously, there is a lack of neuroscientific research on the topic.
As we’ve discussed in detail, many structures within the temporal lobes are important not only for explicit music memory, but also for music processing. All aspects of melodic processing, such as processing pitch height, pitch chroma, intervals and contours, involve areas within the temporal lobes. This suggests that musical sounds are, to some unknown degree, distorted when perceived by patients with AD. This is especially true at later stages, as the disease begins medially in the brain and only affects neocortical areas in the mid to late stages. Other prominently affected areas in AD are the hippocampal formation, entorhinal cortex, amygdala and insula. Thus music processes that involve these areas may be distorted or disrupted as well. Activity in the hippocampus, amygdala and insula is correlated with the intensity of musical chills (the pleasant thrill of music that causes goosebumps). A PET study revealed that insula activity has a positive correlation to chills, while bilateral amygdala and left hippocampus have a negative correlation to chills, in healthy 20-30 year old subjects (Blood and Zatorre, 2001). Thus the thrilling chill effect may be lacking in AD patients in response to music. But music processing happens outside of the temporal lobes as well. Above, we only discussed melodic processing, but rhythmic processing is equally important. The rhythm perception network does include the STG of the temporal lobes (as well as the early-compromised insula), but it also includes the pre-SMA, SMA, premotor area (dorsal), basal ganglia, cerebellum, and prefrontal cortex (ventrolateral) (Grahn and Brett, 2007). Rhythm perception has not been consistently lateralized and it is possible that it is robust to damage in one hemisphere or the other, as the rhythm perception network described by Grahn and Brett activates bilaterally.
The preservation of these rhythm processing areas/motor areas is in line with nursing home anecdotes of AD patients becoming engaged with and dancing to music. We hope to further the understanding of how the brain with AD processes music. The following section details the methods and design of our study, which investigates the activity of AD brains in response to familiar, as well as unfamiliar, music. Hypothesis  Our hypothesis is that elderly subjects with Alzheimer’s disease will show different patterns of BOLD activity, compared to healthy elderly controls, when listening to familiar versus unfamiliar music. In order to find areas of significant positive or negative activation in response to unfamiliar and familiar music, we ran several contrasts. We looked at the data of pooled subjects (AD + control), then each subject group separately (AD > 0; control > 0), then compared the subject groups (AD > control; control > AD). In order to test our hypothesis, we ran the following contrasts for unfamiliar music: a. unfamiliar music > static noise (pooled AD + controls) This contrast is designed to show areas with significantly more activation in response to unfamiliar music than to static noise. We expect to see many areas become activated across the brain, as music should stimulate much more complex processing than unorganized noise, possibly stimulating not only auditory processing areas but also areas for memory, movement, emotion and visualization. b. unfamiliar music > static noise (AD > 0) This contrast looks at activation for unfamiliar music that is significantly greater than the activation for static noise in patients with dementia. c. unfamiliar music > static noise (controls > 0) This contrast looks at activation for unfamiliar music that is significantly greater than the activation for static noise in controls. d.
unfamiliar music > static noise (AD > controls) This contrast looks at activation (beyond the response for static noise) for unfamiliar music that is significantly greater in dementia patients than in controls. e. unfamiliar music > static noise (controls > AD) This contrast looks at activation for unfamiliar music that is significantly greater in controls than in dementia patients. f. unfamiliar music < static noise (pooled AD + controls) This contrast is designed to show areas with significantly more decreased activation in response to unfamiliar music than to static noise. g. unfamiliar music < static noise (AD > 0) This contrast looks at areas with significantly more decreased activation in response to unfamiliar music than to static noise in dementia patients. h. unfamiliar music < static noise (controls > 0) This contrast looks at areas with significantly more decreased activation in response to unfamiliar music than to static noise in controls. i. unfamiliar music < static noise (AD > controls) This contrast looks at activation (beyond the response for static noise) for unfamiliar music that is significantly more decreased in dementia patients than in controls. j. unfamiliar music < static noise (controls > AD) This contrast looks at activation (beyond the response for static noise) for unfamiliar music that is significantly more decreased in controls than in dementia patients. And we ran the following contrasts for familiar music: a. familiar music > static noise (pooled AD + controls) This contrast is designed to show areas with significantly more activation in response to familiar music than to static noise. We expect to see many areas become activated across the brain, as music should stimulate much more complex processing than unorganized noise, possibly stimulating not only auditory processing areas but also areas for memory, movement, emotion and visualization. b.
familiar music > static noise (AD > 0) This contrast looks at activation for familiar music that is significantly greater than the activation for static noise in patients with dementia. c. familiar music > static noise (controls > 0) This contrast looks at activation for familiar music that is significantly greater than the activation for static noise in controls. d. familiar music > static noise (AD > controls) This contrast looks at activation (beyond the response for static noise) for familiar music that is significantly greater in dementia patients than in controls. e. familiar music > static noise (controls > AD) This contrast looks at activation for familiar music that is significantly greater in controls than in dementia patients. f. familiar music < static noise (pooled AD + controls) This contrast is designed to show areas with significantly more decreased activation in response to familiar music than to static noise. g. familiar music < static noise (AD > 0) This contrast looks at areas with significantly more decreased activation in response to familiar music than to static noise in dementia patients. h. familiar music < static noise (controls > 0) This contrast looks at areas with significantly more decreased activation in response to familiar music than to static noise in controls. i. familiar music < static noise (AD > controls) This contrast looks at activation (beyond the response for static noise) for familiar music that is significantly more decreased in dementia patients than in controls. j. familiar music < static noise (controls > AD) This contrast looks at activation (beyond the response for static noise) for familiar music that is significantly more decreased in controls than in dementia patients. Methods Recruitment  Our subjects were recruited after approval through an institutional ethics board.
Subjects were recruited through the UBC Hospital Clinic for Alzheimer Disease and Related Dementias (UBCH-CARD) and the Purdy Pavilion’s extended care units by a study coordinator. Alzheimer subjects fulfilled the diagnostic criteria of probable AD using NINCDS-ADRDA criteria (McKhann et al., 1984). Patients not suitable for the MRI environment were excluded, for example, patients with cardiac pacemakers, cochlear implants or claustrophobia. fMRI Setup Functional magnetic resonance imaging was conducted in the UBC Hospital Purdy Pavilion, using a 3 Tesla Philips scanner. Images were acquired using SofTone smooth gradients. Single-shot, blipped gradient-echo echo-planar pulse sequences (TR=2.5 s, 90° flip angle, FOV=240 mm, 80×80 matrix, 3x3mm voxel size, 3mm thick slices, 36 slices) were acquired. Prior to functional imaging, high-resolution 3D spoiled gradient recalled at steady-state T1 anatomic images were collected for anatomic localization and co-registration. T1 parameters were SENSE-Head-8 coil selection, 1mm voxel size, 3D scan mode, TFE fast imaging mode, shortest echo time (TE), and field of view parameters of 256mm anterior-posterior, 200mm right-left and 170mm foot-head. Functional images were collected from bottom to top, interleaved, with a repetition time (TR) of 2.5s. 108 scans were collected for each condition in our block design. Total scan time was 18 minutes for each subject – the sum of four conditions, each taking 4.5 minutes to scan. The short session times were necessary for our elderly subjects, to avoid problems associated with fatigue and restlessness. The high-resolution, steady-state T1 anatomical images were used in concert with the functional images to provide more anatomical data for statistical analysis; specifically, they were used for the manual reorientation and coregistration steps of pre-processing.
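The session arithmetic above can be verified with a short calculation (a sketch; the variable names are ours, the values come from the acquisition protocol):

```python
# Check the scan-time arithmetic stated above: 108 volumes per condition at
# TR = 2.5 s gives 4.5 minutes per condition, and four conditions give 18
# minutes of functional scanning per subject.

TR_SECONDS = 2.5
VOLUMES_PER_CONDITION = 108
N_CONDITIONS = 4

minutes_per_condition = TR_SECONDS * VOLUMES_PER_CONDITION / 60
total_minutes = minutes_per_condition * N_CONDITIONS

print(minutes_per_condition, total_minutes)  # 4.5 18.0
```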
Stimuli

There were 4 categories of stimuli: 1) Familiar Music 2) Unfamiliar Music 3) Familiar Scrambled 4) Unfamiliar Scrambled. Familiar Music was the Blue Danube by Strauss and Eine Kleine Nachtmusik K. 525 Serenade in G by Mozart; Unfamiliar Music was Einzugsmarsch by Johann Strauss and the Violin Concerto in D K. 211 by Mozart. Familiar Scrambled was created by scrambling the excerpts used in Familiar Music and equalizing the amplitudes: the original excerpts were first broken into 300-500 ms segments, and these segments were then randomly reordered and appended to one another. Unfamiliar Scrambled was created in the same manner from the Unfamiliar Music excerpts. The familiar music was chosen after consulting a music therapist, who informed us that these pieces would be relevant and very familiar to people in our subjects' age range. The unfamiliar music was by the same composers as the familiar pieces, but has rarely been performed in public.

Study Design

Subjects were briefed on what to expect during the imaging procedure. They were then presented with the stimuli via headphones while lying supine in the fMRI machine. To prevent blurring of the brain images, subjects' heads were secured with a chin strap. We also instructed subjects to keep their eyes focused on a crosshair, keeping their eyes open and thus keeping them awake. After the scanning session, subjects were asked to rate the musical pieces on familiarity, using a Likert scale with 1 = very familiar, 2 = somewhat familiar, 3 = unsure, 4 = somewhat unfamiliar, and 5 = very unfamiliar.

Our stimuli were presented in a block design (see Fig. 2). After 7.5 seconds of silence, 37.5 seconds of static noise was followed by 75 seconds of a stimulus, then 37.5 seconds of static noise, then 75 seconds of the second version of the stimulus category, then a final 37.5 seconds of static noise. Thus subjects were exposed to each stimulus category twice, in randomized order.

Fig.
2 Shows the block design used for the study.

Imaging Data Analysis

The Statistical Parametric Mapping 8 (SPM8) software was run in the Matlab environment to preprocess the brain images and to run statistical analyses on the data. Preprocessing brain images acquired from fMRI scanners is a necessary step prior to statistical analysis, because functional images are taken over time and subject movements cause artefacts. As well, subject brains must be mapped onto a standardized brain image for the purpose of between-subjects comparison.

A. Preprocessing

The Philips scanner output DICOM images in the form of PAR and REC files, which we converted to 3D NIfTI images using MRIcron's dcm2nii. We pre-processed these images with reorientation, realignment, coregistration, segmentation, then smoothing (see Figure 3).

1. Reorientation

We manually reoriented each subject's anatomical image so that the anterior commissure (AC) was set as the origin (coordinate 0, 0, 0) and the posterior commissure (PC) was horizontally in line with the AC. This is the orientation of brain images in the standard Talairach template. Manual reorientation was important for our analysis because of the irregularity in the shapes of brains atrophied by age and Alzheimer's, although it introduces human error.

2. Realignment

With our time-series of reoriented images, we then ran realignment. This process uses rigid body transformations (i.e. translational and rotational shifts only) to remove movement artefacts. In setting the realignment parameters, we selected the first image in each time-series as a reference to which all subsequent scans were realigned. Our parameter settings favored quality over speed. We raised the "quality" parameter from the default 0.9 to the maximum 1.0. For the "separation" parameter, we chose 2 instead of the default 4.
This means that the distance between points sampled from the reference image was reduced to 2 mm. The "interpolation" parameter determines the number of neighboring voxels used during the realignment transformation. We used the higher quality 6th degree B-spline, rather than the much faster but lower quality 2nd degree B-spline. The realignment step produces plots of the translations and rotations undergone during the step. We analyzed these plots to uncover any subjects with excessive movement who would need to be excluded from analysis. We defined excessive movement as translations greater than 6 mm or rotations greater than 6 degrees (double the size of our voxels). No subjects were excluded, as no movements exceeded our threshold.

3. Coregistration

For each subject, we used coregistration to estimate the transformations needed to align the mean realigned functional data from the previous step to the higher resolution anatomical image, which we had reoriented in the first step. These estimated alignment transformations were then applied to all the functional images.

4. Segmentation

After realigning all the functional images and the anatomical image to the same orientation, we were then ready to put the functional images into a standard space to better facilitate comparisons across subjects. This was done by matching the gray matter in these images to a gray matter reference. Segmentation is used as an alternative to normalization, so that the confounding effects of the structural variability of non-brain areas (such as the scalp) are reduced. First, we fit the anatomical scan onto the T1 Montreal Neurological Institute (MNI) template; we then used these warping parameters to non-linearly transform the coregistered functional images.

5.
Smoothing

The final step was to smooth the functional images, reducing the anatomical differences between the subjects' functional scans and increasing the signal-to-noise ratio (SNR). Given the signal loss from areas with atrophy, the increased SNR was a necessary tradeoff for the loss of resolution. We used the default value of 8 × 8 × 8 mm full width at half maximum (FWHM) for the Gaussian smoothing kernel.

Fig. 3 Shows the image pre-processing steps used prior to first and second level analyses.

B. Statistical Analysis

Following the preprocessing steps, the data are ready for statistical analysis of the evoked hemodynamic response. Statistical analysis creates a model of the expected BOLD signal during the experiment using the general linear model, then estimates and tests the fit of this model in every voxel in the brain. The statistical analysis is done in two levels:

1. First Level Analysis

The objective of the first level analysis is to analyze the presence of the defined activation pattern in each subject. The design and the contrasts at the individual subject level are specified in this step. All the information about the different music conditions, and the onsets and durations of the music and static periods for a single subject scan, are entered here. A boxcar block design with a canonical hemodynamic response function (HRF) basis function is used for this step; the basis function models the hemodynamic response.

After the model is specified, it must be "estimated". Model parameters are estimated using the classical (ReML - Restricted Maximum Likelihood) algorithm. This assumes the error correlation structure is the same at each voxel. ReML estimation is applied to the spatially smoothed functional images. After estimation, specific profiles of parameters are tested using a linear compound or contrast with the T or F statistic. The resulting statistical map constitutes an SPM.
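The boxcar-plus-canonical-HRF model used at the first level can be sketched in a few lines. This is a simplified illustration, not the SPM implementation: the double-gamma HRF parameters below are the commonly used defaults, and the block onsets are derived from the timing described in the Study Design (in scans of 2.5 s: 3 scans silence, 15 scans static, 30 scans music, 15 static, 30 music, 15 static).

```python
import numpy as np
from scipy.stats import gamma

TR = 2.5  # repetition time in seconds, as in our protocol

def canonical_hrf(tr, duration=32.0):
    """Simplified double-gamma canonical HRF sampled at the TR.
    Peak near 6 s, undershoot near 16 s; standard illustrative
    parameters, not the exact SPM basis function."""
    t = np.arange(0.0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

# Boxcar regressor for one condition: 1 during the two 75 s music
# blocks (30 scans each), 0 during silence and static noise.
n_scans = 108
boxcar = np.zeros(n_scans)
boxcar[18:48] = 1.0   # first music block (scans 18-47)
boxcar[63:93] = 1.0   # second music block (scans 63-92)

# The model regressor is the boxcar convolved with the HRF,
# truncated to the length of the scan time series.
regressor = np.convolve(boxcar, canonical_hrf(TR))[:n_scans]
```

The GLM then fits this regressor (plus confounds) to each voxel's time series; the contrast weights described above test combinations of the fitted coefficients.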
The SPM{T/F} is then characterised in terms of focal or regional differences by assuming that (under the null hypothesis) the components of the SPM (i.e. residual fields) behave as smooth, stationary Gaussian fields.

2. Second Level Analysis

The objective of the second level or random effects analysis is to make a population inference; it allows one to perform a group comparison. It takes the statistical summary from the first level analysis and uses the contrast image from each subject as a measure of subject response. Group analysis is thus done using these images as the new dependent variable in the GLM. For example, a simple t-test on contrast images from the first level is a random-effects analysis with random subject effects, inferring for the population based on a particular sample of subjects.

This step involves defining a design matrix describing the general linear model, data specification, and setting other parameters necessary for the statistical analysis, including any covariates. These parameters are then used to estimate the design. Inference on the estimated parameters is then handled by defining contrasts.

Since we have independent measurements with unequal variance from the first level analysis and are interested in between-subjects comparisons, we selected two-sample t-tests for our second level design.

Following second level analysis, we defined a number of T-contrasts to look at between-group activations. This was done using the Maximum Intensity Projection (MIP) of the statistical map in the SPM graphics window and the information from the SPM statistical tables.

After observing whole brain images, we applied masks to our areas of interest.
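The unequal-variance two-sample comparison described above can be illustrated outside of SPM. The sketch below uses synthetic contrast values (the group means and spreads are invented for illustration, not our data) and Welch's form of the t-test, which does not assume equal variances:

```python
# Illustration of the second-level group comparison: for a given voxel
# or masked region, each subject contributes one first-level contrast
# value, and the two groups are compared with an unequal-variance
# (Welch) two-sample t-test. Synthetic data for illustration only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
ad_contrast = rng.normal(0.6, 0.3, size=10)        # 10 AD subjects (invented values)
control_contrast = rng.normal(0.1, 0.2, size=10)   # 10 controls (invented values)

# equal_var=False requests the unequal-variance (Welch) form.
t_stat, p_value = ttest_ind(ad_contrast, control_contrast, equal_var=False)
```

In practice this test is computed at every voxel, which is why the multiple-comparisons corrections discussed in the Limitations section become necessary.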
These masks were selected by consulting related papers, including "The Neural Architecture of Music-Evoked Autobiographical Memories" (Janata, 2009b) and "The Emotional Power of Music: How Music Enhances the Feeling of Affective Pictures" (Baumgartner et al., 2006). These areas were left and right hemisphere amygdala, left and right hemisphere angular gyrus, left and right hemisphere cerebellum, cerebellum anterior lobe, cerebellum posterior lobe, hippocampus, left and right hemisphere inferior frontal gyrus, left and right hemisphere insula, left and right hemisphere MFG, MTG, parahippocampal gyrus, left and right hemisphere parahippocampal area, left and right hemisphere parietal lobe, left and right hemisphere STG, left and right hemisphere superior parietal lobe, left and right hemisphere SMA, BA06 (which includes premotor cortex and supplementary motor cortex), BA07 (parietal cortex), BA08 (frontal eye field), BA09 (dorsolateral prefrontal cortex), BA10 (anterior prefrontal cortex), BA17 (primary visual cortex), BA18 (secondary visual cortex), BA21 (MTG), BA22 (within STG), BA23 (ventral posterior cingulate cortex), BA24 (ventral anterior cingulate cortex), BA25 (ventromedial prefrontal cortex), BA29 (retrosplenial cingulate cortex), BA31 (dorsal posterior cingulate cortex), BA32 (dorsal anterior cingulate cortex), BA33 (part of anterior cingulate cortex), BA38 (temporopolar area), BA39 (angular gyrus), BA41 (auditory cortex), BA42 (auditory cortex), BA45 (pars triangularis), BA46 (dorsolateral prefrontal cortex) and BA47 (pars orbitalis).

Results

Subjects

In total, there were 10 subjects diagnosed with Alzheimer Disease and 10 healthy, age-matched controls (see Table 1).

1. AD Subjects

The diagnoses of Alzheimer's Disease were made according to NINCDS-ADRDA criteria. Ages ranged from 63 to 85, with a mean age of 73.1. MMSE scores ranged from 9 to 26, with a mean of 20 and a mode of 26. Four were male and six were female.
Musical backgrounds were varied. All subjects fell within the range of mild to moderate AD (see Table 2).

2. Control Subjects

Ages ranged from 60 to 81, with a mean age of 69.8. Two were male and eight were female. Musical backgrounds were varied (see Table 3).

Table 1 Shows subject background information for both the AD and control groups.

Table 2 Shows detailed subject background information for AD subjects.

Table 3 Shows detailed subject background information for control subjects.

Unfamiliar Music vs Static Noise (See Figure 4)

(pooled AD + controls) As expected, we found that the brain recruits many more areas to process unfamiliar music than to process static noise.

(controls > 0) In controls, the unfamiliar music > static noise contrast revealed right hemisphere activity in the amygdala, temporopolar area and parahippocampal gyrus. Bilateral activity was found in the cerebellum and insula, as well as bilateral STG. Within STG, BA22 and BA42 were both significantly active only in the right hemisphere. (See Table 4.)

(AD > 0) In dementia patients, the amygdala of the left hemisphere had significant positive activity and the angular gyrus (BA39) of the right hemisphere displayed significant negative activity. Bilateral activation of STG included significant positive activity in right hemisphere BA22 and left hemisphere BA41/BA42. (See Table 5.)

(control > AD) The temporopolar area exhibited significantly more activation in controls than in AD patients.

(AD > control) The temporopolar area exhibited significantly less activation in AD patients than in controls.

Overlapping activity (controls > 0) and (AD > 0) Between the two groups, overlap in activity in response to unfamiliar music was found only within the STG, where there was bilateral activity in both subject groups. Both dementia patients and controls exhibited right hemisphere activation of BA22.
BA42 was active in both dementia patients and controls, but in the left and right hemispheres respectively; the amygdala was likewise active in both groups, in the left hemisphere for dementia patients and the right hemisphere for controls. These results were uncovered using masks. The only surviving voxels in whole brain analysis were in BA22 (p = 0.026 cluster level) for pooled subjects.

Familiar Music vs Static Noise (See Figure 5)

(pooled AD + controls) As expected, we found that the brain recruits many more areas to process familiar music than to process static noise.

(controls > 0) Familiar music elicited activity in the left hemisphere parahippocampal area, the parietal lobe and left hemisphere STG (including BA22) of healthy elderly subjects. These data suggest that minimal processing power was needed for healthy elderly subjects to listen to familiar music. In dementia patients, however, familiar music elicited activity in many areas of the brain, more than seen in the control subjects. (See Table 6.)

(AD > 0) IFG and STG were bilaterally activated; SMA and BA06 were active; BA21, BA22, BA38 and BA41 were active in the left hemisphere; and BA45 and BA42 were active in the right hemisphere. Left hemisphere BA31 was deactivated. (See Table 7.)

(control > AD) This contrast revealed no significant results.

(AD > control) This contrast revealed no significant results.

Overlapping activity for (controls > 0) and (AD > 0) Between the two groups, overlap in activity in response to familiar music was found only within left hemisphere STG, including BA22. This is similar to what we found in the Unfamiliar Music vs Static Noise contrasts. That both subject groups displayed significant activation in this area is to be expected, as it is the auditory cortex and is necessary for sound processing. In the AD subjects, STG activation was bilateral, with BA22 specifically being active in the left hemisphere only. These results were uncovered using masks.
The only surviving voxels in whole brain analysis were in BA22 (p = 0.016 cluster level) for pooled subjects.

Fig. 4 Shows whole brain uncorrected images. Dark areas indicate activity in response to unfamiliar music (unfamiliar music > static noise). A) Pooled subjects B) AD subjects C) control subjects D) AD vs. control (AD > control) E) control vs AD (control > AD).

Table 4 Shows p-values (p ≤ 0.05) for the control group. Activation in blue. Set level values reported when available. Cluster level values indicated by (*). c = clusters per set; kE = voxels per cluster

Table 5 Shows p-values (p ≤ 0.05) for the AD group. Activation in blue and deactivation in red. Set level values reported when available. Cluster level values indicated by (*). c = clusters per set; kE = voxels per cluster

Fig. 5 Shows whole brain uncorrected images. Dark areas indicate activity in response to familiar music (familiar music > static noise). A) Pooled subjects B) AD subjects C) control subjects D) AD vs. control (AD > control) E) control vs AD (control > AD).

Table 6 Shows p-values (p ≤ 0.05) for the control group. Activation in blue. Set level values reported when available. Cluster level values indicated by (*). c = clusters per set; kE = voxels per cluster

Table 7 Shows p-values (p ≤ 0.05) for the AD group. Activation in blue and deactivation in red. Set level values reported when available. Cluster level values indicated by (*). c = clusters per set; kE = voxels per cluster

Discussion

This was an exploratory study investigating the neural responses to music in AD patients. We approached this topic by presenting both familiar music and unfamiliar music (from the same style and era) as stimuli during brain imaging. We hypothesized that elderly subjects with Alzheimer's disease would show different patterns of brain activity, compared to healthy controls, when listening to familiar music and unfamiliar music.
We found different patterns of activity when comparing how controls and AD patients responded to these stimuli. In controls, we found a greater number of active areas in response to unfamiliar music than to familiar music. In the AD patients, we found a greater number of active areas in response to familiar music than to unfamiliar music: the opposite of the pattern in controls. It is important to specify that the differences found between AD and control music processing were in the patterns of activation seen in each group separately, for the contrasts (AD > 0) and (control > 0). When comparing the level of activation in the 50 mask regions for the between-groups contrasts, (control > AD) and (AD > control), the only significant difference was found in the temporopolar area: control subjects showed significantly greater activation than AD subjects in the temporopolar area during unfamiliar music. The temporopolar area (BA38) is a site with multisensory and limbic input, possibly a link between the sensory input of sound and the limbic input of emotion (Moran et al., 1987). As a multisensory area, it is not specific to music processing.

Discussing the Results for Unfamiliar Music

Unfamiliar music elicits more brain activity than random noise in both patients and controls. In the healthy adults, this activity was seen in many areas of the brain. In the dementia patients, the activity was mostly limited to the temporal lobes, but this still shows the recruitment of additional auditory processing in response to music, beyond the processing elicited by non-musical noise.

Compared to dementia patients, controls had a greater number of significant activation areas across the brain when listening to unfamiliar music.
This included positive activity in the cerebellum, temporopolar area, insula and right hemisphere parahippocampal gyrus.

The cerebellum is important in both the physical and perceptual timing of discrete events (rather than continuous events) and is able to discriminate millisecond durations (Ivry et al., 2002, Buhusi and Meck, 2005). Event timing and duration discrimination are important in both music production and perception.

The temporopolar area (BA38) is a site with multisensory as well as limbic input (Moran et al., 1987), potentially creating a connection between music and emotion. As it is affected early in AD progression (Thangavel et al., 2009), lesions could have explained why the area was active in controls but not in AD subjects. However, we know that BA38 was still somewhat functional in our AD subjects, because familiar music was able to elicit significant temporopolar activation (p = 0.047). The data may therefore indicate that the AD subjects simply did not form an emotional connection to the unfamiliar music, whereas they were able to form one with familiar music.

The insula is bilaterally activated during pleasurable musical experiences (Blood and Zatorre, 2001). Control subjects' neural activity in both BA38 and the insula may suggest a preference for novel musical stimuli.

The parahippocampal gyrus surrounds the hippocampus and plays a role in memory encoding and retrieval. It also has projections to and from the amygdala and is involved in processing emotional valence (Blood et al., 1999). In dementia patients, the area of unique activity (i.e. not also active in controls) was the angular gyrus, which showed significant deactivation. As this area is involved in processing semantic data (Platel, 2005, Donnay et al., 2014), the deactivation suggests that the unfamiliar music did not convey any meaning to the AD patients.
Overlapping activity between control and AD subjects in the Unfamiliar Music vs Static Noise contrasts was found only in bilateral STG. This is to be expected, as the STG is the location of the auditory cortex, where all forms of sound, whether musical or nonmusical, are processed. The lateralization differences of BA42 within the STG are less straightforward to interpret. BA42 was active in both dementia patients and controls, but in the left and right hemispheres respectively. We considered that this pattern might be the result of damage and subsequent compensation. In this interpretation, there is damage to bilateral BA42 in AD patients as a result of the disease, but since HG (sometimes considered to include both BA41 and BA42) is generally larger in the left hemisphere (Zatorre et al., 2002), the left hemisphere HG perhaps has the capacity to continue functioning for longer. However, this explanation does not account for why right hemisphere BA42 was active in AD subjects during familiar music. The right lateralization of BA42 activity in controls is not surprising, as basic, non-analytical music processing is well known to be right hemisphere dominant.

The amygdala was also active in both dementia patients and controls, in the left and right hemispheres respectively. The reason for the differing lateralization is unclear. Why the amygdala is active at all in the Unfamiliar Music condition is open to question, because the amygdala is known to play a role in fear and aggression. We considered the possibility that the amygdala was activated by the anxiety of being in the fMRI machine, but that would imply that the amygdala would also be active during the Familiar Music condition, which was not the case for either subject group. Research into music-evoked fear has shown that music is not a powerful fear-evoker without being integrated with another sense, such as a frightening visual stimulus (Eldar et al., 2007).
As well, recent research has found that the superficial portion of the amygdala responds to both joyful and fearful music (Koelsch et al., 2013). These studies show that the amygdala activity may simply be a response to the high arousal, low valence quality (Trost et al., 2012) of our unfamiliar music stimulus.

The findings from the Unfamiliar Music vs Static Noise contrasts suggest that AD patients lack the level of rhythmic, multisensory and emotional connection to unfamiliar music that the control subjects have. This lack of connection is unsurprising: dementia patients are most comfortable and engaged in environments where they are very familiar with the surroundings, such as their homes and lifetime neighborhoods. Cognitive function has been found to improve when AD patients are relaxed, so the relaxing quality of familiar classical music may play a role in releasing mental functions (Silber, 1999, Bruer et al., 2007). Our results from the Familiar Music vs Static Noise contrasts do in fact show a greater neural response to familiar music in AD patients. This is especially poignant when compared to the activity found in control subjects in response to the same familiar music. As mentioned earlier, we employed 50 masks to investigate specific areas of the brain. Our Familiar Music vs Static Noise contrasts found significant activity in 15/50 areas for the AD subjects, compared to only 5/50 areas for the control subjects.

Discussing the Results for Familiar Music

Familiar music elicits more brain activity than random noise in both patients and controls. Compared to the controls, AD subjects had a greater number of significant activation areas across the brain when listening to familiar music.
In controls, there were only two areas of activation that were not also active in dementia patients (the bilateral parietal lobe and left parahippocampal area), whereas several areas were active in dementia patients that were not active in control subjects. The bilateral IFG and STG, SMA, BA06, left hemisphere BA41, BA21, BA31 and BA38, and right hemisphere BA42 and BA45 all activated more to familiar music than to static noise in dementia patients only.

It has been suggested that the parietal lobe, which was only active in control subjects, is involved in sound localization as well as sensory-motor integration (Alain et al., 2008). There is also strong evidence in the literature that the parietal lobe is involved in episodic memory retrieval (Wagner et al., 2005). This proposed function makes sense in the context of our study: the area was only active in healthy control subjects in the familiar music condition, and familiar music is the only condition that should instigate episodic memories. It was not active in AD subjects in either condition, presumably because episodic memory is affected early in disease progression.

The following areas were active only in dementia patients for the Familiar Music contrasts. The IFG is involved in maintaining working memory for pitch (Zatorre et al., 1994, Gaab et al., 2003, Albouy et al., 2013), and is thus often activated during musical discrimination tasks (Platel, 2005).

The SMA is involved in processing durations, a function similar to the cerebellum's role in music perception, but researchers believe that the SMA processes longer durations (equal to or greater than 1 second) while the cerebellum processes shorter durations (milliseconds) (Buhusi and Meck, 2005, Macar et al., 2006, Zatorre et al., 2007).

Brodmann Area 06 includes the premotor cortex and supplementary motor cortex.
The premotor cortex is activated in anticipation of a sequence that is expected to follow a predictable pattern (Schubotz and von Cramon, 2003, Schubotz et al., 2003). This activation makes sense in response to familiar music, as a familiar melody constitutes a predictable pattern of notes.

Brodmann Area 21 is the middle temporal gyrus. A voxel-based morphometry (VBM) analysis has indicated its involvement in naming familiar songs (Johnson et al., 2011). In fact, it is involved in the more general process of semantic memory retrieval (Platel et al., 2003), which would encompass the more specific function of naming a familiar song. The specifically left hemisphere activity we see in our results also makes sense in light of existing research: musical semantic memory retrieval is a process with left hemisphere dominance, not only in the middle temporal gyrus but also in the superior temporal gyrus (Platel, 2005). If the activity we found in the middle and superior temporal gyri was indeed generated by a semantic memory process such as familiar song-naming, this would perhaps explain the left hemisphere dominant activity in the temporal cortex of both AD and control subjects in the Familiar Music condition. What does not fit neatly into this interpretation is the lack of significant activation in the control subjects' middle temporal gyrus, but it is possible that there was some activation in that area that simply did not reach significance. Another possibility is that controls used another path for memory retrieval, as they exhibited activity in the parahippocampal area, which is also involved in memory processes.

Brodmann Area 31 is still poorly understood, but current research shows that it is an important part of the default mode network (Leech and Sharp, 2014).
Our AD subjects displayed significant deactivation in this area, which may be a sign of attentional cognitive engagement (Hayden et al., 2010); however, activity in this area is ambiguous in patients with brain damage (Leech and Sharp, 2014), so this interpretation should be taken lightly. The temporopolar area (BA38) was discussed earlier as a site of multisensory association (Moran et al., 1987), and BA45 is part of the IFG, so it is also involved in working memory for pitch (Nan and Friederici, 2013).

Discussing Familiar Music vs Unfamiliar Music Results

Comparing unfamiliar vs familiar music activation in AD patients, the data seem to fit with theories of music therapy that recommend the use of familiar music over unfamiliar music during therapy (Gerdner, 1997, Gerdner, 1999). The theory behind music therapy has long suggested that familiar music is more effective in engaging patients and improving mood and function than unfamiliar music (Gerdner, 2000). Our scans of AD patients show a wider network and greater number of structures engaging in response to familiar music than to unfamiliar music. However, whether increased activity is actually a desired effect of music is unknown.

Even though familiarity ratings indicated that the AD patients considered both the familiar and unfamiliar music to be familiar (familiar music was rated "very familiar", while unfamiliar music was rated "somewhat familiar" on a scale that also included the options "unsure", "somewhat unfamiliar" and "very unfamiliar"), their brains responded to the familiar music of their past differently than to the unfamiliar music. The IFG (including BA45), SMA, BA06, BA21, BA31, and BA38 are all areas that did not show significant positive activity in AD patients for unfamiliar music and yet were significantly positively activated by familiar music.
These areas encompass functions of working memory for pitch, rhythm perception and production, pattern prediction, memory retrieval, attention and multisensory association. Earlier, we specified that the differences found between AD and control music processing were in the patterns of activation seen in each group separately, and that direct comparison contrasts revealed a significant difference in only one area: the temporopolar area. One interpretation of this activity in controls is that unfamiliar music elicits more emotion from controls than from AD subjects. Why might this be the case? Consider novelty-seeking as a factor.

The difference in music processing between AD patients and controls may be explained by two interwoven factors: 1. novelty and 2. ease of processing. Healthy individuals tend to pay more attention to novel stimuli, whereas a characteristic of AD patients is a lack of novelty-seeking (Daffner et al., 1999, Fritsch et al., 2005). This would suggest that the controls paid more attention to the unfamiliar music (the novel stimulus). Given its novelty, the unfamiliar music would have involved not only more attention, but also more brain resources to process than familiar music. This is indeed the pattern our data suggest. As mentioned, AD patients are not novelty-seekers, so the ease-of-processing factor determines the focus of their attention and engagement. Novelty makes comprehension more difficult in AD patients (Amanzio et al., 2008), and this can cause frustration and disengagement. Unfamiliar music (a novel stimulus) is more difficult to process, and because AD patients are indifferent to novelty, there is no incentive to put in the extra effort of processing it. The perception of music should be better maintained for familiar music, as recognition of a familiar piece could aid in the perceptual recreation of the musical piece.
Thus familiar music can provide AD patients with cognitive engagement, which would likely be fulfilling in contrast to the confusion and disconnection from other stimuli in the world.

Limitations

While our interpretation of the results is informative for music therapy, it is important to remember that these results come from a small sample, which makes extrapolation to the broader AD population less reliable. Another issue is that we were unable to select age-matched control and subject groups. While the two groups had similar mean ages (69.8 vs 73.1 years), the difference is statistically significant. In addition, hemodynamic responses in elderly subjects are more variable than in younger subjects, which could also have affected our results (Huettel et al., 2001). Several factors were not controlled for in this initial analysis: MMSE severity, gender, education, age and music training are all possible covariates, which will be considered in future analyses. Given the structural deviance of our AD subjects' brains, registration to the standard template brain was not ideal; recognizing this, we set the registration parameters to the highest quality. Defining the areas for investigation using anatomical structures and Brodmann areas was also not ideal, because we may be missing areas of activation. A better method may be to define masks using functional areas active in controls. However, we did use a large number of masks covering all the major lobes, so we believe our data create a rather detailed picture of music processing in the brain. Also, activation in control brains could occur in different areas than in dementia patients because of compensatory activation in the AD subjects; this appeared to be the case in this pilot study.
SPM analysis of AD brains is difficult because of signal loss from atrophied areas (Small, 2002), but more research in this area will lead to future improvements in image correction methods.

We should also note that familiarity ratings were not very different for our familiar and unfamiliar song choices. This is most likely because subjects needed to hear the songs in order to rate them as familiar or unfamiliar. Hearing an unfamiliar song and then being asked whether it was familiar was probably confusing for the AD subjects: having just heard the song, it would have felt familiar.

While the results were interesting, they were not evident in the initial whole-brain SPM analyses, as few voxels survived the stringent whole-brain corrections (only voxels in the right-hemisphere STG survived). Instead, masks were applied to carefully selected areas of interest, using small volume corrections, to reveal these results. We applied a large number of masks, spanning each of the main lobes of the brain (frontal, parietal, temporal, occipital and limbic), which introduced the problem of repeated testing and an increased chance of false positives (with m independent tests each at α = 0.05, the probability of at least one false positive is 1 − 0.95^m, or roughly 64% for 20 tests).

Future Directions

Along with the imaging data collected during the static noise and music conditions, we collected data from scrambled music conditions. In further analyses for the submission of a research paper, we will run another set of contrasts, comparing scrambled music activation with unscrambled music activation, and will introduce the covariates of age and MMSE. As the number of masks used in this exploratory study introduced the repeated testing problem, a smaller subset of brain regions can be selected for future contrasts involving a larger number of subjects.

Conclusion

This study has potential implications for music therapy, a valuable treatment modality available to AD patients throughout their disease course.
Prior to our research, the specific use of familiar music instead of unfamiliar music had no basis beyond theory and behavioral evidence (Gerdner, 2000). Our study has shown a difference in neural responses to familiar versus unfamiliar music in AD patients, a difference that exists despite little difference in subjective feelings of familiarity.

As mentioned, dementia patients exhibited less brain activity in response to unfamiliar music, and that activity was mostly limited to the temporal lobes. This suggests that dementia patients process unfamiliar music at the level of sound, without engaging higher-level associations with movement or memory. While rudimentary, this processing is still more advanced than basic noise processing, because the auditory areas (BA41, BA42 and BA22) were all significantly more active in response to unfamiliar music than to static noise. This suggests that, although the AD patients had diseased temporal lobes, they could likely still perceive and appreciate music.

Aside from the deactivation in the angular gyrus, controls exhibited activity in the same brain areas as the dementia patients when listening to unfamiliar music, with additional areas of activity serving timing (cerebellum), multisensory association (BA38), and memory encoding/retrieval (parahippocampal gyrus).

Interestingly, while the data from the AD group listening to unfamiliar music seemed to show that AD patients lack the higher-level music processing that controls have, some of this higher-level processing was rescued by familiar music. The familiar and unfamiliar musical pieces had the same style, era and composers, and differed only in their level of familiarity.
The familiarity factor elicited activation in areas associated with working memory for pitch (IFG, including BA45), pattern anticipation and movement planning (BA06), memory retrieval (BA21), attention (BA31), multisensory association (BA38) and tracking time duration (SMA), which may help preserve cognitive function in these brain regions. The results of this study characterize the pattern of activation across a wide array of brain areas in response to familiar and unfamiliar music. The functional role of each activated or deactivated area remains speculative, but can guide future neuroimaging music research in the selection of responsive brain areas. We can conclude that a greater number of brain areas are activated in response to familiar music than to unfamiliar music in AD patients; we found the reverse pattern in control subjects. Whether the increased activation in AD patients in response to familiar music is a desired effect of music, indicating a more powerful music therapy stimulus than unfamiliar music, is unclear, but it appears promising as an avenue for further research.

References

Alzheimer's Association (2014) Know the 10 Signs.
Alain C, He Y, Grady C (2008) The contribution of the inferior parietal lobe to auditory spatial working memory. J Cogn Neurosci 20:285-295.
Albouy P, Mattout J, Bouet R, Maby E, Sanchez G, Aguera PE, Daligault S, Delpuech C, Bertrand O, Caclin A, Tillmann B (2013) Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex. Brain 136:1639-1661.
Amanzio M, Geminiani G, Leotta D, Cappa S (2008) Metaphor comprehension in Alzheimer's disease: novelty matters. Brain Lang 107:1-10.
AMTA (1998) History of Music Therapy. American Music Therapy Association.
Anstey KJ, Mack HA, Cherbuin N (2009) Alcohol consumption as a risk factor for dementia and cognitive decline: meta-analysis of prospective studies. Am J Geriatr Psychiatry 17:542-555.
Baeck E (2002) The neural networks of music. Eur J Neurol 9:449-456.
Baird A, Samson S (2009) Memory for music in Alzheimer's disease: unforgettable? Neuropsychology Review 19:85-101.
Ballard C, Gauthier S, Corbett A, Brayne C, Aarsland D, Jones E (2011) Alzheimer's disease. The Lancet 377:1019-1031.
Baumgartner T, Lutz K, Schmidt CF, Jancke L (2006) The emotional power of music: how music enhances the feeling of affective pictures. Brain Research 1075:151-164.
Bertram L, Tanzi RE (2008) Thirty years of Alzheimer's disease genetics: the implications of systematic meta-analyses. Nat Rev Neurosci 9:768-778.
Bever TG, Chiarello RJ (2009) Cerebral dominance in musicians and nonmusicians. J Neuropsychiatry Clin Neurosci 21:94-97.
Beydoun MA, Beydoun HA, Wang Y (2008) Obesity and central obesity as risk factors for incident dementia and its subtypes: a systematic review and meta-analysis. Obes Rev 9:204-218.
Birks J, Harvey RJ (2006) Donepezil for dementia due to Alzheimer's disease. Cochrane Database Syst Rev CD001190.
Blood AJ, Zatorre RJ (2001) Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc Natl Acad Sci U S A 98:11818-11823.
Blood AJ, Zatorre RJ, Bermudez P, Evans AC (1999) Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neuroscience 2:382-387.
Braak H, Alafuzoff I, Arzberger T, Kretzschmar H, Del Tredici K (2006) Staging of Alzheimer disease-associated neurofibrillary pathology using paraffin sections and immunocytochemistry. Acta Neuropathol 112:389-404.
Bruer RA, Spitznagel E, Cloninger CR (2007) The temporal limits of cognitive change from music therapy in elderly persons with dementia or dementia-like cognitive impairment: a randomized controlled trial. J Music Ther 44:308-328.
Buhusi CV, Meck WH (2005) What makes us tick? Functional and neural mechanisms of interval timing. Nat Rev Neurosci 6:755-765.
Carter R (2009) The Human Brain Book. London: Dorling Kindersley Ltd.
Clark CN, Downey LE, Warren JD (2014) Music biology: all this useful beauty. Current Biology 24:R234-237.
Conard NJ, Malina M, Munzel SC (2009) New flutes document the earliest musical tradition in southwestern Germany. Nature 460:737-740.
Cuénod C-A (1993) Amygdala atrophy in Alzheimer's disease. Archives of Neurology 50:941.
Da Costa S, van der Zwaag W, Marques JP, Frackowiak RS, Clarke S, Saenz M (2011) Human primary auditory cortex follows the shape of Heschl's gyrus. J Neurosci 31:14067-14075.
Daffner KR, Mesulam MM, Cohen LG, Scinto LF (1999) Mechanisms underlying diminished novelty-seeking behavior in patients with probable Alzheimer's disease. Neuropsychiatry Neuropsychol Behav Neurol 12:58-66.
Di Pietro M, Laganaro M, Leemann B, Schnider A (2004) Receptive amusia: temporal auditory processing deficit in a professional musician following a left temporo-parietal lesion. Neuropsychologia 42:868-877.
Donnay GF, Rankin SK, Lopez-Gonzalez M, Jiradejvong P, Limb CJ (2014) Neural substrates of interactive musical improvisation: an fMRI study of 'trading fours' in jazz. PLoS One 9:e88665.
Dubois B, Feldman HH, Jacova C, Dekosky ST, Barberger-Gateau P, Cummings J, Delacourte A, Galasko D, Gauthier S, Jicha G, Meguro K, O'Brien J, Pasquier F, Robert P, Rossor M, Salloway S, Stern Y, Visser PJ, Scheltens P (2007) Research criteria for the diagnosis of Alzheimer's disease: revising the NINCDS-ADRDA criteria. Lancet Neurol 6:734-746.
Eldar E, Ganor O, Admon R, Bleich A, Hendler T (2007) Feeling the real world: limbic response to music depends on related content. Cerebral Cortex 17:2828-2840.
Ellis RJ, Norton AC, Overy K, Winner E, Alsop DC, Schlaug G (2012) Differentiating maturational and training influences on fMRI activation during music processing. NeuroImage 60:1902-1912.
Feldman HH, Jacova C (2005) Mild cognitive impairment. Am J Geriatr Psychiatry 13:645-655.
Ferri CP, Prince M, Brayne C, Brodaty H, Fratiglioni L, Ganguli M, Hall K, Hasegawa K, Hendrie H, Huang Y, Jorm A, Mathers C, Menezes PR, Rimmer E, Scazufca M (2005) Global prevalence of dementia: a Delphi consensus study. The Lancet 366:2112-2117.
Fishman YI, Volkov IO, Noh MD, Garell PC, Bakken H, Arezzo JC, Howard MA, Steinschneider M (2001) Consonance and dissonance of musical chords: neural correlates in auditory cortex of monkeys and humans. Journal of Neurophysiology 86:2761-2788.
Fritsch T, Smyth KA, Debanne SM, Petot GJ, Friedland RP (2005) Participation in novelty-seeking leisure activities and Alzheimer's disease. J Geriatr Psychiatry Neurol 18:134-141.
Fukui H, Toyoshima K (2008) Music facilitate the neurogenesis, regeneration and repair of neurons. Medical Hypotheses 71:765-769.
Gaab N, Gaser C, Zaehle T, Jancke L, Schlaug G (2003) Functional anatomy of pitch memory—an fMRI study with sparse temporal sampling. NeuroImage 19:1417-1426.
Gerdner L (1997) An individualized music intervention for agitation. Journal of the American Psychiatric Nurses Association 3:177-184.
Gerdner LA (1999) Individualized music intervention protocol. J Gerontol Nurs 25:10-16.
Gerdner LA (2000) Effects of individualized versus classical "relaxation" music on the frequency of agitation in elderly persons with Alzheimer's disease and related disorders. International Psychogeriatrics 12:49-65.
Gerdner LA, Swanson EA (1993) Effects of individualized music on confused and agitated elderly patients. Arch Psychiatr Nurs 7:284-291.
Goate A, Chartier-Harlin MC, Mullan M, Brown J, Crawford F, Fidani L, Giuffra L, Haynes A, Irving N, James L, et al. (1991) Segregation of a missense mutation in the amyloid precursor protein gene with familial Alzheimer's disease. Nature 349:704-706.
Gómez-Isla T, Price J, McKeel DJ, Morris J, Growdon J, Hyman B (1996) Profound loss of layer II entorhinal cortex neurons occurs in very mild Alzheimer's disease. J Neurosci 16:4491-4500.
Grahn JA, Brett M (2007) Rhythm and beat perception in motor areas of the brain. J Cogn Neurosci 19:893-906.
Grahn JA, McAuley JD (2009) Neural bases of individual differences in beat perception. NeuroImage 47:1894-1903.
Groussard M, Viader F, Hubert V, Landeau B, Abbas A, Desgranges B, Eustache F, Platel H (2010) Musical and verbal semantic memory: two distinct neural networks? NeuroImage 49:2764-2773.
Groussard M, Viader F, Landeau B, Desgranges B, Eustache F, Platel H (2009) Neural correlates underlying musical semantic memory. Annals of the New York Academy of Sciences 1169:278-281.
Halpern AR, O'Connor MG (2000) Implicit memory for music in Alzheimer's disease. Neuropsychology 14:391-397.
Hamer M, Chida Y (2009) Physical activity and risk of neurodegenerative disease: a systematic review of prospective evidence. Psychol Med 39:3-11.
Hannon EE, Trainor LJ (2007) Music acquisition: effects of enculturation and formal training on development. Trends in Cognitive Sciences 11:466-472.
Hardy J (1997) Amyloid, the presenilins and Alzheimer's disease. Trends in Neurosciences 20:154-159.
Hayden BY, Smith DV, Platt ML (2010) Cognitive control signals in posterior cingulate cortex. Front Hum Neurosci 4:223.
He C, Hotson L, Trainor LJ (2007) Mismatch responses to pitch changes in early infancy. J Cogn Neurosci 19:878-892.
Henry-Feugeas MC (2007) MRI of the 'Alzheimer syndrome'. J Neuroradiol 34:220-227.
Huettel SA, Singerman JD, McCarthy G (2001) The effects of aging upon the hemodynamic response measured by functional MRI. NeuroImage 13:161-175.
Hyde KL, Peretz I, Zatorre RJ (2008) Evidence for the role of the right auditory cortex in fine pitch resolution. Neuropsychologia 46:632-639.
Inase M, Tanji J (1994) Projections from the globus pallidus to the thalamic areas projecting to the dorsal area 6 of the macaque monkey: a multiple tracing study. Neurosci Lett 180:135-137.
Ivry RB, Spencer RM, Zelaznik HN, Diedrichsen J (2002) The cerebellum and event timing. Annals of the New York Academy of Sciences 978:302-317.
Jacova C, Kertesz A, Blair M, Fisk JD, Feldman HH (2007) Neuropsychological testing and assessment for dementia. Alzheimer's & Dementia 3:299-317.
Janata P (2009a) Music and the self. Music That Works 131-141.
Janata P (2009b) The neural architecture of music-evoked autobiographical memories. Cerebral Cortex 19:2579-2594.
Janata P, Tillmann B, Bharucha JJ (2002) Listening to polyphonic music recruits domain-general attention and working memory circuits. Cogn Affect Behav Neurosci 2:121-140.
Javad F, Warren JD, Micallef C, Thornton JS, Golay X, Yousry T, Mancini L (2014) Auditory tracts identified with combined fMRI and diffusion tractography. NeuroImage 84:562-574.
Johnson JK, Chang CC, Brambati SM, Migliaccio R, Gorno-Tempini ML, Miller BL, Janata P (2011) Music recognition in frontotemporal lobar degeneration and Alzheimer disease. Cogn Behav Neurol 24:74-84.
Kandel ER, Schwartz JH, Jessel M (1991) Principles of Neural Science. New York: McGraw-Hill.
Killiany RJ, Moss MB, Albert MS, Sandor T, Tieman J, Jolesz F (1993) Temporal lobe regions on magnetic resonance imaging identify patients with early Alzheimer's disease. Archives of Neurology 50:949-954.
Koelsch S, Gunter T, Friederici AD, Schroger E (2000) Brain indices of music processing: "nonmusicians" are musical. J Cogn Neurosci 12:520-541.
Koelsch S, Skouras S, Fritz T, Herrera P, Bonhage C, Kussner MB, Jacobs AM (2013) The roles of superficial amygdala and auditory cortex in music-evoked fear and joy. NeuroImage 81:49-60.
Koger SM, Chapin K, Brotons M (1999) Is music therapy an effective intervention for dementia? A meta-analytic review of literature. Journal of Music Therapy 36:2-15.
Lee H, Casadesus G, Zhu X, Joseph J, Smith M (2004) Perspectives on the amyloid-beta cascade hypothesis. Journal of Alzheimer's Disease 6:137-145.
Leech R, Sharp DJ (2014) The role of the posterior cingulate cortex in cognition and disease. Brain 137:12-32.
Levitin DJ, Tirovolas AK (2009) Current advances in the cognitive neuroscience of music. Annals of the New York Academy of Sciences 1156:211-231.
Levy-Lahad E, Wasco W, Poorkaj P, Romano D, Oshima J, Pettingell W, Yu C, Jondro P, Schmidt S, Wang K, et al. (1995) Candidate gene for the chromosome 1 familial Alzheimer's disease locus. Science 269:973-977.
Lewis PA, Miall RC (2003) Brain activation patterns during measurement of sub- and supra-second intervals. Neuropsychologia 41:1583-1592.
Liegeois-Chauvel C, Peretz I, Babai M, Laguitton V, Chauvel P (1998) Contribution of different cortical areas in the temporal lobes to music processing. Brain 121 (Pt 10):1853-1867.
Macar F, Coull J, Vidal F (2006) The supplementary motor area in motor and perceptual time processing: fMRI studies. Cogn Process 7:89-94.
Masataka N (2006) Preference for consonance over dissonance by hearing newborns of deaf parents and of hearing parents. Dev Sci 9:46-50.
McKhann G, Drachman D, Folstein M, Katzman R, Price D, Stadlan EM (1984) Clinical diagnosis of Alzheimer's disease: report of the NINCDS-ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer's Disease. Neurology 34:939-944.
McKhann GM, Knopman DS, Chertkow H, Hyman BT, Jack CR Jr., Kawas CH, Klunk WE, Koroshetz WJ, Manly JJ, Mayeux R, Mohs RC, Morris JC, Rossor MN, Scheltens P, Carrillo MC, Thies B, Weintraub S, Phelps CH (2011) The diagnosis of dementia due to Alzheimer's disease: recommendations from the National Institute on Aging and the Alzheimer's Association workgroup. Alzheimer's & Dementia 7:263-269.
Moran MA, Mufson EJ, Mesulam MM (1987) Neural inputs into the temporopolar cortex of the rhesus monkey. J Comp Neurol 256:88-103.
Nan Y, Friederici AD (2013) Differential roles of right temporal cortex and Broca's area in pitch processing: evidence from music and Mandarin. Human Brain Mapping 34:2045-2054.
Ohnishi T, Matsuda H, Asada T, Aruga M, Hirakata M, Nishikawa M, Katoh A, Imabayashi E (2001) Functional anatomy of musical perception in musicians. Cerebral Cortex 11:754-760.
Omar R, Henley SM, Bartlett JW, Hailstone JC, Gordon E, Sauter DA, Frost C, Scott SK, Warren JD (2011) The structural neuroanatomy of music emotion recognition: evidence from frontotemporal lobar degeneration. NeuroImage 56:1814-1821.
Palmer C, Krumhansl CL (1990) Mental representations for musical meter. J Exp Psychol Hum Percept Perform 16:728-741.
Pantev C, Ross B, Fujioka T, Trainor LJ, Schulte M, Schulz M (2003) Music and learning-induced cortical plasticity. Annals of the New York Academy of Sciences 999:438-450.
Park H, Pringle Specht JK (2009) Effect of individualized music on agitation in individuals with dementia who live at home. J Gerontol Nurs 35:47-55.
Patel AD, Balaban E (2000) Temporal patterns of human cortical activity reflect tone sequence structure. Nature 404:80-84.
Patterson RD, Uppenkamp S, Johnsrude IS, Griffiths TD (2002) The processing of temporal pitch and melody information in auditory cortex. Neuron 36:767-776.
Peretz I (1990) Processing of local and global musical information by unilateral brain-damaged patients. Brain 113 (Pt 4):1185-1205.
Phillips-Silver J, Trainor LJ (2005) Feeling the beat: movement influences infant rhythm perception. Science 308:1430.
Piccirilli M, Sciarma T, Luzzi S (2000) Modularity of music: evidence from a case of pure amusia. Journal of Neurology, Neurosurgery, and Psychiatry 69:541-545.
Pickles J (2012) Introduction to the Physiology of Hearing (4th Edition). p 459.
Platel H (2005) Functional neuroimaging of semantic and episodic musical memory. Annals of the New York Academy of Sciences 1060:136-147.
Platel H, Baron J-C, Desgranges B, Bernard F, Eustache F (2003) Semantic and episodic memory of music are subserved by distinct neural networks. NeuroImage 20:244-256.
Polk M, Kertesz A (1993) Music and language in degenerative disease of the brain. Brain and Cognition 22:98-117.
Quoniam N, Ergis A-M, Fossati P, Peretz I, Samson S, Sarazin M, Allilaire J-F (2003) Implicit and explicit emotional memory for melodies in Alzheimer's disease and depression. Annals of the New York Academy of Sciences 999:381-384.
Ridha BH, Barnes J, Bartlett JW, Godbolt A, Pepple T, Rossor MN, Fox NC (2006) Tracking atrophy progression in familial Alzheimer's disease: a serial MRI study. The Lancet Neurology 5:828-834.
Schall U, Johnston P, Todd J, Ward PB, Michie PT (2003) Functional neuroanatomy of auditory mismatch processing: an event-related fMRI study of duration-deviant oddballs. NeuroImage 20:729-736.
Schlaug G (2015) Musicians and music making as a model for the study of brain plasticity. Prog Brain Res 217:37-55.
Schlaug G, Jancke L, Huang Y, Steinmetz H (1995) In vivo evidence of structural brain asymmetry in musicians. Science 267:699-701.
Schmithorst VJ (2005) Separate cortical networks involved in music perception: preliminary functional MRI evidence for modularity of music processing. NeuroImage 25:444-451.
Schubotz RI, von Cramon DY (2003) Functional-anatomical concepts of human premotor cortex: evidence from fMRI and PET studies. NeuroImage 20:S120-S131.
Schubotz RI, von Cramon DY, Lohmann G (2003) Auditory what, where, and when: a sensory somatotopy in lateral premotor cortex. NeuroImage 20:173-185.
Seung Y, Kyong JS, Woo SH, Lee BT, Lee KM (2005) Brain activation during music listening in individuals with or without prior music training. Neuroscience Research 52:323-329.
Sherratt K, Thornton A, Hatton C (2004) Music interventions for people with dementia: a review of the literature. Aging & Mental Health 8:3-12.
Sherrington R, Rogaev EI, Liang Y, Rogaeva EA, Levesque G, Ikeda M, Chi H, Lin C, Li G, Holman K, Tsuda T, Mar L, Foncin JF, Bruni AC, Montesi MP, Sorbi S, Rainero I, Pinessi L, Nee L, Chumakov I, Pollen D, Brookes A, Sanseau P, Polinsky RJ, Wasco W, Da Silva HA, Haines JL, Perkicak-Vance MA, Tanzi RE, Roses AD, Fraser PE, Rommens JM, St George-Hyslop PH (1995) Cloning of a gene bearing missense mutations in early-onset familial Alzheimer's disease. Nature 375:754-760.
Silber F (1999) The influence of background music on the performance of the Mini Mental State Examination with patients diagnosed with Alzheimer's disease. J Music Ther 36:196-206.
Small GW (2002) Structural and functional brain imaging of Alzheimer disease.
Sung HC, Chang AM (2005) Use of preferred music to decrease agitated behaviours in older people with dementia: a review of the literature. Journal of Clinical Nursing 14:1133-1140.
Thangavel R, Van Hoesen GW, Zaheer A (2009) The abnormally phosphorylated tau lesion of early Alzheimer's disease. Neurochem Res 34:118-123.
Trainor LJ (2010) Using electroencephalography (EEG) to measure maturation of auditory cortex in infants: processing pitch, duration and sound location. Encyclopedia on Early Childhood Development.
Trainor LJ, Tsang CD, Cheung VH (2002) Preference for sensory consonance in 2- and 4-month-old infants. Music Perception 20:187-194.
Trost W, Ethofer T, Zentner M, Vuilleumier P (2012) Mapping aesthetic musical emotions in the brain. Cerebral Cortex 22:2769-2783.
Varma AR, Snowden JS, Lloyd JJ, Talbot PR, Mann DMA, Neary D (1999) Evaluation of the NINCDS-ADRDA criteria in the differentiation of Alzheimer's disease and frontotemporal dementia. Journal of Neurology, Neurosurgery & Psychiatry 66:184-188.
Vuust P, Ostergaard L, Pallesen KJ, Bailey C, Roepstorff A (2009) Predictive coding of music—brain responses to rhythmic incongruity. Cortex 45:80-92.
Wagner AD, Shannon BJ, Kahn I, Buckner RL (2005) Parietal lobe contributions to episodic memory retrieval. Trends in Cognitive Sciences 9:445-453.
Warren JD, Griffiths TD (2003) Distinct mechanisms for processing spatial sequences and pitch sequences in the human auditory brain. J Neurosci 23:5799-5804.
Warren JD, Uppenkamp S, Patterson RD, Griffiths TD (2003) Separating pitch chroma and pitch height in the human brain. Proc Natl Acad Sci U S A 100:10038-10042.
Willott JF, Shnerson A, Urban GP (1979) Sensitivity of the acoustic startle response and neurons in subnuclei of the mouse inferior colliculus to stimulus parameters. Exp Neurol 65:625-644.
Zatorre RJ, Belin P, Penhune VB (2002) Structure and function of auditory cortex: music and speech. Trends in Cognitive Sciences 6:37-46.
Zatorre RJ, Chen JL, Penhune VB (2007) When the brain plays music: auditory-motor interactions in music perception and production. Nat Rev Neurosci 8:547-558.
Zatorre RJ, Evans AC, Meyer E (1994) Neural mechanisms underlying melodic perception and memory for pitch. J Neurosci 14:1908-1919.

