"Regular People" Have Musical Expertise
Sight-Reading Music: A Unique Window on the Mind
Recent Publications of Special Interest
This is the first Winter issue of MRN, added in response
to "popular demand". Thus, issues will now be published
three times per year: Winter, Spring and Fall. We are pleased
that this newsletter is meeting a need in the community at large.
We will do our best to continue to provide articles and information
of interest on research in music, behavior, the brain, and related fields.
The MuSICA database now has a new, user-friendly search engine: the EXCITE engine already in widespread use on the Web. You can now search by concepts, phrases, questions, etc., in addition to using key words with Boolean operators (e.g., "and", "or"). Also, you can easily get lists of related citations/abstracts by "clicking" on any item that comes up in your first search. Moreover, you can have the search results organized either by confidence level or by subject. The EXCITE search mode is available for users who access MuSICA either by a Web server or via Telnet, and replaces the outdated WAIS search mode. We would like to receive your comments on EXCITE.
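To illustrate what a Boolean keyword query does, here is a minimal sketch in Python. This is not the actual EXCITE engine, and the citation data is invented for illustration; it only shows how "and"/"or" operators narrow or widen a set of matching abstracts.

```python
# A minimal sketch (NOT the real EXCITE engine) of Boolean keyword
# filtering over a set of citation abstracts. Data is invented.

abstracts = {
    1: "music training and memory for tones in adult musicians",
    2: "eye movements during reading of language text",
    3: "brain responses to music or speech in children",
}

def matches(text, all_of=(), any_of=()):
    """True if text contains every 'and' term and at least one 'or' term."""
    words = set(text.split())
    if not all(term in words for term in all_of):
        return False
    if any_of and not any(term in words for term in any_of):
        return False
    return True

# "music AND memory" narrows the result set:
print([i for i, t in abstracts.items() if matches(t, all_of=("music", "memory"))])

# "music OR reading" widens it:
hits = [i for i, t in abstracts.items() if matches(t, any_of=("music", "reading"))]
print(hits)
```

Concept and phrase searching, as EXCITE offers, goes beyond this word-level matching, but the Boolean core is the same idea.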
Web Address: http://www.musica.uci.edu
Direct tests of the general population indicate a very low
level of ability to distinguish between basic elements and concepts
in music, such as the interval between two notes. This marked
distinction from musicians encourages the view that "regular
people" have little musical competence. However, when tested
under conditions that allow them to relate test material to their
own familiarity with folk tunes, non-musicians can perform at
the level of experts.
Membership in the Flat Earth Society has not been increasing. Those "moving lights" on a movie marquee "trick" young children but not the rest of us. But why? For most of human existence, a flat earth was assumed. And the lights certainly look like they are moving. In these and innumerable other situations there is a conflict between direct, clear experience and the truth. How do we know that the earth isn't flat and that the lights aren't moving? Because systematic and documented investigation has shown otherwise; because we have valid theories that explain why our experiences, our immediate perceptions, mislead us under certain circumstances. In the case of the flat earth, it is a matter of having too restricted a perspective, literally too small a visual perspective; you can see the earth's curvature from the sky but not so easily from the ground. In the case of the apparently moving lights, scientists have determined the temporal resolving power of the visual system; apparent motion is unsupported below a certain rate of change (lights on-off in sequence). So we have learned to be cautious about the "seeing is believing" approach to reality.
But what happens when we have no direct knowledge of much of our own experience? In these cases, we are unaware of what we "know", so we are left with a restricted view of ourselves. This is nowhere more true than for musical experience. We are rightly awed by performing artists. Studies of their expertise have documented their difficult path to achievement. Thus, expert levels of performance generally require at least ten years of "extended, daily amounts of deliberate practice activities."(1) Such expertise defies aging. Thus, cognitive and motor skills, which decline in the rest of us, can be maintained in expert pianists well into the eighth decade.(2)
However, the expertise of musicians also sets them apart from regular people. And this naturally diminishes our self-view with respect to musical knowledge. But should this really be so? If we distinguish expertise in performance from expertise about musical elements and their abstract relationships, we may find that we know more than we think we know. Before addressing the knowledge of "regular people" (RP) about musical structures, we need first to understand how we can "know without knowing that we know".
Simply put, our memories involve at least two mnemonic brain systems, usually called "declarative" and "procedural".(3) The first concerns everyday experience of events, sort of the story of our lives, and general facts (e.g., "peanut butter is sticky"). We can recall and talk about the content of these experiences. The second (often called "implicit memory") involves acquiring a great deal of information that is not readily accessible to personal awareness, such as well practiced sensory-motor skills and even certain cognitive skills. Sensory-motor skills involve the precise coordination of numerous muscles, each of which is continually sending information to parts of the brain that are specialized to "know" the exact position of our limbs and fingers, and the exact degree of stretch or contraction of every muscle. Fortunately, we are not consciously aware of each of these bits of information, because our limited conscious capacity to process information would be overwhelmed. Yet this torrent of neuronal signals gets organized and stored; hence, we can walk, dance, write, type, play a musical instrument ... the list is seemingly endless.
But we also acquire unconscious cognitive knowledge. This ability is not rudimentary but can involve highly abstract and complex information. For example, people acquire and correctly use highly complex sets of grammatical rules, as revealed in studies of artificial grammars, without actually being able to state the rules or even be aware that they know them.(4)
And this brings us to music. We expect and accept that trained musicians know a great deal about music, including details about musical building blocks and how they are related. They have expertise about musical pitches, pitch intervals, scales, major-minor modes, harmonic relationships, tempo, meter, etc. They are in fact "experts". On the other hand, we know that RP, that is the vast majority of us (or "non-musicians" if you prefer), don't have this sort of knowledge; we are not musical experts. So we expect that musicians will show greater perceptual abilities in music tests than we "regulars". And they do.(5) The level of demonstrated knowledge of us RP is not only worse, it is generally too low. Too low for what? Too low to explain how we can have any comprehension and enjoyment of music.
This critical issue is the focus of investigation by J. David Smith of the Department of Psychology and the Center for Cognitive Science, State University of New York at Buffalo.(6) If regular people [my term] are unable to hear basic aspects of music, how do they manage to achieve musical understanding? Consider the interval between two notes, a basic element of Western tonal music. Experts, but not RP, identify intervals between two notes accurately, correctly classifying them into the interval that corresponds to the musical scale implied by a melody. Thus in the key of C, the successively played notes C and E are classed as a major third interval, C and F as a perfect fourth, C and E flat as a minor third. Yet despite direct instruction about intervals, including examples, when later tested RP apparently can't tell the difference among these distinctive intervals. On the other hand, we RP certainly are affected differently by major and minor thirds.
To resolve this paradox, Smith and his colleagues reasoned that RP might actually have the same type of knowledge as experts. They could have acquired knowledge about intervals merely by exposure to music during daily life. For example, by this process of "enculturation" children five to ten years of age become able to judge which chords in a sequence "belong" with the others, implying knowledge of abstract concepts of harmony.(7) RP may be unaware of their own levels of knowledge and experimenters may have failed to tap into this knowledge base in the most sensitive manner.
Noting that testing is done with novel musical material, Smith et al. reasoned that RP might need to hear familiar material that would "link up" with their store of interval classification information. So they devised the following approach. RPs were encouraged to associate isolated pairs of tones with the first two notes of a folk song they knew. Three songs were presented to ensure familiarity: Greensleeves, Kumbahyah, Here Comes the Bride. The investigators did not choose these by chance. The first starts with a minor third, the second with a major third and the last with a perfect fourth. Pairs of tones defining one of these intervals were presented, across different octaves and occupying different places within an octave, to make certain that the RP were judging on the basis of the tonal relationship (pitch interval) rather than on the basis of absolute pitches. A control group of RPs was tested in the standard way, i.e., trained directly on intervals, without mention or prior presentation of the folk songs. The controls did poorly, as expected. But for the folk song group, the results were dramatic. The familiarity framework produced correct identification and classification of musical intervals, essentially at the level of expert musicians. That is, RP classed notes comprising a minor third as Greensleeves, a major third as Kumbahyah and a perfect fourth as Here Comes the Bride. Thus, non-musicians do learn and remember musical intervals very well. However, this information is apparently encoded within the context of melodies with which they are familiar, hence the need to provide access to this information by the use of such melodies. In short, RP displayed their unconscious knowledge.(8)
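The interval arithmetic underlying the study can be sketched as follows. An interval is defined by the semitone distance between two pitches, regardless of which octave the pair occupies: a minor third is 3 semitones, a major third 4, a perfect fourth 5. The code below is an illustrative sketch (not the experimenters' procedure), using MIDI-style pitch numbers (middle C = 60) and the article's song anchors.

```python
# A sketch of interval classification: the interval is the semitone
# distance between two pitches, independent of octave or absolute pitch.
# Pitch numbers are MIDI-style (C4 = 60). Song anchors from the article.

INTERVAL_NAMES = {
    3: "minor third (Greensleeves)",
    4: "major third (Kumbahyah)",
    5: "perfect fourth (Here Comes the Bride)",
}

def classify(pitch_a, pitch_b):
    """Name the interval spanned by two pitches, if the study used it."""
    semitones = abs(pitch_b - pitch_a)
    return INTERVAL_NAMES.get(semitones, "other interval")

# C4 -> E-flat4 (60 -> 63) and G5 -> B-flat5 (79 -> 82) are the SAME
# interval despite different octaves and absolute pitches:
print(classify(60, 63))  # -> minor third (Greensleeves)
print(classify(79, 82))  # -> minor third (Greensleeves)
print(classify(60, 64))  # -> major third (Kumbahyah)
print(classify(60, 65))  # -> perfect fourth (Here Comes the Bride)
```

This octave-invariance is exactly why the experimenters transposed the tone pairs: correct answers could only come from judging the pitch relationship, not the absolute pitches.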
These findings illustrate several points. First, when assessing knowledge, poor test performance might not reflect a lack of knowledge. Second, asking questions in the most sensitive way is as important as the questions that are asked. Last, but foremost for this essay, you don't have to be a trained musical expert to have gained abstract knowledge of musical relationships. We RP don't necessarily "know what we know". We know more than we think we know and this includes a lot about music. So when looking for someone with musical expertise, try looking in the mirror.
-- N. M. Weinberger
(1) Ericsson, K.A. and Lehmann, A.C. (1996). Expert and exceptional performances: evidence of maximal adaptation to task constraints. Annu. Rev. Psychol., 47, 273-305.
(2) Krampe, R.T. and Ericsson, K.A. (1996). Maintaining excellence: deliberate practice and elite performance in young and older pianists. J. Exper. Psychol. (General) 125, 331-359.
(3) Discussion of these memory types can be found in several prior issues of MRN (e.g., "Music and Its Memories", Fall, 1996; "The Neurobiology of Musical Learning and Memory", Fall, 1997; see also "The Unconscious Musical Brain", same issue.)
(4) Reber, A.S. (1967). Implicit learning of artificial grammars. J. of Verb. Learn. and Verb. Behav., 6, 855-863; see also Dienes, Z. and Berry, D. (1997). Implicit learning: Below the subjective threshold. Psychonom. Bull. and Rev., 4, 3-23; for information on brain systems, see Knowlton, B.J., Ramus, S.J. and Squire, L.R. (1992). Intact artificial grammar learning in amnesia: Dissociation of classification learning and explicit memory for specific instances. Psych. Sci., 3, 172-179.
(5) Siegel, J.A. and Siegel, W. (1977). Absolute identification of notes and intervals by musicians. Percep. and Psychophy., 21, 143-152.
(6) Smith, J.D. (1997). The place of musical novices in music science. Mus. Percep., 14, 227-262.
(7) Sloboda, J.A. (1985). The Musical Mind: The Cognitive Psychology of Music. (pp. 209-215) Oxford University Press: New York, NY.
(8) Smith, J.D., Nelson, D.G.K., Grohskopf, L.A. and Appleton, T. (1997). What child is this? What interval was that? Familiar tunes and music perception in novice listeners. Cognition.
Music research affords the potential to discover new capacities and processes of the human mind. However, music cognition and behavior are often viewed merely as an instance of other, better known subjects. An example is music sight-reading, often believed to obey the laws of language reading. However, recent studies reveal that the study of sight-reading in music provides a unique window on the mind.
The human brain is perhaps the ultimate topic in all of scientific endeavor. This may seem like a preposterous overstatement; after all, the brain is just one of many interesting things. True, but reflect for a moment: the brain is the gateway to all of these other things, to all knowledge of the world. It is only through our brains that we can perceive and learn about what is outside our brains. Thus, what we can know about anything depends upon what the human brain can do and how the brain does it all. Many people consider these two issues, brain capability and brain function, to be paramount questions.
Brain capability includes the competency to conceive, compose, read, perform, perceive and comprehend music. Therefore, to understand the brain means we have to understand musical competence and its neural substrates. But there are scores, maybe hundreds, perhaps thousands of brain abilities; consider the enormous variety and complexity of human behavior. One might object that every single type of thought or behavior can't be understood; there are too many. And of course this is true. But scientists don't attempt to examine every single instance of experience, or of the rest of the physical world for that matter. Rather, they seek general principles and laws that explain what is known and predict what is not yet known. Specific instances of brain function, and of whatever else goes on in the universe, are individual cases of universal laws. Once the boiling point of water (at sea level) has been determined, and explained by physical laws, there is no need to check the temperature of the pot every time water starts to boil on the stove. The same logic holds for the brain and behavior.
But why should understanding music, the brain and behavior be of particular interest or receive special attention? Music behavior could well be an instance of other forms of brain function, for which explanations and understanding are far advanced, perhaps completed. Let's accept this view, and ask how we can decide if it's true. An obvious way is to determine if the findings for a known related behavior also apply to music behavior. If they don't, then music behavior provides a new contribution to understanding the human mind and demands answers of its own.
Let's take the case of reading. This topic has been intensively studied for many years. An important line of inquiry uses precise measurements of eye-movements during reading to determine the mental strategies used to read. (The tracking of eye movements is a valid way of determining the moment-to-moment location of attention, because adequate reading requires that the central axis of the eye be directed to the text, i.e., reading is poor with peripheral vision.(1)) In reading, very rapid, brief movements, called "saccades", are followed by longer periods of fixation, during which the actual perception of the text occurs. This sequence occurs several times per second and is done essentially unconsciously. Thus, analysis of eye movements can determine if people attend to individual letters, words or phrases. Eye movements also can be divided into forward, along the progressive line of text, and regressive, back toward a part of the line already covered. By noting the location, duration, direction and pattern of saccades and fixations, it is possible to obtain an objective indicator of mental strategies in reading language. For example, readers avoid fixating blank areas between words. Also, poor readers make many more regressive eye movements than do better readers.(2) In this case, comprehension would seem to depend on textual review.
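The forward/regressive distinction described above can be sketched numerically: given the horizontal positions of successive fixations along a line of text, each saccade is forward if it moves rightward and regressive if it jumps back toward text already covered. The data below is purely illustrative, not from any actual study.

```python
# A minimal sketch of tallying regressive saccades from a sequence of
# fixation positions along a line of text. Positions are illustrative.

def regression_rate(fixation_xs):
    """Fraction of saccades that move backward along the line."""
    saccades = [b - a for a, b in zip(fixation_xs, fixation_xs[1:])]
    regressions = sum(1 for s in saccades if s < 0)
    return regressions / len(saccades)

good_reader = [10, 45, 80, 120, 160, 200]        # steady forward progress
poor_reader = [10, 50, 30, 70, 55, 95, 80, 130]  # frequent look-backs

print(regression_rate(good_reader))  # -> 0.0
print(regression_rate(poor_reader))  # -> about 0.43 (3 of 7 saccades)
```

A measure like this is what lets experimenters compare readers objectively, and, as described next, it is what revealed the opposite pattern in music sight-reading.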
A related task in music is sight-reading, that is, reading an unknown score while performing the music.(3) T. W. Goolsby has pointed out that music educators have adopted language reading as the basis for instruction and evaluation of sight-reading. He questions whether this is appropriate.(4) If sight-reading music is simply another instance of language reading, then the same pattern of eye movements should occur, but if the pattern is different, then sight-reading can't be explained simply by the principles that operate in language reading.
In a study of vocalizing (e.g., humming) during sight-reading, Goolsby observed major differences between reading music and reading language. First, he found the opposite pattern of eye movements to language reading: i.e., poor sight-readers made fewer regressive eye movements while better performers had many more such movements.(5) Additionally, he reported a perceptual span that included the vertical as well as horizontal dimension (as might be demanded by attention to the staves of music), overall more eye movements in better sight-readers and less attention to details of the musical score than for language reading.(6) Regarding the last point, Goolsby found more fixations to blank spots in the score (less attention) despite the fact that music demands exact reproduction of the details whereas language reading requires obtaining only the meaning, not the physical reproduction, of the text. It seems that the mental strategy in music is to look ahead to determine where the score is "going" (obtaining the "larger picture"), making inferences about many of the details of the score (given a knowledge of e.g., harmony in Western tonal music), thus obtaining a sufficient framework within which to look back to notes that are just ahead of the notes being performed -- and repeating this complex process again. All of this occurs as often as five to six times per second! Therefore, reading music apparently is not an instance of reading text, but a process unto itself. Consequently, music seems to provide a unique window into the mind.
On the other hand, Rayner and Pollatsek report strong commonalities between sight-reading and reading language, and also typing. They have noted that sight-reading requires two conflicting processes. First, there is the visual encoding of the score; one would like this to be well ahead of performance so that the score, in all its nuances, can be understood before "translating" the information into motor acts. However, if the eyes get too far ahead, then an overload of information can result, interfering with good performance.(7) By carefully controlling the computer presentation of a novel score, synchronizing it with performance, these authors have determined the actual perceptual span of this "forward look". They note that reading aloud and typing also have similar-sized "looks", suggesting that the limiting factor in all three situations is the same: a limitation on the capacity of short term memory.
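The "forward look" can be made concrete with a small sketch: at each moment the eyes fixate some note in the score while the hands perform an earlier note, and the span is the distance between them. The note indices below are invented for illustration; they are not data from Rayner and Pollatsek's experiments.

```python
# A sketch of the "eye-hand span" idea: the distance (in notes) between
# the note currently fixated and the note currently being performed.
# All numbers are illustrative, not experimental data.

def eye_hand_spans(fixated, performed):
    """Span between fixation and performance at each sampled moment."""
    return [f - p for f, p in zip(fixated, performed)]

fixated_note = [4, 5, 7, 8, 10, 11]   # where the eyes are in the score
performed_note = [1, 2, 3, 4, 5, 6]   # where the hands are

spans = eye_hand_spans(fixated_note, performed_note)
print(spans)                    # -> [3, 3, 4, 4, 5, 5]
print(sum(spans) / len(spans))  # -> 4.0 notes on average
```

The tension described in the text is visible in this quantity: a larger span gives more time to interpret the score, but too large a span overloads short term memory.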
Thus, we have two apparently opposing conclusions about the uniqueness of sight-reading in music as an entry point into the mind. Rayner and Pollatsek emphasize commonalities between sight-reading and other perceptual-motor activities while Goolsby focuses on differences between reading music and language. Which is correct?
Probably both. The two views can be reconciled by realizing that the brain has general constraints; it can't do everything. So general limitations on short term memory, for example, force the same upper limitations on performance in sight-reading, reading language aloud and typing. In short, we can't transcend our brains. On the other hand, within the limitations of mental functioning, there is a great deal of flexibility, many ways to solve problems and achieve desired behaviors. Sight-reading makes demands that differ, perhaps fundamentally, from the other tasks. To be successful, mental processes and strategies must match the special demands. That the patterning of eye movements, which are a convenient "window to the mind", is unique to sight-reading forces us to enlarge the way we think about the visual encoding and understanding of symbols, and the resultant behavior. Thus, while having certain commonalities with other activities, sight-reading as a part of music seems to involve a unique combination of mental processes.
-- N. M. Weinberger
(1) Rayner, K. and Bertera, J.H. (1979). Reading without a fovea. Science, 206, 468-469.
(2) Rayner, K. and Pollatsek, A. (1989). The Psychology of Reading. Englewood Cliffs, N.J.: Prentice-Hall.
(3) The performance aspect of music sight-reading is used to assess how well the score was understood, as one can't simply give a standard test of comprehension as in text reading.
(4) Goolsby, T.W. (1994). Eye movement in music reading: effects of reading ability, notational complexity, and encounters. Music Perception, 12, 77-96.
(6) Goolsby, T.W. (1994). Profiles of processing: eye movements during sight-reading. Music Perception, 12, 97-123.
(7) Rayner, K. and Pollatsek, A. (1997). Eye movements, the eye-hand span, and the perceptual span during sight-reading of music. Curr. Direct. Psychol. Sci., 6, 49-53.
Musicians' Memory For Tones

Music seems to have many effects on cognitive processes and behavior (see several previous issues of MRN) but more basic processes often have been overlooked. One of these is the duration of memory for tones. Very recently, experimenters who were not studying music made a serendipitous finding which indicates significant differences between musicians and non-musicians. Beauvois and Meddis were investigating the way that people unconsciously group continuous streams of sounds ("Time Decay of Auditory Stream Biasing", Perception & Psychophysics, 1997, vol. 59, pages 81-86) in a general population of adults (19-30 years old). When they examined the results indicating the duration of short term memory for tones, the authors found a strange distribution of values, many short, many long; those with longer values were musicians. Non-musicians exhibited memory for tones of roughly 1.5 seconds, while musicians' memory lasted almost 8 seconds. The findings support the possibility that musical training increases memory span for at least some musical sounds. Focused follow-up studies are needed to clarify several issues, including the relationship between different types of sounds (musical vs. non-musical) and short term memory span.
Music Alters Children's Brainwaves
There is great interest
in the possibility that music produces reorganization of brain
function, and that such change could be detected by analysis
of the electroencephalogram (EEG, "brain waves"). Russian
investigators have provided the first evidence of these processes
in children. Writing in the journal Human Physiology (1996,
volume 22, pages 76-81), T. N. Malyarenko and his co-authors played
classical music one hour per day over six months to four year
old children in a preschool setting. A control group had no exposure
to music but simply the normal classroom sounds. The classical
music group had an increase in a part of the alpha rhythm frequency
band and greater similarities ("coherence") between
different regions of the cerebral cortex, most pronounced in the
frontal lobes. Greater coherence is thought by some workers to
indicate better "cooperation" among brain regions but
others view it as typical of increased relaxation. A particularly
noteworthy aspect of this report is that the EEG changes occurred
in a passive listening situation, in which the children were not
required to pay attention to the music. Whether the effects are
specific to a particular type of music remains to be studied.
Also needed are controls for mere exposure to novel sounds.
Godeli, M.R., Santana, P.R., Souza, V.H., and Marquetti, G.P. (1996). Influence of background music on preschoolers' behavior: a naturalistic approach. Perceptual and Motor Skills, 82:1123-9.
Summary: The purpose of this study was to determine the
effects of background music on the behavior of preschool children.
Twenty-seven preschoolers were observed during natural classroom
activities, either with background music present (folk or rock
and roll) or no music. Behaviors were categorized in terms of
social interaction, spatial location within the classroom and
posture. The presence of music favored child-to-child social interaction.
Gregory, A.H., Worrall, L., and Sarge, A. (1996). The development of emotional responses to music in young children. Motivation & Emotion, 20: 341-348.
Summary: This study examined the development of emotional responses
to music in children. The subjects were forty 3-4 year olds and
twenty-eight 7-8 year olds. They listened to eight tunes which
were either in the major or minor mode and were either unaccompanied
melody or harmonized. They selected 1 of 2 schematic faces chosen
to depict happy or sad facial expressions for each tune, to avoid
possible problems in verbal expression. Children aged 7-8 yrs
showed a significant major-happy and minor-sad connotation, which
is the same as in adults. However, 3-4 yr olds did not show any
such significant association between musical mode and emotional response.
Harmonic accompaniment significantly increased the frequency of
happy responses. The results suggest that at least some emotional
responses to music are learned and show that by the ages of 7-8
years, children have attained the adult level of emotional associations
to major-minor modes.
Music Perception, Cognition and Behavior
Money, J. (1997) Evolutionary sexology: the hypothesis of song and sex. Medical Hypotheses, 48: 399-402.
Summary: In non-human mammals sexual behavior is the same across
individuals and is more or less fixed, whereas in the human species
it is personalized and individualistic. This paper proposes the
"Evolutionary Sexological Hypothesis" that emancipation
of all fixed brain mechanisms for sexual behavior, including the
substrates for mating, was essential for and coincident with the
evolution of human speech from song. Accordingly, the first human
language was a love song rather than a howl of warning.
Hébert, S. and Peretz, I. (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory & Cognition, 25, 518-533.
Summary: This study tested the relative importance of melody and
rhythm in recognizing music. The subjects were adults who were
not musicians although some had musical experience. They listened
to the same set of musical excerpts from familiar and well-known
tunes that were modified in two ways: (a) melody (pitch structure)
the same but rhythm eliminated (intervals between notes all the
same); (b) melody eliminated (the same note repeated) but rhythm
retained. In general, rhythm itself was found to be a poor cue
for recalling music that is stored in long term memory. However,
the most effective cue for correct identification was the proper
combination of rhythm and melody, the latter being the more informative.
Hantz, E., Marvin, E.W., Krelick, K.G. and Chapman, R.M. (1996). Sex differences in memory for timbre: an event-related potential study. Int. J. Neuroscience, 87, 17-40.
Summary: Although female/male cognitive differences have been
studied for some time, little is known about such differences
in music. This study investigated the relationship between sex
and memory for musical timbre; additionally, event-related brain
potentials (ERPS) were recorded. The task was to listen to a set
of synthesized instrumental timbres and then determine which of
these was missing from a second set of stimuli. There were no
sex differences on the behavioral responses: males and females
performed equally well. However, their ERPs showed significant
differences. The findings suggest that sensitive measures of brain
function can reveal sex differences in the processing/recall of musical timbre.
Micheyl, C., Khalfa, S., Perrot, X. and Collet, L. (1997). Difference in cochlear efferent activity between musicians and non-musicians. Neuroreport, 8, 1047-1050.
Summary: The brain has a mechanism to reduce the loudness of sound by reducing neural responses within the cochlea of the inner ear, the location of auditory receptor hair cells. This "loudness adaptation" can be measured both behaviorally and by measuring the brain's actions ("efferent activity") from the ear itself. This experiment compared loudness adaptation in musicians and non-musicians. It found less reduction in the loudness of sounds in musicians, measured both behaviorally and by detecting brain efferent activity. Thus, musicians have a greater ability to maintain the perceived loudness of a sound, that is, to hear more closely the actual levels of sound. This capability would seem to be useful in more accurately tracking music. While it is likely that this ability is learned unconsciously, the authors also entertain the possibility that, if innate, it predisposes people to become musicians.
To improve readability, each selection includes a brief statement
of the findings. Also, instead of including published abstracts,
summaries have been written in less technical terms.
Music Therapy and Music Science: Past, Present and (?) Future
The following opinions about music are intended to provoke
thought and sometimes perhaps even argument, but ultimately to
energize and enlarge conceptions and inquiry about music.
Recently I had the opportunity to participate in the annual meeting of the National Association for Music Therapy (NAMT). Dr. Alicia Ann Clair, a leader in the field and Professor and Director of Music Therapy and Research at the University of Kansas, had organized a one day institute entitled "Music Therapy and the Brain: Cutting-Edge Applications". Speakers covered a wide range of topics, from basic neuroscience through applications of music therapy to specific patient populations. As it happens, this year's meeting was the last for NAMT. Starting in 1998, NAMT is merging with the American Association for Music Therapy (AAMT) to form the American Music Therapy Association (AMTA). This should greatly benefit the profession. My role was to provide an overview of music research and to comment on the status of research in music therapy. Any such summary is bound to be greatly oversimplified, bordering on distortion. However, if it does no serious damage and at least encourages discussion, it is probably worthwhile. So I'll present some of my comments here.
Most areas of research suffer, at one or another time in their development, from certain situations. These include (a) insufficient quantity, which fails to reach the critical mass needed for substantive progress; (b) phenomenology, which enumerates individual examples of observations without relating them to each other by theory; (c) "one shot" studies, which are interesting but never followed up; (d) fragmentary or inadequate research designs, which make it difficult to draw firm conclusions; (e) isolation from other disciplines that could provide mutual enrichment. As I said, one can find these problems throughout fields of research, so they are not limited to music therapy. But a couple of examples may prove interesting.
It is well known that music can alleviate pain and suffering. The advantages of inducing analgesia (not surgical level anesthesia) by the use of music vs. the use of drugs are many, including reduced side-effects and absence of possible drug interactions. However, a broad scan of the literature indicates many reports of failure of music to reduce pain. I would not hazard a guess as to the proportion of negative vs. positive reports but it is substantial. Every year there are additional reports on this topic. So findings are accumulating. Unfortunately, this enumeration of observations is itself insufficient to explain either the positive or negative results. Too little attention has been devoted to the development of testable theories. Aside from the lack of understanding of how music actually affects brain cells so as to produce analgesia (yes, endorphins, i.e., the brain's own "pain-killers" are probably involved), the lack of adequate theory makes it difficult for music therapists to predict the effects of a "music treatment" in a given situation. The need to go beyond observations toward explanations ought not to be controversial. Given increasing interest in alternative approaches to health care, now officially sanctioned by the Federal government, this is a particularly timely area in need of conceptualization and theory-driven research.
The second and last example concerns what I perceive to be the relative isolation of music therapy from allied disciplines. There are many success stories of interdisciplinary research and practice. For example, aging and dementia were once the domain of the medical specialty of geriatrics. After this field was broadened to include basic neuroscience, including anatomy, physiology, chemistry, pharmacology, molecular biology and more comprehensive behavioral assessments, it evolved into a major attack on Alzheimer's and related diseases. The field was also transformed from viewing dementia as an inevitable aspect of aging to seeing it as a disease that could be treated successfully some day.
Music therapy has natural alliances with the field of neuroscience,
particularly behavioral neuroscience, as well as various areas
of clinical medicine. As I sat in the audience at the NAMT meeting,
I noted two tangible signs of an increasing interest in neuroscience.
The first was the existence of the one-day institute on brain,
itself. The second was less obvious but no less important. I watched
the audience of dedicated music therapists taking notes. The
note-taking seemed greatest whenever some fairly basic aspect
of brain function was presented. This implies to me both a great
interest in and a great need for information and education in
the field which ultimately could form the bedrock for much of
music therapy. In my closing comments at the meeting, I had the
temerity to suggest that curricula in music therapy be modified
to include at least a year of solid education in basic and behavioral
neuroscience. This would strengthen training and enable therapists
to achieve a higher degree of partnership with behavioral and
neurological medical researchers and specialists. It could also
promote new ways of thinking about how and why music is such a powerful therapeutic force.
-- N. M. Weinberger