Volume 15, Number 5, October 2009
Copyright © 2009 Society for Music Theory
Daniel Shanahan

Review of Aniruddh D. Patel, Music, Language, and the Brain (New York and Oxford: Oxford University Press, 2008)
KEYWORDS: cognition, music psychology, linguistics
Received October 2009
[1] As the story goes, music theorist Fred Lerdahl and linguist Ray Jackendoff wrote most of their classic A Generative Theory of Tonal Music (Lerdahl and Jackendoff 1983) over Jackendoff’s kitchen table in Boston while discussing the Leonard Bernstein lectures, which they had both recently attended and which would later become The Unanswered Question (Bernstein 1976).(1) This is plausible, given that Bernstein’s lectures postulated a connection between music and Chomsky’s transformational grammar. Lerdahl and Jackendoff’s book is one of the first interdisciplinary studies of music and linguistics, and it has been highly influential in the emerging field of music cognition. Unfortunately, interdisciplinary studies with such a thorough understanding of both subjects are rare. Psychologists often deal with music analysis on a somewhat superficial level, while music theorists are equally guilty of sciolistic explorations into psychology. A notable exception to these tendencies is the 2008 winner of the ASCAP Deems Taylor Award, Music, Language, and the Brain, by Aniruddh D. Patel, the Esther J. Burnham Fellow at the Neurosciences Institute, San Diego. The book successfully provides an in-depth background to recent studies in music cognition, linguistics, and neuropsychology in an attempt to explore the intertwining processes of musical and linguistic perception.
[2] Patel begins by explaining that while music and language differ in a number of ways, both are particulate systems: hierarchies built from small, discrete elements that can be combined to create larger structures. He notes that this attribute is not unique to human music or language, as the mating calls of male humpback whales and many types of birdsong contain similar hierarchies. For Patel, this confirms a sense that there are commonalities in the processing of music and language. He begins his discussion of these commonalities with a consideration of sound elements, which can be described musically in terms of pitch and timbre, and linguistically in terms of phonetic and phonemic structures. (The role of pitch contrasts in the two domains reflects the limits of these commonalities: in music such contrasts are of enormous importance, but they are generally much less meaningful in language.) He points out that although scale systems differ greatly depending on cultural and historical conditions, they share some common properties. First, scales mostly have between five and seven tones to the octave, even in cultures with microtonal systems, such as India. Patel observes that, “importantly, this limit is not predicted by human frequency discrimination, which is capable of distinguishing many more tones per octave” (Patel 2008, 19). He also notes that the range of a scale’s constituent intervals is relatively narrow, usually between one and three semitones, and that most scales are made up of at least two differently sized intervals. Although investigations of scale structure have taken place for centuries, Patel has thoroughly researched the issue from a cross-cultural and multidisciplinary perspective, presenting an old topic in a refreshing and informative manner.
[3] In his discussion of rhythm, Patel observes that while both language and music can be understood in terms of the hierarchical structures mentioned earlier, speech lacks the “temporal periodicity which is widespread in musical rhythm” (177). He examines the nature of periodicity at length, yet asserts that it is not a necessity in a composition, since “the mind is capable of organizing temporal patterns without reference to a beat” (98). It follows that beat, or pulse, is not a necessary attribute of all music. While he considers this idea briefly in relation to the Chinese ch’in (or qin) repertoire, a more detailed treatment of aperiodic temporality would have been an interesting and valuable addition to the chapter. Patel’s discussion of the perception of rhythm inevitably leads to the work of Lerdahl and Jackendoff, but it noticeably omits many theoretical works that have explored psychological aspects of meter, such as those of Christopher Hasty (1997), Jonathan Kramer (1988), and Wallace Berry (1987). Patel’s exploration of language differentiation and its effect on musical phrasing and rhythm is of particular interest and yields fascinating results. Drawing on the work of Low, Grabe, and Nolan (2000), who measure the contrast in duration between successive speech elements, Patel compares themes from works by composers whose music is widely viewed as having a strong national character. The resulting data support the unsurprising observation that French speech, which is less “stress-timed” than English, shows less durational contrast between successive syllables; similarly, the main theme from the first movement of Debussy’s String Quartet shows less durational contrast than the corresponding theme from Elgar’s First Symphony. This is interesting because while many composers (such as Janáček) intentionally drew on native speech patterns in framing their melodic ideas, this study suggests that such a correlation may also arise unconsciously.
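The measure Patel draws on here is, as I understand it, the normalized Pairwise Variability Index (nPVI) of Low, Grabe, and Nolan (2000). For a sequence of m durations d1, ..., dm (in practice, vowel durations in speech or note durations in a theme), the index can be sketched roughly as

\[
\mathrm{nPVI} \;=\; \frac{100}{m-1} \sum_{k=1}^{m-1} \left| \frac{d_k - d_{k+1}}{\left(d_k + d_{k+1}\right)/2} \right|
\]

Higher values indicate greater contrast between successive durations, as in stress-timed English; lower values indicate more even durations, as in French.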
[4] Patel engages with much of the major scholarship pertaining to melodic perception, including a well-researched discussion of the empirical testing (carried out by Krumhansl 1991 and Schellenberg et al. 2002, among others) of Narmour’s “implication-realization” theory (Narmour 1990). The chapter’s strength lies in Patel’s ability to support and substantiate a large number of theoretical concepts with empirical data. The author compares intonation in speech and melody using “autosegmental-metrical” (or AM) theory, which holds that intonation in speech can be analyzed as a sequence of discrete pitch events. The study of intonation is quite dependent on music as a reference point, an interesting reversal of music theory’s occasional reliance on linguistic models. For example, in 1779 Joshua Steele published An Essay Toward Establishing the Melody and Measure of Speech to be Expressed and Perpetuated by Peculiar Symbols, and, as Patel notes, he worked by ear, transcribing speech intonations by mimicking them on a bass viol (Patel 2008, 211). Even current scholarship focusing on speech intonation (such as Ladd 1996) shows strong interest in how pitch perception influences linguistic studies of intonation. Patel’s chapter quite successfully merges two separate fields that typically study similar ideas.
[5] Patel begins the chapter on syntax by discussing the cognitive bases for perceiving chord structure and harmonic movement, as well as studies of key relationships (such as Krumhansl, Bharucha, and Kessler 1982). Patel considers the spatial representation of keys in terms of perceptual experiments by Krumhansl and Kessler (1982) and Cuddy, Cohen, and Mewhort (1981), rather than from the theoretical lineage of Riemann, Schoenberg, and, more recently, Lerdahl. He then turns to the obvious connection between hierarchical structures in linguistic syntax and the prolongational methodology of Lerdahl and Jackendoff, leading to a discussion of syntactic processing in the brain in terms of both language and music. Patel’s fluency with the subject, coupled with his review of an array of fascinating experiments, arguably makes this chapter one of the book’s strongest contributions to the field. Here the author explains Edward Gibson’s dependency locality theory (Gibson 1998), which argues that the difficulty of syntactic comprehension depends on the distance between interdependent structural elements. For example, the phrase “The man ate the pie” is obviously more easily comprehended than “The man who was on a diet ate the pie,” which in turn is clearer than “The man, who was on a diet which specifically forbids pie-eating, ate the pie.” This theory might easily be translated to an analysis of musical syntax, where it would offer cognitive and empirical grounding for a reductionist methodology. Patel’s examination of syntactic perception in patients with Broca’s aphasia is of particular interest for what it implies about interconnected cognitive processing of musical and linguistic syntax. Participants in Patel’s study (most of them patients who had recently suffered strokes) had difficulty processing linguistic syntax and found it similarly difficult to process melodic and harmonic sequences. Patel demonstrates these connections with such fluency that the reader is left with a deeper understanding of syntactic comprehension in both fields.
[6] The chapter on meaning explores the way in which the mind deduces meaning from acoustic sound elements; Patel also weighs Hanslick’s idea of musical meaning as emanating from within the structure of a composition. He examines how extrinsic elements such as social, historical, and cultural factors influence a listener’s perception of meaning, introducing a number of studies that show cultural differences in such perception. His chapter on evolution provides a glimpse into what might be called biomusicology, while delving deeper into the anthropological concerns raised in Mithen’s The Singing Neanderthals (Mithen 2006).
[7] The strength of Patel’s book is its ability to provide in-depth studies of musical and linguistic cognition in a manner that illustrates both their interconnectedness and their differences. Any student of music theory would benefit from Patel’s explanations of the mental processing of music and its theoretical foundations; any student of linguistic theory would benefit from his neurological analogies. As the study of music cognition continues to grow, the present work will become an increasingly important resource. It is perhaps fitting, then, that at the “Music, Language, and the Mind” conference at Tufts University in July 2008, which celebrated the twenty-fifth anniversary of Lerdahl and Jackendoff’s publication, one of the works most frequently cited by scholars from a multitude of disciplines, including both of the honorees, was Patel’s Music, Language, and the Brain.
Daniel Shanahan
Trinity College
Dublin
Works Cited
Bernstein, Leonard. 1976. The Unanswered Question: Six Talks at Harvard. Cambridge, MA: Harvard University Press.
Berry, Wallace. 1987. Structural Functions in Music. New York: Dover Books.
Cuddy, Lola L., A.J. Cohen, and D.J.K. Mewhort. 1981. “Perception of Structure in Short Melodic Sequences.” Journal of Experimental Psychology: Human Perception and Performance 7: 869–883.
Gibson, Edward. 1998. “Linguistic Complexity: Locality of Syntactic Dependencies.” Cognition 68: 1–76.
Hasty, Christopher F. 1997. Meter as Rhythm. Oxford: Oxford University Press.
Kramer, Jonathan D. 1988. The Time of Music. New York: Schirmer Books.
Krumhansl, Carol, J.J. Bharucha, and E.J. Kessler. 1982. “Perceived Harmonic Structure of Chords in Three Related Musical Keys.” Journal of Experimental Psychology: Human Perception and Performance 8: 24–36.
Krumhansl, Carol, and E.J. Kessler. 1982. “Tracing the Dynamic Changes in Perceived Tonal Organization in a Spatial Representation of Musical Keys.” Psychological Review 89: 334–368.
Krumhansl, Carol. 1991. “Melodic Structure: Theoretical and Empirical Descriptions.” In Music, Language, Speech and Brain, ed. J. Sundberg, L. Nord, and R. Carlson, 269–283. London: Macmillan.
Ladd, Robert. 1996. Intonational Phonology. Cambridge, UK: Cambridge University Press.
Lerdahl, Fred, and Ray Jackendoff. 1983. A Generative Theory of Tonal Music. Cambridge, MA: MIT Press.
Low, Ee Ling, E. Grabe, and F. Nolan. 2000. “Quantitative Characterisations of Speech Rhythm: Syllable-timing in Singapore English.” Language and Speech 43: 377–401.
Mithen, Steven J. 2006. The Singing Neanderthals: The Origin of Music, Language, Mind, and Body. Cambridge, MA: Harvard University Press.
Narmour, Eugene. 1990. The Analysis and Cognition of Basic Melodic Structures. Chicago: University of Chicago Press.
Patel, Aniruddh D. 2008. Music, Language, and the Brain. Oxford: Oxford University Press.
Schellenberg, Glenn, M. Adachi, K.T. Purdy, and M.C. McKinnon. 2002. “Expectancy in Melody: Tests of Children and Adults.” Journal of Experimental Child Psychology 74: 107–127.
Footnotes
1. This story is recounted in several sources, including the 1996 reprint of the original book.
Copyright Statement
Copyright © 2009 by the Society for Music Theory. All rights reserved.
[1] Copyrights for individual items published in Music Theory Online (MTO) are held by their authors. Items appearing in MTO may be saved and stored in electronic or paper form, and may be shared among individuals for purposes of scholarly research or discussion, but may not be republished in any form, electronic or print, without prior, written permission from the author(s), and advance notification of the editors of MTO.
[2] Any redistributed form of items published in MTO must include the following information in a form appropriate to the medium in which the items are to appear:
This item appeared in Music Theory Online in [VOLUME #, ISSUE #] on [DAY/MONTH/YEAR]. It was authored by [FULL NAME, EMAIL ADDRESS], with whose written permission it is reprinted here.
[3] Libraries may archive issues of MTO in electronic or paper form for public access so long as each issue is stored in its entirety, and no access fee is charged. Exceptions to these requirements must be approved in writing by the editors of MTO, who will act in accordance with the decisions of the Society for Music Theory.
This document and all portions thereof are protected by U.S. and international copyright laws. Material contained herein may be copied and/or distributed for research purposes only.
Prepared by Brent Yorgason, Managing Editor
Updated 28 October 2009