
Literature as Music

Jonathan L. Friedmann, Ph.D.

Aspects of music can be spatially represented through notation and recording, which freeze moments in time. But, as an experiential medium that relies on performance and audition, music reveals itself in the present tense. This temporal quality is thought to distinguish music not only from spatial arts, such as illustration, sculpture, jewelry, and ceramics, but also from written language, which cements ideas and oral expression into fixed letters. However, this characterization has its limits.

Author Anthony Burgess restricts the framing of words as concrete objects to informational writing. Scientific texts, legal documents, historical records, and other types of non-fiction primarily appeal to reason rather than imagination. They are written for study, reference, and comparison to other writings in the field. Their words are artifacts to be mulled over, digested, quoted, and critiqued. By contrast, Burgess sees literature as a “twin of music,” which, like music, occurs in real time, transcends physical space, and manifests in the imagination.

Burgess’s interest in the link between music and literature stems from his biography. Best known for his 1962 novel A Clockwork Orange, featuring a deranged gang leader obsessed with Beethoven’s Ninth Symphony, Burgess was also a composer of some 150 works, most of which have been lost. He wished the public would view him as a musician who writes novels, rather than a novelist who composes music on the side. Yet, in his memoir, This Man & Music, Burgess concedes: “I have practiced all my life the arts of literary and musical composition—the latter chiefly as an amateur, since economic need has forced me to spend most of my time producing fiction and literary journalism.”

Burgess’s fiction brims with musical content, from characters who are musicians or music lovers, to writing styles that consciously borrow from sonata form, symphonic form, and the like. Stressing literature’s performative essence, Burgess complains: “We have come to regard the text as the great visual reality because we confuse letters as art with letters as information.” While non-fiction works might be understood as monuments of human thought, literature is a lived experience akin to traveling through a piece of music.

This discussion has more to say about literature than it does about music. Like E. T. A. Hoffmann, another composer who made his living in words, Burgess idealized creative writing as an art approaching music. Central to his argument is the conception of time as the canvas upon which both art forms take shape, and imagination as the invisible realm where their meaning is made.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

 

The Semiotics of Music

Jonathan L. Friedmann, Ph.D.

Comparisons between music and language hit a wall when the focus turns to meaning. Although both are innate modes of human expression which, in their vocalized forms, use the same mechanisms of respiration, phonation, resonance, and so on, they function differently. Whereas English speakers would agree about the meaning of a word like “chair,” there is no such consensus about the meaning of a chord or scale. Outside of song, which is basically a form of stylized speech, meaning in music tends to be subjective. As a result, some scholars have taken to limiting—or even dismissing—the possibility of shared musical meaning. However, when we look beyond direct comparisons with language, we see distinct cultural meaning assigned to all sorts of things, ranging from music and food to gestures and facial expressions. “Chair” might not have a musical equivalent, but meaning is discerned in other ways.

An appeal to semiotics, the science of signs, seems most appropriate when evaluating musical meaning. Especially helpful is C. S. Peirce’s formulation of three types of signs: symbols, indexes, and icons.

Of the three, symbols are the least instructive. Language is a system of symbols, wherein each word or phrase has a definite and consistent meaning, albeit often contextually defined. Words are a shortcut for something else; the word “angry” represents an emotional state, but the word itself is not that emotional state. Language is essential for describing and analyzing music, but as ethnomusicologist Thomas Turino explains, such symbols “fall short in the realm of feeling and experience.” Symbols are secondary or after-the-fact, and may distract from the intimacy and immediacy of the musical experience.

Musical signs are more fruitfully viewed as indexes: signs that point to objects or ideas they represent. This applies mainly to music associated with a particular concept or occasion. For example, a national anthem performed at a sporting event becomes an index of patriotism, while a Christmas song heard while shopping becomes an index of the season. Through a combination of personal and shared experiences, these pieces—with or without their lyrics—serve as repositories of cultural meaning. On a smaller scale, music can serve as an index of romantic relationships or peer group affiliations.

Musical icons resemble or imitate the things they represent. These can include naturalistic sounds, such as thunder played on kettledrums, or mental states conveyed through musical conventions, such as ascending lines signaling ascent or exuberance. Icons tend to be culturally specific, such that listeners in a music-culture develop shared understandings, even as individuals add idiosyncratic layers to those understandings.

Precision, directness, and consistency are the lofty goals of language, but these are not the only ways meaning is conveyed. Musical meaning relies on non-linguistic sign types, such as indexes and icons. While these may not be as steady or specific as language, they communicate shared meaning just the same.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

The Limits of Transmission

Jonathan L. Friedmann, Ph.D.

Since at least the Romantic period, musicians and theorists have argued that musically expressed emotions cannot be fully or adequately conveyed in words or rational concepts. Instead, music is understood as a mode of communication that bypasses ordinary language and speaks directly to the ineffable realm of the “inner life.” This emotional conveyance is typically regarded as both cultural and highly personal: conventions within a music-culture determine the generalized impressions of musical qualities, such as mode, pitch range, and tempo, but specific interactions between those qualities and the listener are not predetermined. A wide and highly variable range of factors, as unique as the listener herself, fundamentally shapes the experience.

Deryck Cooke’s influential treatise, The Language of Music (1959), proposes a more systematic approach. Through an examination of hundreds of examples of Common Practice tonality (Western tonal music since 1400), Cooke developed a lexicon of musical phrases, patterns, and rhythms linked to specific emotional meanings. In his analysis, recurrent devices are used to effect more or less identical emotional arousals, thus yielding a predictable, idiomatic language.
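
To make Cooke’s assumption concrete, a context-free musical lexicon would amount to a simple lookup table. The sketch below is a toy illustration in Python; the scale-degree figures and emotional glosses are simplified paraphrases in the spirit of Cooke’s examples, not quotations from his book.

```python
# A toy model of the "lexicon" idea in The Language of Music: recurring
# tonal figures mapped to fixed emotional meanings, as if music were a
# predictable idiomatic language. Figures and glosses are illustrative
# simplifications, not Cooke's actual tables.
COOKE_STYLE_LEXICON = {
    ("1", "3", "5"):  "outgoing joy (ascending major figure)",
    ("5", "3", "1"):  "incoming joy (descending major figure)",
    ("1", "b3", "5"): "tragic assertion (ascending minor figure)",
}

def emotional_meaning(figure):
    """Look up a figure as if its meaning were fixed and context-free --
    precisely the assumption the syntax criticism rejects."""
    return COOKE_STYLE_LEXICON.get(tuple(figure), "no entry")

print(emotional_meaning(["1", "3", "5"]))   # -> outgoing joy (...)
print(emotional_meaning(["5", "b6", "5"]))  # -> no entry
```

The table has entries but no grammar for combining them, which is exactly the gap identified in the criticism that follows.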

This theory, while helpful in identifying and organizing norms of Western music, has been criticized for omitting the role of syntax. There might be a standard musical vocabulary, but without rules for arranging constituent elements into “sentences,” there can be no consistent or independent meanings. Even for the most overused idiom, the performance and listening contexts ultimately determine the actual response.

This observation casts doubt on another of Cooke’s central claims. If, as Cooke argued, musical elements comprise a precise emotional vocabulary, then a composer can use those elements to excite his or her own emotions in the listener. This is achievable in emotive writing, such as a heartfelt poem or autobiographical account, which uses the syntactic and semantic structures of language to reference ideas, images, and experiences. However, because music lacks these linguistic features, direct emotional transmission is hardly a sure thing.

Philosopher Malcolm Budd adds an aesthetic argument to this criticism. When the value of a musical experience is located in the reception of the composer’s emotions, the piece loses its own aesthetic interest; it becomes a tool for transmitting information, rather than an opening for individually shaped emotional-aesthetic involvement. According to Budd, Cooke’s thesis, which he dubs “expression-transmission theory,” misrepresents the motivation for listening: “It implies that there is an experience which a musical work produces in the listener but which in principle he could undergo even if he were unfamiliar with the work, just as the composer is supposed to have undergone the experience he wishes to communicate before he constructs the musical vehicle which is intended to transmit it to others; and the value of the music, if it is an effective instrument, is determined by the value of this experience. But there is no such experience.”

The enduring appeal of musical language is its multivalence. Idiomatic figures may be commonplace in tonal music, but their appearance and reappearance in different pieces does not carry definite or monolithic information, whether from the composer or the vocabulary employed.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Listener as Context

Jonathan L. Friedmann, Ph.D.

Reading and writing were not generally accessible until Gutenberg unveiled the printing press around 1440. Fewer than six centuries have passed since then—a blip in the 200,000-year existence of anatomically modern Homo sapiens. When written languages emerged in antiquity, they were the province of elites. In Iron Age Israel (c. 1200-500 BCE), for example, roughly one percent of the population was literate, and most of them were merely “functionally literate”: they knew just enough to manage daily living and employment tasks. The complex poetry and prose in the Hebrew Bible were unintelligible to all but the most privileged classes. Only in the last twenty generations has “literacy for all” become a human possibility.

The rise of literate societies introduced new ways of sharing and digesting information. With texts in hand, people could spend time interpreting, pondering, analyzing, comparing, re-reading, and questioning. Philosophers and storytellers could externalize, revise, and catalogue their thoughts. Authors and readers could communicate without interacting face-to-face. Ideas and information could be technical and logically argued.

For all of its benefits, literacy could not capture or replicate the intimacy of orality. Whereas oral cultures foster immediacy and social connections, written communication tends to be impersonal and removed. Oral traditions are experiential and spontaneous, while written forms are passive and fixed. Spoken words are colored by mannerisms and inflections; written words are static and comparatively emotionless. There are exceptions: love letters and poems can approach the vividness of an interpersonal exchange. But, as a rule, writing lacks presence.

Fortunately, no society is (or really can be) exclusively literate. We cannot evolve beyond the need or propensity for oral expression, which is encoded in our genes. Speaking and listening are innate; writing and reading are add-on abilities. Thus, as print-saturated as our society is, it still rests on an oral foundation.

Among other things, this has ensured the persistence of the original meaning-making context: the individual. The listener’s role is crucial in an oral culture. Without ears to hear, information cannot be received or spread. As noted, this mode of communication is far more immersive and immediate than the written word. Interpretation is likewise instantaneous: meaning is extracted from the largely unconscious workings of memory, conditioning, feelings, education, experience, and the like. There is no need to pore over a detached text. Meaning manifests inside the person.

This is amply demonstrated in musical listening. As an auditory medium, music cannot be understood—or even really exist—without listening. Hints of music can be written in notation or other visual symbols, but these are, ultimately, abstractions. Words are written in letters, objects are photographed, images are drawn, but music evades visualization. It requires the type of information exchange characteristic of oral societies.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Whistled Speech

Jonathan L. Friedmann, Ph.D.

The line between speaking and singing is often blurred. In the Hebrew Bible, poetry and song are both called shir, suggesting that poetry was performed in speech-song. A similar simultaneity of song and poetry is present in human cultures across time and geography. Part of this owes to the shared mechanism of sound production: the human voice. It is commonly observed that infants “sing” before they speak. Expressive speech has qualities homologous with Sprechstimme. Intense emotions are vocalized in shouts and groans verging on the musical. Even ordinary verbal communication lends itself to musical notation.

This points to a basic principle: Where there is speech, there is song. William A. Aikin touched on this in his article on singing in Grove’s Dictionary of Music and Musicians (1939): “It is part of our natural condition to possess organs for the production of sound, and perceptions to make them musical, and, being thus equipped, it is but natural that the art of music should be intimately associated with human life.” Because the impulse to communicate manifests in both speech and song, there is a natural spillover: speech tends toward song and song is shaped by speech.

Intonation variation is used in every language to mark emphases, differences, and emotional color. There are also many tonal languages, which utilize contrasting tones—rises and falls in pitch—to distinguish words and their grammatical functions. Roughly seventy percent of languages are tonal, accounting for about a third of the world’s human population. They are most prevalent in Central America, Africa, and East Asia. Mandarin Chinese, for instance, has four distinct tones: high level, rising, falling then rising, and falling.
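
The textbook illustration is the syllable “ma,” which becomes a different word under each of those four tones. A minimal sketch, using the conventional glosses:

```python
# One Mandarin syllable, four tones, four unrelated words: the standard
# illustration of lexical tone.
MA = {
    "mā (tone 1, high level)":     "mother",
    "má (tone 2, rising)":         "hemp",
    "mǎ (tone 3, falling-rising)": "horse",
    "mà (tone 4, falling)":        "to scold",
}

for syllable, gloss in MA.items():
    print(f"{syllable} -> {gloss}")
```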

A few languages, tonal and otherwise, take speech-song a step further. They feature a whistling counterpart, or a whistled mode of speech. These melodic dialects are based on the spoken language: words are simplified and represented, syllable-by-syllable, contour-by-contour, through whistled tunes. Such communication is typically a musical-linguistic adaptation to mountainous or heavily forested areas where daily work is performed in relative isolation. The whistles carry over great distances and can be heard over environmental noises. The practice is found in remote towns and villages in various parts of the globe, including Turkey, France, Mexico, Nepal, New Guinea, and the Canary Islands.

The instinctive and effective translation of spoken words into whistled melodies highlights the bond between speech and song. There is a modicum of musicality in English and other non-tonal languages. Tonal languages display more explicit musical aspects. Whistled languages make music the audible center. Yet, for all their diversity, the world’s languages differ in their relationship to song by degree more than by kind.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Music Is As Music Does

Jonathan L. Friedmann, Ph.D.

Part of the difficulty of defining “music” is the implicit notion that music is a thing. Music is erroneously conceived as a sort of organism that can be taxonomically defined by a set of fixed morphological properties. Not only is there a multiplicity of divergent elements that can constitute music, especially when viewed cross-culturally, but those elements also need to be in motion in order for music to exist. Unlike static objects, such as a statue, painting, or table, music becomes music through active relationships.

This dynamic quality is encoded in the term “composition,” perhaps music’s closest equivalent to a concrete thing. Composition means “putting together.” It is a noun that is really a verb. The composer (“one who puts together”) combines notes, beats, rests, articulations, and other audible components. The musicians (“ones who make music”) put these components into active relationship.

This process is most obvious in improvisation—spontaneous composition—where the acts of composing and music-making occur in the same moment. But it is also evident in the most meticulously written scores. As Sartre and others have observed, the printed page cannot be called music until and unless it is translated into sound. Thus, even music publishing, the best attempt at musical reification (“thing-making”), cannot force music into noun status.

The foregoing discussion is not limited to music. Other actions are often misconstrued as things, thereby obscuring their active essence. This is true of such lofty concepts as love, hate, good, and evil. Such terms are as convenient as they are misleading. They are, fundamentally, abstract nouns applied to a dynamic amalgam of sentiments expressed through action: loving, hating, doing good, and doing evil. Love is not a tangible or definite thing; evil does not exist as a concrete entity. Like the elements of music, they are multi-layered chains of events that unfold in real-time and in the context of relationships.

These observations could be expanded to include all of life, which itself is a deceptive term for the active process of living. From a certain point of view, everything is a verb. But philosophical maneuverings are not needed to appreciate music in this way. To paraphrase Forrest Gump, “Music is as music does.”

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Musical Meanings

Jonathan L. Friedmann, Ph.D.

Theories about meaning in music are divided into two main camps: absolutist and referentialist. Absolutists hold that meaning is autonomously generated by the music. Responses stirred are secondary and independent of the music itself, which can only express musicality. Referentialists, on the other hand, contend that music is a shorthand for concepts, actions, images, and mood states. Music legitimately refers to things outside of itself. Whether the truth lies at either pole or resides somewhere in between, this debate usually grants a pass to song. By virtue of incorporating the comparatively straightforward symbolism of language, even the most obscure song is thought to have clearer signification than music without words.

Words substantially relieve music of the burden of generating meaning. They instantaneously imbue sound with an essence, which can change as quickly as the words are switched out for others. Still, it would be a mistake to think that lyrics are the ultimate decider of a song’s meaning. For every song that gives a more or less uniform impression, there are at least as many that leave room for interpretation. This is not only true for lyrics featuring ambiguity or metaphor; even lucid songs can be multivalent.

This is partly because songs typically originate from a personal place. The songwriter writes about experiences and sentiments tied to specific people, settings, moments, and so on. Listeners tend to personalize these themes and make them their own, with all the subjectivity that implies. Another complicating factor is association. The meaning of a song can be formed and re-formed depending on when, where, and with whom it is heard. This is exemplified in the “our song” phenomenon, when strong connections create a sense of ownership, and the “recycled song” phenomenon, when a tune begins on the radio, makes its way into a movie, becomes a wedding song, gets used in a commercial, etc. New meanings accumulate with each new usage.

There are also listeners who pay little attention to song lyrics, whether because the themes fail to resonate, the words are in an unfamiliar language, or something else in the performance holds their attention. This nullifies any clarity the words may have provided.

In the end, vagueness is a unifying aspect of music with and without words. Lyrics can mitigate uncertainty, but the fuzziness of musical meaning remains.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Musical Ambiguity

Jonathan L. Friedmann, Ph.D.

The dictionary defines its subject matter, words, as distinct meaningful elements of writing or speech. This could imply that a single word—isolated from a linguistic or real-life setting—maintains a rigid meaning. However, all but the most technical dictionary terms show that, while a word may exist as a “distinct meaningful element,” precisely what that meaning is depends on how, when, and where the word is used. The further removed it is from a relationship with other words, the less confidently it possesses monosemy, or a single basic meaning. The Oxford English Dictionary abundantly demonstrates this point, providing 464 definitions for “set,” 396 for “run,” 368 for “take,” 343 for “stand,” and so on.

The presence of two or more possible meanings within a single word, known as lexical ambiguity or homonymy, is a natural and widespread aspect of language. Perhaps the most instructive (and amusing) examples are auto-antonyms: words that contain opposite meanings. “Custom,” for instance, means both standard and one-of-a-kind. “Cleaving” means both clinging and splitting apart. “Sanction” means both permit and punish. Related to these are words whose meanings have changed over time, like “awful,” which used to mean awe-inspiring, and “resentment,” which used to mean gratitude. Merriam-Webster recently authorized the colloquial (mis)use of “literally” by listing “figuratively” among its possible meanings (much to the chagrin of grammar snobs).

All of this points to what linguist Alan Cruse calls the “contextual variability of word meaning.” Words in cooperation with their surroundings receive a particular meaning at a particular time. This phenomenon is even more pronounced in music.

A single note sounded in isolation has virtually no signification. It can have an abundance of qualities—pitch, color, dynamic, vibrato (or lack thereof), etc.—but these are too neutral to impart a meaning. Whereas a multivalent term like “set” has intrinsic possibilities in the hundreds, the potential meaning of a single note is almost entirely extrinsic. It is a tabula rasa awaiting the impress of simultaneous pitches (harmony) and/or a succession of pitches (melody).
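
The point can be seen in miniature by placing the same note in two different triads. The sketch below uses the music21 toolkit, one possible choice among many:

```python
# The same pitch, E, read through two harmonic contexts: a miniature of
# the "tabula rasa" point. Uses the music21 library (one possible choice).
from music21 import chord

for notes in (["C4", "E4", "G4"], ["C#4", "E4", "G#4"]):
    c = chord.Chord(notes)
    # E is the major third of C major, but the minor third of C-sharp minor.
    print(notes, "->", c.commonName)  # 'major triad', then 'minor triad'
```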

To some extent, this puts language and music in alignment. Both words and notes receive meaning from the rules of usage. In different types of sentences, words are used differently and carry different senses. In different types of musical phrases, notes are used differently and give different impressions. Both instances require a level of fluency to detect the intended syntactical meaning. Yet, while this tends to shape words into a clear and generally understood message, musical communication retains a certain vagueness. This is not just because music affects people in varying ways, even within a fluency group—something that can also occur with language. What is key is that music, unlike language, has no concrete or factual reference point. “Bank” takes on a direct meaning from its context; a musical note does not. True, music’s abstractness can be restrained by sonic and social contexts; but its implications remain variable.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Musical Dialects

Jonathan L. Friedmann, Ph.D.

Charles Darwin received a package in 1858 from Herbert Spencer, a philosopher and evolutionary theorist whose reputation rivaled that of Darwin himself. Spencer’s gift was a collection of essays on wide-ranging topics, including “The Origin and Function of Music.” Darwin wrote Spencer a letter of gratitude, noting, “Your article on Music has also interested me much, for I had often thought on the subject and had come to nearly the same conclusion with you, though unable to support the notion in any detail.” The idea proposed was that music developed from the rhythm and pitch contours of emotional speech.

As the years went by, Darwin remained “unable to support” this intuitive hypothesis, and eventually flipped the scenario. Rather than putting speech before music, he proposed that biological urges gave rise to musical sounds, which then developed into speech. Specifically, he situated music’s origins in courtship displays, when our ancestors, like “animals of all kinds [were] excited not only by love, but by the strong passions of jealousy, rivalry, and triumph.” The cries that sprang forth, presumably akin to animal mating calls, were the precursors of language. Darwin’s theory had the benefit of rooting music (and subsequently language) in an adaptive process: “[I]t appears probable that the progenitors of man, either the males or females or both sexes, before acquiring the power of expressing their mutual love in articulate language, endeavored to charm each other with musical notes and rhythm.”

The issue is far from conclusively decided. Contemporary theorists are split between Spencerians, who view music as an outgrowth of language, and Darwinians, who view language as a byproduct of music. This chicken-or-the-egg debate is likely to remain unsettled, in part because of the absence of the proverbial time machine, and in part because music and language are so inextricably intertwined.

However music and language came about, it is clear that they mirror one another. Both Spencer and Darwin based their theories on evidence of musical characteristics in expressive speech. Similarly, those who study global musics often find the syntactic and tonal patterns of regional dialects reflected in the phrasings, cadences, inflections, and intonations of regional songs. Indeed, distinct language forms help explain the variability of timbral, modal, and structural preferences from place to place. The folk melodies of Algeria and Zambia may not have much in common, but each is tied to speech patterns used in those countries.

A good illustration of the speech-song convergence is Steve Reich’s three-movement piece, Different Trains (1988). The melodic content of each movement derives from interviews recorded in the United States and Europe. Looped spoken phrases, drawn from recollections about the years leading up to, during, and immediately after the Second World War, are paralleled and developed by a string quartet—an effect that simultaneously highlights and enhances the musicality of the spoken words.
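
As a rough sketch of the underlying idea (not Reich’s actual working method), a pitch tracker can recover such a contour from any recording of speech. The file name below is a hypothetical placeholder, and the example assumes the librosa audio library:

```python
# A minimal sketch of deriving a melodic contour from recorded speech, in
# the spirit of Different Trains (not Reich's actual method). The file
# "interview.wav" is a hypothetical placeholder; requires librosa.
import librosa

y, sr = librosa.load("interview.wav")
f0, voiced, _ = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),  # generous low bound for speech
    fmax=librosa.note_to_hz("C6"),  # generous high bound
    sr=sr,
)
# Name the pitch of each voiced frame: an approximate note-by-note contour
# that instruments could shadow, as the string quartet does in the piece.
contour = [librosa.hz_to_note(f) for f, v in zip(f0, voiced) if v]
print(contour[:20])
```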

Yet, none of this tells us which came first in the history of our species. Music and language have existed side by side for eons. Musical norms have affected speech organization, just as speech organization has affected musical norms. In the end, the question of evolutionary sequence is less important than the very indispensability and interdependence of music and language.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Speech as Byproduct

Jonathan L. Friedmann, Ph.D.

Human beings like to celebrate the uniqueness of our species. Of all the terrestrial creatures, we are the ones who have built civilizations, developed science and technology, invented philosophy and sport, produced art and medicine. Exactly how we attained the presumed mantle of superiority is not as clear. The usual list of explanations leads us to similarities with other animals, rather than to exclusively human traits.

For instance, our “big brains” have roughly the same brain-to-body mass ratio as those of mice, and are outsized by those of dolphins and some small birds. Tool use is present among various animals, including primates, elephants, ants, wasps, certain birds, and some octopuses. We share an opposable thumb with koalas, opossums, several primates, certain frogs, and a few dinosaurs. We are not the only animals to walk on two legs—just look at any bird. Even the gene largely responsible for language (FOXP2) is found in other species, like chimpanzees and songbirds, albeit in different variants.

It could be that what makes us human is not one of these traits, but all of them in combination. For example, the anatomical emergence of the opposable thumb facilitated tool culture, and large brains enabled the development of seemingly endless devices, including written language. Indeed, many scientists contend that language advancement—built from the convergence of other human characteristics—is what makes us unique.

Dr. Charles Limb recently challenged this conventional view. Limb, an otolaryngological surgeon and saxophonist, was intrigued by the musical conversations that take place between improvising jazz players. Using a functional MRI scanner, he and a team of researchers mapped the “jazz brain.” First, they instructed a musician to play a memorized piece of music. Next, they asked him to improvise with another musician, who played in another room. Their findings show that collaborative improvisation stimulates robust activity in brain areas traditionally linked with spoken language. Moreover, it appears that the uninhibitedness and spontaneity of improvisation is closer to a dream state than to self-conscious conversation.

As a mode of communication, music is more complex and intuitive than the comparatively straightforward systems of verbal and written language. For jazz improvisers, the back-and-forth is both plainly understood and impossible to put into words. The fact that the brain can process this acoustic information, which is far more complicated than speech, suggests that musical capacity—not language—is the distinctive human-identifying trait.

To be sure, “musicality” is also present in songbirds, whales and a handful of other animals. But the complexity of music perception in humans is so advanced that modern science cannot fully comprehend it. Limb says it best: “If the brain evolved for the purpose of speech, it’s odd that it evolved to a capacity way beyond speech. So a brain that evolved to handle musical communication—there has to be a relationship between the two. I have reason to suspect that the auditory brain may have been designed to hear music and speech is a happy byproduct.”

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.