The Semiotics of Music

Jonathan L. Friedmann, Ph.D.

Comparisons between music and language hit a wall when the focus turns to meaning. Although both are innate modes of human expression which, in their vocalized forms, use the same mechanisms of respiration, phonation, resonance, and so on, they function differently. Whereas English speakers would agree about the meaning of a word like “chair,” there is no such consensus about the meaning of a chord or scale. Outside of song, which is basically a form of stylized speech, meaning in music tends to be subjective. As a result, some scholars have taken to limiting—or even dismissing—the possibility of shared musical meaning. However, when we look beyond direct comparisons with language, we see distinct cultural meaning assigned to all sorts of things, ranging from music and food to gestures and facial expressions. “Chair” might not have a musical equivalent, but meaning is discerned in other ways.

An appeal to semiotics, the science of signs, seems most appropriate when evaluating musical meaning. Especially helpful is C. S. Peirce’s formulation of three types of signs: symbols, indexes, and icons.

Of the three, symbols are the least instructive. Language is a system of symbols, wherein each word or phrase has a definite and consistent meaning, albeit often contextually defined. Words are a shortcut for something else; the word “angry” represents an emotional state, but the word itself is not that emotional state. Language is essential for describing and analyzing music, but as ethnomusicologist Thomas Turino explains, such symbols “fall short in the realm of feeling and experience.” Symbols are secondary or after-the-fact, and may distract from the intimacy and immediacy of the musical experience.

Musical signs are more fruitfully viewed as indexes: signs that point to objects or ideas they represent. This applies mainly to music associated with a particular concept or occasion. For example, a national anthem performed at a sporting event becomes an index of patriotism, while a Christmas song heard while shopping becomes an index of the season. Through a combination of personal and shared experiences, these pieces—with or without their lyrics—serve as repositories of cultural meaning. On a smaller scale, music can serve as an index of romantic relationships or peer group affiliations.

Musical icons resemble or imitate the things they represent. These can include naturalistic sounds, such as thunder played on kettledrums, or mental states conveyed through musical conventions, such as ascending lines signaling ascent or exuberance. Icons tend to be culturally specific, such that listeners in a music-culture develop shared understandings, even as individuals add idiosyncratic layers to those understandings.

Precision, directness, and consistency are the lofty goals of language, but they are not the only routes to meaning. Musical meaning relies on non-linguistic sign systems, chiefly indexes and icons. While these may not be as stable or specific as language, they communicate shared meaning just the same.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

The Limits of Transmission

Jonathan L. Friedmann, Ph.D.

Since at least the Romantic period, musicians and theorists have argued that musically expressed emotions cannot be fully or adequately conveyed in words or rational concepts. Instead, music is understood as a mode of communication that bypasses ordinary language and speaks directly to the ineffable realm of the “inner life.” This emotional conveyance is typically regarded as both cultural and highly personal: conventions within a music-culture determine the generalized impressions of musical qualities, such as mode, pitch range, and tempo, but specific interactions between those qualities and the listener are not predetermined. A wide and highly variable range of factors, as unique as the listener herself, fundamentally shapes the experience.

Deryck Cooke’s influential treatise, The Language of Music (1959), proposes a more systematic approach. Through an examination of hundreds of examples of Common Practice tonality (Western tonal music since 1400), Cooke developed a lexicon of musical phrases, patterns, and rhythms linked to specific emotional meanings. In his analysis, recurrent devices are used to effect more or less identical emotional arousals, thus yielding a predictable, idiomatic language.

This theory, while helpful in identifying and organizing norms of Western music, has been criticized for omitting the role of syntax. There might be a standard musical vocabulary, but without rules for arranging constituent elements into “sentences,” there can be no consistent or independent meanings. For even the most overused idiom, the performance and listening contexts ultimately determine the actual response.

This observation casts doubt on another of Cooke’s central claims. If, as Cooke argued, musical elements constitute a precise emotional vocabulary, then a composer can use those elements to arouse his or her own emotions in the listener. This is achievable in emotive writing, such as a heartfelt poem or autobiographical account, which uses the syntactic and semantic structures of language to reference ideas, images, and experiences. However, because music lacks these linguistic features, direct emotional transmission is hardly a sure thing.

Philosopher Malcolm Budd adds an aesthetic argument to this criticism. By locating the value of a musical experience in the reception of the composer’s emotions, the piece loses its own aesthetic interest; it becomes a tool for transmitting information, rather than an opening for individually shaped emotional-aesthetic involvement. According to Budd, Cooke’s thesis, which he dubs “expression-transmission theory,” misrepresents the motivation for listening: “It implies that there is an experience which a musical work produces in the listener but which in principle he could undergo even if he were unfamiliar with the work, just as the composer is supposed to have undergone the experience he wishes to communicate before he constructs the musical vehicle which is intended to transmit it to others; and the value of the music, if it is an effective instrument, is determined by the value of this experience. But there is no such experience.”

The enduring appeal of musical language is its multivalence. Idiomatic figures may be commonplace in tonal music, but their appearance and reappearance in different pieces does not carry definite or monolithic information, whether from the composer or the vocabulary employed.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Whistled Speech

Jonathan L. Friedmann, Ph.D.

The line between speaking and singing is often blurred. In the Hebrew Bible, poetry and song are both called shir, suggesting that poetry was performed in speech-song. A similar simultaneity of song and poetry is present in human cultures across time and geography. Part of this owes to the shared mechanism of sound production: the human voice. It is commonly observed that infants “sing” before they speak. Expressive speech has qualities homologous with Sprechstimme. Intense emotions are vocalized in shouts and groans verging on the musical. Even ordinary verbal communication lends itself to musical notation.

This points to a basic principle: Where there is speech, there is song. William A. Aikin touched on this in his article on singing in Grove’s Dictionary of Music and Musicians (1939): “It is part of our natural condition to possess organs for the production of sound, and perceptions to make them musical, and, being thus equipped, it is but natural that the art of music should be intimately associated with human life.” Because the impulse to communicate manifests in both speech and song, there is a natural spillover: speech tends toward song and song is shaped by speech.

Every language uses intonational variation to mark emphasis, contrast, and emotional color. There are also many tonal languages, which utilize contrasting tones—rises and falls in pitch—to distinguish words and their grammatical functions. Roughly seventy percent of languages are tonal, accounting for about a third of the world’s human population. They are most prevalent in Central America, Africa, and East Asia. Mandarin Chinese, for instance, has four distinct tones: flat, rising, falling then rising, and falling.
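
The mechanism is easy to see in miniature. Below is a small illustrative sketch (in Python, using the standard Chao five-level tone numerals, where 1 is the lowest pitch and 5 the highest) of the textbook Mandarin example: the syllable “ma” changes its dictionary meaning with nothing but its pitch contour.

```python
# How pitch contour alone distinguishes words in Mandarin, using the
# classic "ma" example and Chao's five-level tone numerals
# (1 = lowest pitch, 5 = highest).

MA_TONES = {
    "tone 1 (flat)":                {"contour": (5, 5),    "gloss": "mother"},
    "tone 2 (rising)":              {"contour": (3, 5),    "gloss": "hemp"},
    "tone 3 (falling then rising)": {"contour": (2, 1, 4), "gloss": "horse"},
    "tone 4 (falling)":             {"contour": (5, 1),    "gloss": "to scold"},
}

for tone, info in MA_TONES.items():
    trace = " -> ".join(str(level) for level in info["contour"])
    print(f'"ma" with {tone}: pitch {trace} means "{info["gloss"]}"')
```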

A few tonal languages take speech-song a step further. They feature a whistling counterpart, or a whistled mode of speech. These melodic dialects are based on the spoken language: words are simplified and represented, syllable-by-syllable, contour-by-contour, through whistled tunes. Such communication is typically a musical-linguistic adaptation to mountainous or heavily forested areas where daily work is performed in relative isolation. The whistles carry over great distances and can be heard over environmental noises. The practice is found in remote towns and villages in various parts of the globe, including Turkey, France, Mexico, Nepal, New Guinea, and the Canary Islands.
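
A whistled register strips away nearly everything except that contour. The toy continuation below shows the idea; the frequency band is an invented assumption for illustration, not field data from any whistled language.

```python
# Toy rendering of a tone contour as whistle pitches. The five-level
# contour notation matches the sketch above; the frequency values are
# invented for illustration, not measurements of a real whistled language.

WHISTLE_HZ = {1: 1000, 2: 1400, 3: 1800, 4: 2400, 5: 3000}  # assumed band

def whistle(contour):
    """Map each level of a tone contour to a whistle frequency (Hz)."""
    return [WHISTLE_HZ[level] for level in contour]

print(whistle((2, 1, 4)))   # a dipping tone: falls, then rises
# -> [1400, 1000, 2400]
```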

The instinctive and effective translation of spoken words into whistled melodies highlights the bond between speech and song. There is a modicum of musicality in English and other non-tonal languages. Tonal languages display more explicit musical aspects. Whistled languages make music the audible center. Yet, for all their diversity, the world’s languages differ in their closeness to song by degree more than by kind.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Musical Ambiguity

Jonathan L. Friedmann, Ph.D.

The dictionary defines its subject matter, words, as distinct meaningful elements of writing or speech. This could imply that a single word—isolated from a linguistic or real-life setting—maintains a rigid meaning. However, all but the most technical dictionary terms show that, while a word may exist as a “distinct meaningful element,” precisely what that meaning is depends on how, when, and where the word is used. The further removed it is from a relationship with other words, the less confidently it possesses monosemy, or a single basic meaning. The Oxford English Dictionary abundantly demonstrates this point, providing 464 definitions for “set,” 396 for “run,” 368 for “take,” 343 for “stand,” and so on.

The presence of two or more possible meanings within a single word, known as lexical ambiguity or homonymy, is a natural and widespread aspect of language. Perhaps the most instructive (and amusing) examples are auto-antonyms: words that contain opposite meanings. “Custom,” for instance, means both standard and one-of-a-kind. “Cleaving” means both clinging and splitting apart. “Sanction” means both permit and punish. Related to these are words whose meanings have changed over time, like “awful,” which used to mean awe-inspiring, and “resentment,” which used to mean gratitude. Merriam-Webster recently authorized the colloquial (mis)use of “literally” by listing “figuratively” among its possible meanings (much to the chagrin of grammar-snobs).

All of this points to what linguist Alan Cruse calls the “contextual variability of word meaning.” Words in cooperation with their surroundings receive a particular meaning at a particular time. This phenomenon is even more pronounced in music.

A single note sounded in isolation has virtually no signification. It can have an abundance of qualities—pitch, color, dynamic, vibrato (or lack thereof), etc.—but these are too neutral to impart a meaning. Whereas a multivalent term like “set” has intrinsic possibilities in the hundreds, the potential meaning of a single note is almost entirely extrinsic. It is a tabula rasa awaiting the impress of simultaneous pitches (harmony) and/or a succession of pitches (melody).
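
A short sketch makes the contrast concrete. In the toy function below (an illustration, not a claim about any particular theory of harmony), the same pitch class, C, receives a different scale-degree “meaning” depending entirely on the major key supplied around it.

```python
# The "meaning" of a single note is extrinsic: the same pitch (C) is
# assigned a different scale degree, and hence a different harmonic role,
# depending on the key context around it.

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]          # semitones above the tonic
DEGREE_NAMES = ["tonic", "supertonic", "mediant",
                "subdominant", "dominant", "submediant", "leading tone"]

def degree_of(note: str, key: str) -> str:
    """Name the scale degree of `note` in the given major key, or report
    that it falls outside the key's diatonic scale."""
    interval = (PITCH_CLASSES.index(note) - PITCH_CLASSES.index(key)) % 12
    if interval in MAJOR_SCALE_STEPS:
        return DEGREE_NAMES[MAJOR_SCALE_STEPS.index(interval)]
    return "chromatic (outside the scale)"

for key in ["C", "F", "G#", "D"]:
    print(f"C in {key} major: {degree_of('C', key)}")
```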

To some extent, this puts language and music in alignment. Both words and notes receive meaning from the rules of usage. In different types of sentences, words are used differently and carry different senses. In different types of musical phrases, notes are used differently and give different impressions. Both instances require a level of fluency to detect the intended syntactical meaning. Yet, while this tends to shape words into a clear and generally understood message, musical communication retains a certain vagueness. This is not just because music affects people in varying ways, even within a fluency group—something that can also occur with language. What is key is that music, unlike language, has no concrete or factual reference point. “Bank” takes on a direct meaning from its context; a musical note does not. True, music’s abstractness can be restrained by sonic and social contexts; but its implications remain variable.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Musical Dialects

Jonathan L. Friedmann, Ph.D.

Charles Darwin received a package in 1858 from Herbert Spencer, a philosopher and evolutionary theorist whose reputation rivaled that of Darwin himself. Spencer’s gift was a collection of essays on wide-ranging topics, including “The Origin and Function of Music.” Darwin wrote Spencer a letter of gratitude, noting, “Your article on Music has also interested me much, for I had often thought on the subject and had come to nearly the same conclusion with you, though unable to support the notion in any detail.” The idea proposed was that music developed from the rhythm and pitch contours of emotional speech.

As the years went by, Darwin remained “unable to support” this intuitive hypothesis, and eventually flipped the scenario. Rather than putting speech before music, he proposed that biological urges gave rise to musical sounds, which then developed into speech. Specifically, he situated music’s origins in courtship displays, when our ancestors, like “animals of all kinds [were] excited not only by love, but by the strong passions of jealousy, rivalry, and triumph.” The cries that sprang forth, presumably akin to animal mating calls, were the precursors of language. Darwin’s theory had the benefit of rooting music (and subsequently language) in an adaptive process: “[I]t appears probable that the progenitors of man, either the males or females or both sexes, before acquiring the power of expressing their mutual love in articulate language, endeavored to charm each other with musical notes and rhythm.”

The issue is far from conclusively decided. Contemporary theorists are split between Spencerians, who view music as an outgrowth of language, and Darwinians, who view language as a byproduct of music. This chicken-or-the-egg debate is likely to remain unsettled, in part because of the absence of the proverbial time machine, and in part because music and language are so inextricably intertwined.

However music and language came about, it is clear that they mirror one another. Both Spencer and Darwin based their theories on evidence of musical characteristics in expressive speech. Similarly, those who study global musics often find the syntactic and tonal patterns of regional dialects reflected in the phrasings, cadences, inflections, and intonations of regional songs. Indeed, distinct language forms help explain the variability of timbral, modal, and structural preferences from place to place. The folk melodies of Algeria and Zambia may not have much in common, but each is tied to speech patterns used in those countries.

A good illustration of the speech-song convergence is Steve Reich’s three-movement piece, Different Trains (1988). The melodic content of each movement derives from interviews recorded in the United States and Europe. Looped spoken phrases, drawn from recollections about the years leading up to, during, and immediately after the Second World War, are paralleled and developed by a string quartet—an effect that simultaneously highlights and enhances the musicality of the spoken words.

Yet, none of this tells us which came first in the history of our species. Music and language have existed side by side for eons. Musical norms have affected speech organization, just as speech organization has affected musical norms. In the end, the question of evolutionary sequence is less important than the very indispensability and interdependence of music and language.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Speech as Byproduct

Jonathan L. Friedmann, Ph.D.

Human beings like to celebrate the uniqueness of our species. Of all the terrestrial creatures, we are the ones who have built civilizations, developed science and technology, invented philosophy and sport, produced art and medicine. Exactly how we attained the presumed mantle of superiority is not as clear. The usual list of explanations leads us to similarities with other animals, rather than to exclusively human traits.

For instance, our “big brains” have roughly the same brain-to-body mass ratio as mice, and are outsized by dolphins and some small birds. Tool use is present among various animals, including primates, elephants, ants, wasps, certain birds, and some octopuses. We share an opposable thumb with koalas, opossums, several primates, certain frogs, and a few dinosaurs. We are not the only animals to walk on two legs—just look at any bird. Even the gene largely responsible for language (FOXP2) is found in other species, like chimpanzees and songbirds, albeit in different variants.

It could be that what makes us human is not one of these traits, but all of them in combination. For example, the anatomical emergence of the opposable thumb facilitated tool culture, and large brains enabled the development of seemingly endless devices, including written language. Indeed, many scientists contend that language advancement—built from the convergence of other human characteristics—is what makes us unique.

Dr. Charles Limb recently challenged this conventional view. Limb, an otolaryngological surgeon and saxophonist, was intrigued by the musical conversations that take place between improvising jazz players. Using a functional MRI (fMRI) scanner, he and a team of researchers mapped the “jazz brain.” First, they instructed a musician to play a memorized piece of music. Next, they asked him to improvise with another musician, who played in another room. Their findings show that collaborative improvisation stimulates robust activity in brain areas traditionally linked with spoken language. Moreover, it appears that the uninhibitedness and spontaneity of improvisation are closer to a dream state than to self-conscious conversation.

As a mode of communication, music is more complex and intuitive than the comparatively straightforward systems of verbal and written language. For jazz improvisers, the back-and-forth is both plainly understood and impossible to put into words. The fact that the brain can process this acoustic information, which is far more complicated than speech, suggests that musical capacity—not language—is the distinctive human-identifying trait.

To be sure, “musicality” is also present in songbirds, whales and a handful of other animals. But the complexity of music perception in humans is so advanced that modern science cannot fully comprehend it. Limb says it best: “If the brain evolved for the purpose of speech, it’s odd that it evolved to a capacity way beyond speech. So a brain that evolved to handle musical communication—there has to be a relationship between the two. I have reason to suspect that the auditory brain may have been designed to hear music and speech is a happy byproduct.”

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Song to Speech

Jonathan L. Friedmann, Ph.D.

The acquisition of language in human infants usually begins with song. Mothers and other caregivers address infants in a singsong version of the native tongue, known variously as infant-directed speech, musical speech, and motherese. Pitch contours are exaggerated, phrasings are overemphasized, and stress patterns are overstated. Sounds are repeated, vocal pitch is high, vowels are hyperarticulated, tones range widely, and tempo is relaxed. More than the vocabulary itself, these extra-linguistic qualities set the foundation for language development.

The central ingredients of infant-directed speech, pitch and rhythmic structure, are also the essential elements of song. It is thus no coincidence that the singing of lullabies and playsongs is a human universal. Such songs are a natural outgrowth or twin sibling of motherese, and, like musical speech, their impact is more emotive than linguistic. Long before the child understands the meaning of words, she detects and imitates these vocal patterns of expression. Singing comes before speech.

These observations are familiar to anyone with child-rearing experience. They are about as revelatory as a step-by-step description of diaper changing. However, new research suggests that the connection between song and speech development runs deeper than previously intuited.

A massive study involving over a hundred international researchers, nine supercomputers, and the genomes of forty-eight species of birds recently culminated in the publication of twenty-eight articles. Among the findings are genetic signatures in the brains of songbirds that correspond to the genetics of human speech.

Humans and songbirds undergo a similar progression from “baby talk” to complex vocalizations, and both learn vocal content from their elders. This is something shared with only a few other species (“vocal learners,” like dolphins, sea lions, bats, and elephants), and makes us unique among the primates (the grunts of old and young chimps sound basically identical). What the new research shows is that humans and songbirds share fifty-five genes in the vocal-learning regions of the brain. Thus, even as the ability to vocalize developed independently in these species, it has similar molecular underpinnings.

Scientists hope to use this data to better understand and treat human speech disorders. (People cannot be subjected to the same experiments as birds.) There are also implications in the realm of music. Ethnomusicologists often claim that music is as important to humans as speech—a view drawn from the cross-cultural use of musical sounds in asserting individual and collective identity, conveying and retaining information, expressing and receiving emotional signals, and a host of other functions. “We need music to be human” is the discipline’s unofficial slogan. The fact that a child is first exposed to musical speech and first takes to musical babbling supports the notion of music as a human fundamental. New discoveries connecting bird songs and human speech could bolster that position. On a genetic level, it seems, singing and speaking are essentially variants of the same thing.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Terrestrial Sounds

Jonathan L. Friedmann, Ph.D.

On September 5, 1977, NASA sent a probe to study the outer Solar System and continue on to interstellar space. Named Voyager 1, the sixteen-hundred-pound craft is now approximately twelve billion miles from Earth. An identical spacecraft, Voyager 2, was launched about two weeks before its interstellar twin, but Voyager 1 moved faster and eventually passed it. Both probes carry a golden phonograph record containing sounds and images meant to convey the diversity of terrestrial life and human culture. The hope is that, should intelligent extraterrestrials find one of these infinitesimal records in infinite space, they would be able to decipher its contents.

The record includes 116 images and an array of earthly sounds: greetings in fifty-five languages, volcanoes, a chimpanzee, a heartbeat, a train, Morse code, a wild dog, a mother and child, rain, and much more. It also has ninety minutes of music, ranging from a Pygmy girl’s initiation song to Indonesian gamelan music to the first movement of Bach’s Brandenburg Concerto No. 2 to the “Sacrificial Dance” from Stravinsky’s Rite of Spring.

The possibility of an extraterrestrial species obtaining, playing, and comprehending the Golden Record is minuscule. Not only is it a tiny object moving in the vastness of space, but the sounds it includes are utterly earthbound. In striving to portray sundry soundscapes, the record reveals a certain, if subtle, unity: every sound on this planet bears the imprint of this planet. Such earthliness would surely fall on deaf alien ears (if they even have an auditory mechanism). The sounds we make or perceive have an evolutionary history unique to our orb.

In the decades since the Voyager probes were set in motion, much has come to light about the natural origins of music. Bernie Krause’s groundbreaking work on non-human “musical” proclivities suggests, among other things, the millennia-spanning influence of geophony (Earth sounds) and biophony (non-human animal sounds) on anthrophony (human sounds). Other theories of music’s origins point to environmental imprints in one way or another. A rough amalgamation of these nuanced hypotheses shows music as a combination of the imitation of nature and the exploration of human capacities.

Added to this is mounting evidence of the interconnectedness of Earth’s living creatures. As Neil Shubin explains in his popular book, Your Inner Fish, the close examination of fossils, embryos, genes, and anatomical structures indicates that all animals, prehistoric and modern, are variations of the same blueprint—hence the fish within us all. (Shubin remarked in a lecture that he could have just as easily called the book, Your Inner Fly.) What this means musically is that creaturely sounds of all sorts emanate from the same extended biological family, and are thus shaped by variations of the same constraints. The reason why researchers have been able to explore musical vocabularies of songbirds and bugs, and their probable influence on early humans, is because, despite surface dissimilarities, animals are people too (or, more accurately, humans are animals).

Any extraterrestrial species that happens upon the Golden Record would almost certainly be nothing like us. Life on Earth shares an anatomical makeup that could only have developed here; other habitable planets would have other ingredients. This is a major criticism of popular depictions of aliens, which, aside from The Blob (1958) and a few others, almost invariably appear as insects, reptiles, humanoids, or a combination of the three. Genes on another planet would give rise to species beyond our Earth-born imaginations. And our sounds—musical, linguistic, animal, or otherwise—would be unlike anything they’ve ever heard.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Art Everywhere

Jonathan L. Friedmann, Ph.D.

Some assert that it is a fallacy to compare cultural elements cross-culturally. Sometimes called the “incommensurability thesis,” this position posits that because objects, concepts, and behaviors tend to have very specific meanings for the groups that produce them, they must be utterly unique; variety negates universality. Basically a version of cultural relativism, this attitude emanates from three circles (or, rather, from minorities within three circles): philosophers who attack the notion of commonalities in human experience; critics who over-emphasize outlier phenomena in order to challenge conventional assumptions; and ethnographers who argue for the absolute uniqueness of the populations they study, in part to elevate their own stature as privileged experts. Yet the fact that human activities take heterogeneous forms does not eliminate the possibility of shared motivations.

Steven Pinker argues this point as it relates to the human capacity for language. He concludes in The Language Instinct: “Knowing about the ubiquity of complex language across individuals and cultures and the single mental design underlying them all, no speech seems foreign to me, even if I cannot understand a word.” This observation seems indisputable: language is a biological characteristic of the human species.

Philosopher of art Denis Dutton expands on Pinker’s claim in The Art Instinct. He asks: “Is it also true that, even though we might not receive a pleasurable, or even immediately intelligible, experience from art of other cultures, still, beneath the vast surface variety, all human beings have essentially the same art?” Dutton contends that, like language, artistic behaviors have spontaneously appeared throughout recorded human history. Almost always, observers across cultures recognize these behaviors as artistic, and there is enough commonality between them that they can be placed within tidy categories: painting, jewelry, dance, sculpture, music, drama, architecture, etc. To Dutton, this suggests that the arts, again like language, possess a general omnipresent structure beneath the varied grammar and vocabulary.

It should be noted that Pinker himself has elsewhere challenged this assumption. Most famously, he dubbed music “auditory cheesecake”: a non-adaptive by-product (of language, pattern recognition, emotional calls, etc.) that serves no fundamental role in human evolution. It is not my intention here to place that hypothesis under a microscope or to investigate the many arguments against it. (Perhaps, being a linguist, Pinker sees language as a sort of holy ground that mustn’t be stepped on by “lesser” human activities.) Wherever the evolutionary debates travel and whatever clues or counter-clues they accumulate, one thing seems clear: art appears rooted in universal human psychology.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

Sound in Wax

Jonathan L. Friedmann, Ph.D.

The earliest wax cylinder phonographs—the first commercial medium for recording and reproducing sound—were entirely mechanical. They were hand-cranked and needed no electrical power. All that was required was a lathe, a waxy surface, a sharp point for a stylus, and a resonating table. To impress sound waves onto wax, the voice or instrument was positioned close to the large end of a horn. The vibrations moved a needle, which carved a groove on the rotating wax. According to Walter Murch, an acclaimed film editor and sound designer, everything used in these early machines was available to the ancient Greeks and Egyptians. But it took until the late nineteenth century, and the genius of Thomas Edison and his team, to execute the recording process.
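
Reduced to essentials, that mechanical chain can be modeled in a few lines. The toy sketch below illustrates the principle only, not Edison’s actual design; the sample rate, frequency, and cut depth are arbitrary assumptions.

```python
# A toy model of acoustic recording: air pressure vibrates a diaphragm,
# the diaphragm drives a stylus, and the stylus cuts a groove of varying
# depth into the rotating wax. All constants are assumed, not historical.

import math

SAMPLE_RATE = 8000        # simulated points per second along the groove
FREQ_HZ = 440.0           # the tone sung or played into the horn
MAX_DEPTH_MM = 0.05       # assumed maximum excursion of the stylus

def groove_depth(t: float) -> float:
    """Cut depth at time t: normalized pressure mapped to [0, MAX_DEPTH_MM]."""
    pressure = math.sin(2 * math.pi * FREQ_HZ * t)    # normalized to [-1, 1]
    return MAX_DEPTH_MM * (pressure + 1) / 2

# "Carve" the first millisecond of the tone into wax.
groove = [groove_depth(n / SAMPLE_RATE) for n in range(SAMPLE_RATE // 1000)]
print([round(d, 3) for d in groove])
```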

Why did it take so long to capture sound? Musician David Byrne has informally speculated that maybe it didn’t. Perhaps someone in antiquity invented a similar device and later abandoned it; or perhaps the device itself was simply lost in the ruins of history. While conceivable on a technological level, this hypothesis is unlikely considering the prevailing ethos of the ancient world. The ephemerality of sound was part of its attraction: it was momentary, mysterious, transient and transcendent. As this fleetingness was highly valued, there was little or no inclination to record. Murch puts it this way: “Poetically, the beauty of music and the human voice was used as a symbol of all that’s evanescent. So the idea that you could trap it in any physical medium never occurred to [them] . . .”

This contrasts with the rush to develop written systems that enshrined language. The ancients recognized that certain things should be documented: governmental records, priestly decrees, royal chronicles, philosophical treatises, and the like. What these had in common was a silent beginning: they were soundless thoughts committed to paper (or papyrus or parchment or tablets or wood). Writing gave concrete form to facts and concepts that, while often referencing observable phenomena, had no tangibility of their own. Sound, in contrast, was understood as already fully formed. It was received sensually, experienced kinesthetically, and processed emotionally. It existed in the moment it was made.

It is worth noting that Edison first thought of wax cylinder recorders as dictation machines. They were to record their owners’ ideas and messages and, ideally, preserve the great speeches of the day. This limited purpose reflected the limitations of the early devices: they were too crude and imprecise to capture the nuances of musical performance. True, music recording and playback were in Edison’s long-term plan, and they became major functions as the machines advanced. But a case can be made that Edison’s initial goal of preserving dictation was—and arguably still is—worthier and more practical than detaining music.

Musicians commonly lament that they are slaves to their own recordings. The version that appears on an album is the version that fans want to hear, and deviations are typically received as imperfections, inaccuracies or unwanted departures from the “authoritative” source. Some improvising musicians even feel obliged to give their audiences note-for-note reproductions of recorded solos. This is not to negate the enormous benefits and incalculable cultural impact of musical recordings. Our understanding of music as a diverse human enterprise owes mightily to the proliferation of recorded sounds, and musical creativity thrives when there is access to other music. But something of music’s temporality is lost in recording. Imprinting sound in wax or digital audio creates the illusion of permanence.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.