
Music Everlasting

Jonathan L. Friedmann, Ph.D.

“There is no time like the present,” “once in a lifetime,” and other such clichés highlight an obvious truth: each moment is unrepeatable. At any point in time, we have the ability to do one thing and one thing alone. Nothing that we do, say, think, or feel can, in the strictest sense, be compared to any other. Regrets about missed opportunities are purely theoretical. Judgments and self-inventories can only be based on actual occurrences, not “what ifs.” Jean-Paul Sartre makes this point in his treatise Existentialism: “There is no genius other than one which is expressed in works of art; the genius of Proust is the sum of Proust’s work; the genius of Racine is his series of tragedies. Outside of that, there is nothing. Why say that Racine could have written another tragedy, when he didn’t write it? A man is involved in life, leaves his impress on it, and outside of that there is nothing.”

The deterministic worldview draws a similar conclusion. All facts in the physical universe—including human history—are inescapably dependent upon and conditioned by their causes. The choices we make, big and small, fit in a chain of cause and effect that yields a single outcome. Meteorologist Edward Lorenz imagined the classic example with his “butterfly effect,” wherein the distant flapping of butterfly wings influences a tornado several weeks later. The resulting chaos theory holds that the universe operates by unpredictable determinism: everything happens in an orderly pattern, but we cannot know with certainty how things will turn out until they actually happen.

Live music gives sonic expression to the unrelenting yet unpredictable uniqueness of each passing moment. In his erudite tome, A Composer’s World: Horizons and Limitations, Paul Hindemith muses on the individuality of each performance. Sound, he contends, is music’s least stable quality: “An individual piece of music, being many times reborn and going through ever renewed circles of resonant life, through repeated performances, dies as many deaths at the end of each of its phoenixlike resurrections: no stability here, but a stumbling progression from performance to performance.” Hindemith connects the frailty of sound to the fleetingness of life itself, suggesting that musical moments are just as unrepeatable as other moments. Like the passage of time, each performance is one of a kind, and each iteration evaporates as soon as it occurs.

The impression of permanence is stronger in recorded music. Listening to recordings is, of course, subject to the same forces as live performances: sounds come and go in accordance with time’s progression. The crucial difference is that the same performance can be heard again, creating a sort of conditional eternality. Rather than living, dying, and resurrecting with each performance, recorded music exists in a perpetual present tense.

This semblance of stability is wholly at variance with life’s ephemeral, deterministic trajectory. Recordings allow us to simulate everlasting moments; life pushes ahead but the music remains the same. This psychological gratification, rooted in a desire to obtain the unobtainable, accounts in part for our attraction to recorded music.


The Limits of Transmission

Jonathan L. Friedmann, Ph.D.

Since at least the Romantic period, musicians and theorists have argued that musically expressed emotions cannot be fully or adequately conveyed in words or rational concepts. Instead, music is understood as a mode of communication that bypasses ordinary language and speaks directly to the ineffable realm of the “inner life.” This emotional conveyance is typically regarded as both cultural and highly personal: conventions within a music-culture determine the generalized impressions of musical qualities, such as mode, pitch range, and tempo, but specific interactions between those qualities and the listener are not predetermined. A wide and highly variable range of factors, as unique as the listener herself, fundamentally shapes the experience.

Deryck Cooke’s influential treatise, The Language of Music (1959), proposes a more systematic approach. Through an examination of hundreds of examples of Common Practice tonality (Western tonal music since 1400), Cooke developed a lexicon of musical phrases, patterns, and rhythms linked to specific emotional meanings. In his analysis, recurrent devices are used to effect more or less identical emotional arousals, thus yielding a predictable, idiomatic language.

This theory, while helpful in identifying and organizing norms of Western music, has been criticized for omitting the role of syntax. There might be a standard musical vocabulary, but without rules for arranging constituent elements into “sentences,” there can be no consistent or independent meanings. For even the most over-used idiom, the performance and listening contexts ultimately determine the actual response.

This observation casts doubt on another of Cooke’s central claims. If, as Cooke argued, musical elements comprise a precise emotional vocabulary, then a composer can use those elements to excite his or her own emotions in the listener. This is achievable in emotive writing, such as a heartfelt poem or autobiographical account, which uses the syntactic and semantic structures of language to reference ideas, images, and experiences. However, because music lacks these linguistic features, direct emotional transmission is hardly a sure thing.

Philosopher Malcolm Budd adds an aesthetic argument to this criticism. By locating the value of a musical experience in the reception of the composer’s emotions, the piece loses its own aesthetic interest; it becomes a tool for transmitting information, rather than an opening for individually shaped emotional-aesthetic involvement. According to Budd, Cooke’s thesis, which he dubs “expression-transmission theory,” misrepresents the motivation for listening: “It implies that there is an experience which a musical work produces in the listener but which in principle he could undergo even if he were unfamiliar with the work, just as the composer is supposed to have undergone the experience he wishes to communicate before he constructs the musical vehicle which is intended to transmit it to others; and the value of the music, if it is an effective instrument, is determined by the value of this experience. But there is no such experience.”

The enduring appeal of musical language is its multivalence. Idiomatic figures may be commonplace in tonal music, but their appearance and reappearance in different pieces does not carry definite or monolithic information, whether from the composer or the vocabulary employed.


Sound as Object

Jonathan L. Friedmann, Ph.D.

After World War II, audio recordings gradually replaced sheet music as the dominant means of distributing music to consumers. As a result, the musical centerpiece of the family home moved from the piano to the hi-fi system, and from active music-making to audition and record collecting. The LP (the 33⅓ rpm, long-playing, microgroove vinyl disc), introduced by Columbia Records in 1948, revolutionized the music industry. Along with changing habits of consumption, records transformed basic perceptions about music. Fleeting sounds became fixed objects.

Recordings had been around since Thomas Edison’s mechanical phonograph cylinder, patented in 1878. Within two decades, commercial recording and distribution grew into an international industry, and popular titles at the beginning of the twentieth century sold millions of units. Gramophone records, which were easier to manufacture, ship, and store, hit the shelves around 1910, and subsequent advances in technology made audio recordings increasingly accessible. Still, sheet music—and the piano playing it depended on—remained king. The wholesale value of printed sheet music more than tripled between 1890 and 1909; in 1910, 25,000 songs were copyrighted in the U.S. and sheet music sales totaled 30 million copies. The popularity of printed music continued through the 1940s. An article in Variety on October 4, 1944, boasted “Sheet Music Biz at 15-Year Crest.”

Sales declined precipitously as the 1940s moved into the 1950s. The days when hit songs were fueled by a combination of sheet music and, secondarily, record sales gave way to our recording-dominated era. A Variety article from November 21, 1953 captured the turning point: “Publishing Industry Alarmed by Pop Sheet Music Decline.”

The current ubiquity of recordings is the culmination of a centuries-long effort to mechanically reproduce sound—an evolution that began with musical notation and continued with programmable devices (hydro-powered organs, musical clocks, music boxes, player pianos, and the like). However, these earlier inventions still depended on manual engagement or on instruments producing sound in real time. With recordings, sounds disembodied from their performance could be played back at any time. Music itself became the object.

Michel Chion details seven ways recording technology facilitated the objectification of music: (1) capturing ephemeral sound vibrations and converting them into a permanent medium; (2) facilitating telephony, or the retransmission of sounds at a distance from their original source; (3) enabling new ways of systematic acousmatization, or the ability to hear without seeing; (4) allowing sounds to be amplified and de-amplified through electronic manipulation, as opposed to the crescendo or decrescendo of live instruments; (5) affording phonofixation, or the fixing of sounds and reuse of fixed sounds in the recording studio; (6) paving the path toward phonogeneration, or the creation of sound “out of nothing” by way of synthesizers and computers; (7) giving engineers the ability to reshape sounds through editing, processing, and manipulation.

This last effect, in particular, contributes to Chion’s view of sounds converted into objects: “recording has been—above all from the moment that it enabled editing—the first means ever in history to treat sounds, fleeting things, as objects: that is to say, both in order to grasp them as objects of observation and in order to modify them—to act on their fixed traces.” Likewise, the listener’s control over recordings—through pausing, skipping forward, changing volume, using multiple devices, etc.—furthers the impression of music’s “thing-ness.”


Goal-Directed Movement

Jonathan L. Friedmann, Ph.D.

Music listening is an unfolding experience. Without prompting, the listener naturally follows the direction of a piece, traveling through its curves and contours in a linear progression toward completion. In both the Republic and Laws, Plato comments on the ability of this temporal movement to “charm” the inner life of the listener. Roger Scruton contends that the mind moves sympathetically with motion perceived in music, such that it is felt as physical motion. These and other observations address the goal-directed movement of music. The whole piece is not revealed at once or in an order or manner that the listener chooses. Musical developments, whether simple or complex, lead auditors from beginning to end.

In contrast to print communication, which can be read and reread at any pace the reader wishes, music imposes its own duration and agenda. In pre-recording days, this necessitated formalized repetitions and recapitulations to get certain messages across, hence the use of sonata form (exposition, development, recapitulation), the doubling schema of keyboard partitas (AA/BB), the verse/chorus form of folksongs (and later commercial songs), and so on. Michel Chion notes: “This enormous redundancy—which means that if we buy a recording of Bach’s English Suites that lasts an hour, we only get thirty minutes of ‘pure’ musical information—clearly has no equivalent in the visual arts of the period.” Audio recordings afford greater freedom in terms of playback and repeated listening, but each listening remains a temporal experience.

The situation is not sidestepped with printed notation. Although a score can be read and studied, similar to a book or article, the notes on a page are essentially illusory. The paper is not the music. Jean-Paul Sartre argued in L’Imaginaire, a treatise on imagination and the nature of human consciousness, that music is never located in the silent symbols of a musical score, however detailed. Using Beethoven’s Seventh Symphony as an example, Sartre explained that the inability of written notes to capture music is rooted in the nature of sound itself. Unlike something that is empirically real—defined by Sartre as having a past, present, and future—music evaporates as soon as it is heard. Each performance is basically a new creation, and, we might add, each exposure to a recording is a new experience, due to changes in the listener and her surroundings from one hearing to the next.

Time, not paper, is the fundamental surface upon which music is made. Music involves a linear succession of impulses converging toward an end. Whereas a painting or sculpture conveys completeness in space, music’s totality is gradually divulged, sweeping up the listener—and the listener’s inner life—in the process.


The Original Echo Chamber

Jonathan L. Friedmann, Ph.D.

“A temple is a landscape of the soul. When you walk into a cathedral, you move into a world of spiritual images. It is the mother womb of your spiritual life—mother church.” These words from mythologist Joseph Campbell touch on the primitive spatial and acoustic appeal of Medieval and Renaissance cathedrals. Campbell connects the sensation to that of pictograph-adorned Paleolithic caves, which were also likely used for mystical and spiritual ceremonies. The melodic conventions and vocal techniques adapted to these acoustically active stone-walled spaces—epitomized by the straight, drawn-out, and separated tones of Latin ecclesiastical chant—exploit the echo chamber effect, creating an all-encompassing sonic and physical experience. As I explain in an earlier blog post, these ethereal sounds became synonymous with the cosmic voice.

The impression of safety and repose these spaces provide is captured in Campbell’s phrase, “the mother womb.” This image can be taken a step further. The sonically induced, archaic feelings take us back to the literal womb: the original acoustic envelope where direct and indirect sounds are experienced as an undifferentiated gestalt. Psychoanalyst Didier Anzieu describes it as a “sonorous bath”: a lulling sense of weightlessness, rebirth, and being transported.

The ear awakens during the fourth month of fetal development. By week twenty-five, the cochlea—the ear’s frequency analyzer—reaches adult size. From that point forward, the fetus receives, processes, and responds to a growing array of amalgamated sounds, including pressure variations in the bodily walls, two heartbeat rhythms (the mother’s and her own), and acoustic input from outside the womb. The unfiltered sounds are presumably analogous to those heard in a reverberating space, such as a cave or cathedral.

Only in early childhood does the ear begin to categorize different sounds. Following R. Murray Schafer’s concept of the “soundscape,” or the combination of acoustic signals heard in an immersive environment, normally functioning ears automatically distinguish between background and foreground signals, both natural and human-made. This behavior, which combines innate capacity and cultural conditioning, is not present in the echoing womb. The lively reverberations, so closely associated with sacred spaces, recall that original echo chamber. Indeed, conceptions of God (or gods) as compassionate, protecting, loving, comforting, and so forth may even be rooted in this simulated return to the womb.


Ignoring Noise

Jonathan L. Friedmann, Ph.D.

As a rule, musical sounds are more clearly distinguished from non-musical sounds (the sounds of “reality”) than visual arts are distinguished from the shapes and colors of the visible world. What makes a photograph, abstract painting, or found object distinct from non-art is more difficult to pinpoint than what makes music sound like music. Satirist Ambrose Bierce addressed this in The Devil’s Dictionary, which defines painting as “The art of protecting flat surfaces from the weather and exposing them to the critic.” The viewing venue, in other words, plays a central role in the creation and perception of visual arts. (Marcel Duchamp’s Fountain, a porcelain urinal signed “R. Mutt,” is an extreme example.) Contrastingly, music is invisible, and thus cannot be confused with visible forms; it has no direct analog in the physical world.

Music is a culturally defined sonic phenomenon that, while impossible to define universally, is immediately recognized when heard in its cultural setting. Historically in the West, this has included a division between “pure” tones and “disordered” or “unwanted” sounds, generally called “noise.” Physics seems to support this bifurcation. While the various sound waves produced by music can be isolated into individual frequencies, with some being more dominant than others, noise contains jumbled frequencies of sound without a dominant frequency. However, ambiguity lurks beneath this observation. Despite Western music’s self-perception of “noiselessness,” such sounds do exist within the organized matrix of frequencies.
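
To make the physics concrete, here is a minimal sketch (in Python with NumPy; the 440 Hz sine wave and one-second duration are arbitrary choices standing in for a “pure” tone) that compares the spectrum of a tone with that of white noise. The tone concentrates its energy at a single dominant frequency, while the noise spreads its energy across the whole spectrum.

```python
# A minimal sketch: compare the spectrum of a "pure" tone with white noise.
# Assumes NumPy; the 440 Hz tone and one-second duration are arbitrary choices.
import numpy as np

sample_rate = 44100                                        # samples per second
t = np.arange(sample_rate) / sample_rate                   # one second of time points

tone = np.sin(2 * np.pi * 440.0 * t)                       # single dominant frequency
noise = np.random.default_rng(0).standard_normal(t.size)   # jumbled frequencies

def spectrum_summary(signal, rate):
    """Return the strongest frequency and how much it stands out from the average."""
    magnitudes = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / rate)
    return freqs[np.argmax(magnitudes)], magnitudes.max() / magnitudes.mean()

for name, sig in [("pure tone", tone), ("white noise", noise)]:
    peak_freq, prominence = spectrum_summary(sig, sample_rate)
    print(f"{name}: strongest frequency ~{peak_freq:.0f} Hz, "
          f"peak-to-average magnitude ~{prominence:.1f}")
```

Run as written, the tone’s peak sits at 440 Hz and towers over the average, whereas the noise’s “strongest” bin is essentially arbitrary and only a few times the average—mirroring the dominant-frequency distinction drawn above.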

Performers, scholars, and aficionados have long understood Western music (especially concert music) as purified of noise. This assumption surfaces in descriptions of non-European musics. As Dena J. Epstein chronicles in her article “The Folk Banjo: A Documentary History,” European travelers and missionaries regularly described the timbres of African vocals and instruments as “crude,” “wild,” “peculiar,” “strange,” “weird,” or “noise.” Contemporary ethnomusicologists credit “ethnic” musics for retaining noisy elements and eschewing—or never developing—the Western affinity for “pure” tones. The African mbira, or thumb piano, is a favorite example. Bottle caps and snail shells are attached to the soundboard and resonator, creating a buzz that muddies the otherwise focused timbre of the plucked idiophone. Efforts to reintroduce “noisiness” into Western music, notably with fuzz and overdrive guitar distortion, are sometimes heard as an aspirational return to naturalistic sound, albeit through electronic means.

All of this overlooks the presence of noise in even the most cleaned-up Western musical forms. The scraping of the bow against a violin string; the clacking of the keys on a clarinet; the sliding on the fingerboard of an acoustic guitar. According to filmmaker and composer Michel Chion, author of Sound: An Acoulogical Treatise, the Western listener tends to “scotomize,” or mentally delete, these sounds. Moreover, studio recordings tend to minimize or mute out such idiosyncrasies. “On the other hand,” writes Chion, “recordings of so-called traditional musics are often made by and for people who find something charming about such noises, and such documentations strive to preserve them and even to emphasize them in the recording process.”

Chion’s compositional medium, musique concrète, places all sorts of sounds into a musically organized framework. Compositions consist of multifarious field recordings, which are modified by altering pitch and intensity, extending or cutting off, adding echo effects, playing backwards, and so on. [Listen to Chion’s Requiem]. The finished piece is an artistic unity that challenges standard ideas about music. It can also train us to hear assembled noises as musical, and to listen for noise elements in conventional music.
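
For readers curious how such treatments look in code, here is a rough sketch (in Python with NumPy; a synthesized decaying tone stands in for a field recording so the example is self-contained, and none of this is meant to represent Chion’s actual studio practice) of three of the operations mentioned above: playing a sound backwards, shifting its pitch by crude resampling, and adding an echo.

```python
# A rough sketch of three musique-concrete-style treatments named above.
# Assumes NumPy; the decaying 220 Hz tone is a stand-in for a field recording.
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate                     # one second of material
source = np.sin(2 * np.pi * 220.0 * t) * np.exp(-3.0 * t)    # synthetic "recording"

def play_backwards(signal):
    """Reverse the sound in time."""
    return signal[::-1]

def shift_pitch(signal, factor):
    """Raise (factor > 1) or lower (factor < 1) the pitch by crude resampling;
    this also shortens or stretches the sound."""
    old_positions = np.arange(signal.size)
    new_positions = np.arange(0, signal.size, factor)
    return np.interp(new_positions, old_positions, signal)

def add_echo(signal, delay_seconds, decay, rate):
    """Mix a delayed, quieter copy of the sound back into itself."""
    delay = int(delay_seconds * rate)
    out = np.concatenate([signal, np.zeros(delay)])
    out[delay:] += decay * signal
    return out

treated = add_echo(shift_pitch(play_backwards(source), 1.5), 0.25, 0.4, sample_rate)
print(f"{source.size} source samples became {treated.size} treated samples")
```

Chained together, even these crude transformations begin to turn a recognizable sound into compositional raw material.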


Accuracy and Soul

Jonathan L. Friedmann, Ph.D.

“I may say that in the studio accuracy is more readily manageable than ‘soul.’” This statement appears in master pianist Alfred Brendel’s 1983 essay, “A Case for Live Recordings.” Brendel, who played his last concert in 2008 at age 78, is no stranger to the recording studio and appreciates its technological advantages. However, he opines that studio perfection is merely mechanical, not musical. More is lost than gained when the tension and risk of the concert hall are replaced with the purification of numerous takes.

Brendel notes several differences between live concerts and studio recordings. The live performer has one chance to convince the audience; the studio allows multiple playthroughs. The concert is only experienced once; the recording is repeatable. The concert performer imagines, plays, projects, and listens all at once; the studio player can hear it again and react accordingly. The concert atmosphere is raw and often nerve-racking; the studio allows for loosening up. The concert involves audience-performer interaction; the recording is made in virtual solitude. The live performance includes unscripted coughs and chirps; the studio offers manicured silence. The concert has a physical presence; the recording is a disembodied sound. The concert does not value absolute perfection; the studio is “ruled by the aesthetics of compulsive cleanliness.”

Although both sides of the dichotomy have pluses and minuses, Brendel contends that the controlled studio environment adversely impacts listening habits and performance approaches. Pristine recordings condition listeners to expect technical precision, even in the unfiltered concert setting. Performers try to replicate what fans have heard over and over on the recordings. As Brendel puts it: “[A] concert has a different message and a different way of delivering it. Now that we listeners to records and studio troglodytes have learned so much from studio recordings, it seems time to turn back and learn from concerts once again.”

He recommends live recordings as a middle ground between the unfettered electricity of the concert hall and the artificial sterility of the studio. Specifically, he prefers live recordings that come about by chance and without the artist’s knowledge (but sold later with the artist’s permission). This oft-neglected “stepchild” stands between the one-shot concert, which takes place on a certain day in front of a particular audience, and the recording, which can be heard anywhere at any time, paused, and played again. The live recording is portable and fossilized, yet it captures the spontaneity of the performance and the presence of an audience. The quality may suffer compared to a studio version, but the aura of being there is worth the imperfections.


Tastemaking

Jonathan L. Friedmann, Ph.D.

In The Barring-gaffner of Bagnialto, or This Year’s Masterpiece—one of several story synopses in Kurt Vonnegut’s novel, Breakfast of Champions—a government official spins a wheel to assign cash value to works of art submitted by the citizenry. The wheel lands on a painting of a house cat by Gooz, a humble cobbler who had never painted before. The simplistic portrait is appraised at eighteen thousand lambos, or one billion earth dollars. Crowds flock to see it at the National Gallery. Meanwhile, a bonfire consumes all the statues, paintings, and books the wheel has deemed worthless.

This satirical vignette highlights the disproportionate and arbitrary role of industry officials (governmental and corporate) in determining aesthetic values and tastes. The top-down model lampooned in the parable is not distant from commercial radio stations that weed out music before it ever reaches our ears. Cultural critics contend that decisions to promote or bury certain songs too often rely on extra-musical factors: image, celebrity, markets, studio backing, etc. This results in a homogenized soundscape, where listeners have limited volition over the music they hear. In Vonnegut’s hyper-cynical scenario, a completely random process shapes the masses’ artistic sensibilities. They flock to see an amateur painting of someone’s pet, and think nothing of other works—no doubt many of high quality—going up in flames.

To an extent, Vonnegut’s bleak parable was more applicable in 1973, when Breakfast of Champions hit the shelves, than it is today. The online availability of music, access to independent radio stations, and platforms for compiling digital playlists provide unprecedented opportunities to short-circuit the music industry’s control. Democratization has dented the industry’s historic role in pre-selecting sounds. Individuals more directly determine what they hear and what becomes popular. Adrian C. North and David J. Hargreaves are optimistic in their essay, “Music and Marketing”: “the digitization of music means that psychological factors will become more important than economic factors in explaining the music that people listen to on a day-to-day level. In decades to come we…suspect that the importance of economic explanations [for listening preferences] will diminish” (from Oxford’s Handbook of Music and Emotion, 2010).

We are not there yet; the old tastemakers still operate. As the digital age has broadened listening options, corporate interests have narrowed their palettes. In a high-stakes industry faced with escalating costs, intense competition, and a perpetually volatile youth demographic, safe bets overwhelm the airwaves. The complaint that “everything sounds the same on the radio” seems truer now than ever before. Listeners who do not explore digital or other options, either by choice or by circumstance, are left wading in an undifferentiated pool of cookie-cutter consumerism. They are stuck gazing at the cat.


Childlike Ears

Jonathan L. Friedmann, Ph.D.

Childlike wonder is for many an idealized virtue. Aristotle’s inquiries often begin with innocent amazement. Poet and scholar Kathleen Raine advised, “rather than understanding nature better by learning more, we have to unlearn, to un-know, if we hope to recapture a glimpse of that paradisal vision.” J. Krishnamurti, the self-styled twentieth-century sage, was moved to tears at the sight of withering branches. These approaches simulate a pre-jaded, pre-cluttered stage of life, when openness and sensitivity are natural conditions. The shiny new brain is capable of neither boredom nor stress. It is receptive to all shades of experience, unconcerned with the illusion of self, and attentive to the world as it is.

Intellectual maturation and social conditioning quickly do away with this pristine state. The schoolchild is taught to label and conform. A grown man weeping at a tree is abnormal. But, say the romantics, by retrieving (or reconstructing) childlike innocence, we can salvage a life-enhancing sense of awe.

The distance between the child’s perception and our own can be demonstrated musically. Unlike adults, young children do not typically describe or define music. They derive benefits from the music they make and listen to—joy, solace, safety—but to them, music just is. Infants instinctively move to the beat and respond wide-eyed to lullabies and infant-directed song-speech. However, as children mature, their ears become more discerning, and the external influences of family, peers, and consumer culture narrow tastes and heighten judgments. By middle childhood (ages 6 to 12), spontaneous engagement is typically replaced with self-consciousness. Words begin interfering with experience.

Vladimir Jankélévitch romanticizes infant ears in his 1961 classic, La Musique et l’Ineffable (Music and the Ineffable). An exceedingly perceptive and prolific contributor to the philosophy of music, Jankélévitch nevertheless admits the uneasy application of words to the musical experience: “Directly, in itself, music signifies nothing, unless by convention or association. Music means nothing and yet means everything.” He espouses “a great nostalgia for innocence,” promotes “a return to the spirit of childhood,” and reminds us that “music was not invented to be talked about.” This is not a contradictory position. Musical subtleties were of great interest to Jankélévitch; he was captivated by the slightest gradations of sound. Yet, his responses were more testimonial than analytical or explanatory. Study led him to a profound gratitude best expressed in silence. He encouraged readers to enter the “mystery” for themselves.

Like Aristotle, Raine, and Krishnamurti, Jankélévitch was a deep thinker aware of both the merits and demerits of the thinking brain, which affords exploration and reflection, but obstructs the purity of experience. His desire was to reenact the clean exposure we unconsciously sweep aside with accumulating years. From such a state, fresh and novel insights are possible.


Listener as Context

Jonathan L. Friedmann, Ph.D.

Reading and writing were not generally accessible until Gutenberg unveiled the printing press around 1440. Fewer than six centuries have passed since then—a blip in the 200,000-year existence of anatomically modern Homo sapiens. When written languages emerged in antiquity, they were the province of elites. In Iron Age Israel (c. 1200-500 BCE), for example, roughly one percent of the population was literate, and most of them were merely “functionally literate”: they knew just enough to manage daily living and employment tasks. The complex poetry and prose in the Hebrew Bible were unintelligible to all but the most privileged classes. Only in the last twenty generations has “literacy for all” become a human possibility.

The rise of literate societies introduced new ways of sharing and digesting information. With texts in hand, people could spend time interpreting, pondering, analyzing, comparing, re-reading, and questioning. Philosophers and storytellers could externalize, revise, and catalogue their thoughts. Authors and readers could communicate without interacting face-to-face. Ideas and information could be technical and logically argued.

For all of its benefits, literacy could not capture or replicate the intimacy of orality. Whereas oral cultures foster immediacy and social connections, written communication tends to be impersonal and removed. Oral traditions are experiential and spontaneous, while written forms are passive and fixed. Spoken words are colored by mannerisms and inflections; written words are static and comparatively emotionless. There are exceptions: love letters and poems can approach the vividness of an interpersonal exchange. But, as a rule, writing lacks presence.

Fortunately, no society is (or really can be) exclusively literate. We cannot evolve beyond the need or propensity for oral expression, which is encoded in our genes. Speaking and listening are innate; writing and reading are add-on abilities. Thus, as print-saturated as our society is, it remains rooted in an oral foundation.

Among other things, this has ensured the persistence of the original meaning-making context: the individual. The listener’s role is crucial in an oral culture. Without ears to hear, information cannot be received or spread. As noted, this mode of communication is far more immersive and immediate than the written word. Interpretation is likewise instantaneous: meaning is extracted from the largely unconscious workings of memory, conditioning, feelings, education, experience, and the like. There is no need to pore over a detached text. Meaning manifests inside the person.

This is amply demonstrated in musical listening. As an auditory medium, music cannot be understood—or even really exist—without listening. Hints of music can be written in notation or other visual symbols, but these are, ultimately, abstractions. Words are written in letters, objects are photographed, images are drawn, but music evades visualization. It requires the type of information exchange characteristic of oral societies.
