Sound as Object

Jonathan L. Friedmann, Ph.D.

After World War II, audio recordings gradually replaced sheet music as the dominant means of distributing music to consumers. As a result, the musical centerpiece of the family home moved from the piano to the hi-fi system, and domestic music culture shifted from active music-making to audition and record collecting. The LP (33⅓ rpm, long-playing, microgroove vinyl disc), introduced by Columbia Records in 1948, revolutionized the music industry. Along with changing habits of consumption, records transformed basic perceptions about music. Fleeting sounds became fixed objects.

Recordings had been around since Thomas Edison’s mechanical phonograph cylinder, patented in 1878. Within two decades, commercial recording and distribution had grown into an international industry. Popular titles at the beginning of the twentieth century sold millions of units. Gramophone discs, which were easier to manufacture, ship, and store, overtook cylinders around 1910, and subsequent advances in technology made audio recordings increasingly accessible. Still, sheet music—and the piano playing it depended on—remained king. The wholesale value of printed sheet music more than tripled between 1890 and 1909; in 1909 alone, some 25,000 songs were copyrighted in the U.S., and sheet music sales totaled 30 million copies in 1910. The popularity of printed music continued through the 1940s. An article in Variety on October 4, 1944, boasted “Sheet Music Biz at 15-Year Crest.”

Sales declined precipitously as the 1940s turned into the 1950s. The days when hit songs were fueled first by sheet music and only secondarily by record sales gave way to our recording-dominated era. A Variety article from November 21, 1953, captured the turning point: “Publishing Industry Alarmed by Pop Sheet Music Decline.”

The current ubiquity of recordings is the culmination of a centuries-long effort to mechanically reproduce sound—an evolution that began with musical notation and continued with programmable devices (hydro-powered organs, musical clocks, music boxes, player pianos, and the like). These earlier inventions, however, still required either manual engagement or an autonomous device playing in real time. With recordings, sounds disembodied from their performance could be played back at any time. Music itself became the object.

Michel Chion details seven ways recording technology facilitated the objectification of music: (1) capturing ephemeral sound vibrations and converting them into a permanent medium; (2) facilitating telephony, or the retransmission of sounds at a distance from their original source; (3) enabling new ways of systematic acousmatization, or the ability to hear without seeing; (4) allowing sounds to be amplified and de-amplified through electronic manipulation, as opposed to the crescendo or decrescendo of live instruments; (5) affording phonofixation, or the fixing of sounds and reuse of fixed sounds in the recording studio; (6) paving the path toward phonogeneration, or the creation of sound “out of nothing” by way of synthesizers and computers; (7) giving engineers the ability to reshape sounds through editing, processing, and manipulation.

This last effect, in particular, contributes to Chion’s view of sounds converted into objects: “recording has been—above all from the moment that it enabled editing—the first means ever in history to treat sounds, fleeting things, as objects: that is to say, both in order to grasp them as objects of observation and in order to modify them—to act on their fixed traces.” Likewise, the listener’s control over recordings—through pausing, skipping forward, changing volume, using multiple devices, etc.—furthers the impression of music’s “thing-ness.”



Goal-Directed Movement

Jonathan L. Friedmann, Ph.D.

Music listening is an unfolding experience. Without prompting, the listener naturally follows the direction of a piece, traveling through its curves and contours in a linear progression toward completion. In both the Republic and Laws, Plato comments on the ability of this temporal movement to “charm” the inner life of the listener. Roger Scruton contends that the mind moves sympathetically with motion perceived in music, such that it is felt as physical motion. These and other observations address the goal-directed movement of music. The whole piece is not revealed at once or in an order or manner that the listener chooses. Musical developments, whether simple or complex, lead auditors from beginning to end.

In contrast to print communication, which can be read and reread at any pace the reader wishes, music imposes its own duration and agenda. In pre-recording days, this necessitated formalized repetitions and recapitulations to get certain messages across, hence the use of sonata form (exposition, development, recapitulation), the doubling schema of keyboard partitas (AA/BB), the verse/chorus form of folksongs (and later commercial songs), and so on. Michel Chion notes: “This enormous redundancy—which means that if we buy a recording of Bach’s English Suites that lasts an hour, we only get thirty minutes of ‘pure’ musical information—clearly has no equivalent in the visual arts of the period.” Audio recordings afford greater freedom in terms of playback and repeated listening, but each listening remains a temporal experience.

The situation is not sidestepped with printed notation. Although a score can be read and studied, similar to a book or article, the notes on a page are essentially illusory. The paper is not the music. Jean-Paul Sartre argued in L’Imaginaire, a treatise on imagination and the nature of human consciousness, that music is never located in the silent symbols of a musical score, however detailed. Using Beethoven’s Seventh Symphony as an example, Sartre explained that the inability of written notes to capture music is rooted in the nature of sound itself. Unlike something that is empirically real—defined by Sartre as having a past, present, and future—music evaporates as soon as it is heard. Each performance is basically a new creation, and, we might add, each exposure to a recording is a new experience, due to changes in the listener and her surroundings from one hearing to the next.

Time, not paper, is the fundamental surface upon which music is made. Music involves a linear succession of impulses converging toward an end. Whereas a painting or sculpture conveys completeness in space, music’s totality is gradually divulged, sweeping up the listener—and the listener’s inner life—in the process.


The Original Echo Chamber

Jonathan L. Friedmann, Ph.D.

“A temple is a landscape of the soul. When you walk into a cathedral, you move into a world of spiritual images. It is the mother womb of your spiritual life—mother church.” These words from mythologist Joseph Campbell touch on the primitive spatial and acoustic appeal of Medieval and Renaissance cathedrals. Campbell connects the sensation to that of pictograph-adorned Paleolithic caves, which were also likely used for mystical and spiritual ceremonies. The melodic conventions and vocal techniques adapted to these acoustically active stone-walled spaces—epitomized by the straight, drawn-out, and separated tones of Latin ecclesiastical chant—exploit the echo chamber effect, creating an all-encompassing sonic and physical experience. As I explain in an earlier blog post, these ethereal sounds became synonymous with the cosmic voice.

The impression of safety and repose these spaces provide is captured in Campbell’s phrase, “the mother womb.” This image can be taken a step further. The sonically induced, archaic feelings take us back to the literal womb: the original acoustic envelope where direct and indirect sounds are experienced as an undifferentiated gestalt. Psychoanalyst Didier Anzieu describes it as a “sonorous bath”: a lulling sense of weightlessness, rebirth, and being transported.

The ear awakens during the fourth month of fetal development. By week twenty-five, the cochlea—the ear’s frequency analyzer—reaches adult size. From that point forward, the fetus receives, processes, and responds to a growing array of amalgamated sounds, including pressure variations in the bodily walls, two cycles of heartbeats (the mother’s and her own), and acoustic input from outside the womb. The unfiltered sounds are presumably analogous to those heard in a reverberating space, such as a cave or cathedral.

Only in early childhood does the ear begin to categorize different sounds. Following R. Murray Schafer’s concept of the “soundscape,” or the combination of acoustic signals heard in an immersive environment, normally functioning ears automatically distinguish between background and foreground signals, both natural and human-made. This behavior, which combines innate capacity and cultural conditioning, is not present in the echoing womb. The lively reverberations, so closely associated with sacred spaces, recall that original echo chamber. Indeed, conceptions of God (or gods) as compassionate, protecting, loving, comforting, and so forth may even be rooted in this simulated return to the womb.


Ignoring Noise

Jonathan L. Friedmann, Ph.D.

As a rule, musical sounds are more clearly distinguished from non-musical sounds (the sounds of “reality”) than visual arts are distinguished from the shapes and colors of the visible world. What makes a photograph, abstract painting, or found object distinct from non-art is more difficult to pinpoint than what makes music sound like music. Satirist Ambrose Bierce addressed this in The Devil’s Dictionary, which defines painting as “The art of protecting flat surfaces from the weather and exposing them to the critic.” The viewing venue, in other words, plays a central role in the creation and perception of visual arts. (Marcel Duchamp’s Fountain, a porcelain urinal signed “R. Mutt,” is an extreme example.) Music, by contrast, is invisible and thus cannot be confused with visible forms; it has no direct analog in the physical world.

Music is a culturally defined sonic phenomenon that, while impossible to define universally, is immediately recognized when heard in its cultural setting. Historically in the West, this has included a division between “pure” tones and “disordered” or “unwanted” sounds, generally called “noise.” Physics seems to support this bifurcation: the sound waves produced by music can be resolved into individual frequencies, some more dominant than others, whereas noise is a jumble of frequencies with no dominant one. However, ambiguity lurks beneath this observation. Despite Western music’s self-perception of “noiselessness,” such sounds do exist within its organized matrix of frequencies.
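To make the contrast concrete, consider a short sketch in Python (my illustration, not anything from the post or the acoustics literature): it synthesizes a harmonic tone and a burst of random noise, then measures how sharply each spectrum peaks. The 440 Hz fundamental, the harmonic weights, and the “dominance” ratio are arbitrary choices for demonstration.

```python
# Illustrative sketch: a musical tone concentrates energy at a few dominant
# frequencies, while noise spreads energy across the whole spectrum.
import numpy as np

SR = 44100                        # sample rate in Hz (a common audio standard)
t = np.arange(SR) / SR            # one second of time samples

# A "pure" tone: 440 Hz fundamental plus two weaker harmonics.
tone = (np.sin(2 * np.pi * 440 * t)
        + 0.5 * np.sin(2 * np.pi * 880 * t)
        + 0.25 * np.sin(2 * np.pi * 1320 * t))

# "Noise": random samples with no periodic structure.
noise = np.random.default_rng(0).standard_normal(SR)

def dominance(signal):
    """Ratio of the strongest frequency bin to the mean spectral magnitude."""
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum.max() / spectrum.mean()

print(f"tone dominance:  {dominance(tone):.1f}")   # large: a few bins stand out
print(f"noise dominance: {dominance(noise):.1f}")  # small: energy is spread out
```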

Performers, scholars, and aficionados have long understood Western music (especially concert music) as purified of noise. This assumption surfaces in descriptions of non-European musics. As Dena J. Epstein chronicles in her article “The Folk Banjo: A Documentary History,” European travelers and missionaries regularly described the timbres of African vocals and instruments as “crude,” “wild,” “peculiar,” “strange,” “weird,” or “noise.” Contemporary ethnomusicologists credit “ethnic” musics for retaining noisy elements, and for eschewing—or never developing—the Western affinity for “pure” tones. The African mbira, or thumb piano, is a favorite example. Bottle caps and snail shells are attached to the soundboard and resonator, creating a buzz that muddies the otherwise focused timbre of the plucked idiophone. Efforts to reintroduce “noisiness” into Western music, notably with fuzz and overdrive guitar distortion, are sometimes heard as an aspirational return to naturalistic sound, albeit through electronic means.

All of this overlooks the presence of noise in even the most cleaned-up Western musical forms. The scraping of the bow against a violin string; the clacking of the keys on a clarinet; the sliding on the fingerboard of an acoustic guitar. According to filmmaker and composer Michel Chion, author of Sound: An Acoulogical Treatise, the Western listener tends to “scotomize,” or mentally delete, these sounds. Moreover, studio recordings tend to minimize or mute such idiosyncrasies. “On the other hand,” writes Chion, “recordings of so-called traditional musics are often made by and for people who find something charming about such noises, and such documentations strive to preserve them and even to emphasize them in the recording process.”

Chion’s compositional medium, musique concrète, places all sorts of sounds into a musically organized framework. Compositions consist of multifarious field recordings, which are modified by altering pitch and intensity, extending or cutting off, adding echo effects, playing backwards, and so on. (Listen to Chion’s Requiem.) The finished piece is an artistic unity that challenges standard ideas about music. It can also train us to hear assembled noises as musical, and to listen for noise elements in conventional music.
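As a rough illustration of those manipulations (a toy sketch of my own, not Chion’s actual studio practice), here is how reversal, a speed/pitch change, and an echo might be applied to a recorded signal. The synthetic “field recording,” the 1.5x rate, and the 250 ms delay are invented for demonstration.

```python
# Toy versions of three musique-concrete manipulations applied to a signal.
import numpy as np

SR = 44100
t = np.arange(SR) / SR
source = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)  # stand-in "field recording"

# 1. Play backwards: reverse the sample order.
reversed_sound = source[::-1]

# 2. Crude pitch/speed change: resample at 1.5x the original rate,
#    raising the pitch (and shortening the sound) by the same factor.
positions = np.arange(0, len(source) - 1, 1.5)
sped_up = np.interp(positions, np.arange(len(source)), source)

# 3. Simple echo: mix in a delayed, attenuated copy of the signal.
delay = int(0.25 * SR)                                 # 250 ms delay
echoed = source.copy()
echoed[delay:] += 0.5 * source[:-delay]
```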


The Evolution of Song

Jonathan L. Friedmann, Ph.D.

The earliest rudiments of musical expression were most likely vocal. This basic premise connects diverse speculations about music’s origins. Whether music—broadly defined as structured, controlled, and purposeful sound—began with grunts of aggression, wails of pain, mating howls, or infant-directed communication, the vocal instrument was the source from which it sprang. Despite the lack of records stretching back hundreds of thousands of years, speculative musicologists have sketched cursory evolutions of vocal music. According to Alfred Einstein, the eon-spanning process had three stages: pathogenic (emotion-born), logogenic (language-born), and melogenic (melody-born). This hypothesis, presented in his 1954 essay “Words and Music,” is unique for its qualitative editorializing. In Einstein’s view, the combination of voice and music becomes increasingly problematic as the stages unfold.

The first stage, pathogenic music, represents the “starkest expression of pure emotion.” Einstein viewed the spontaneous, wordless tones of so-called “primitives” as the most pristine type of vocal music. Beyond romanticizing the “noble savage,” he argued that “the meaningful word weakens rather than strengthens such pure expression, since convention tends to attenuate it.” The union of word and music pollutes the original purity.

The degrading effect is less pronounced in stage two: logogenic music. In word-born song, melodic shape, movement, phrasing, and cadences are directed by the ebb and flow of a text, rather than a consistent beat or meter. It is a form of musical grammar—sometimes called speech-melody or stylized speaking—wherein accents and inflections are stressed through unobtrusive, arrhythmic, word-serving melodic figures. Such is the mode of Greek epic poetry, Gregorian plainchant, and Jewish scriptural cantillation. Logogenic music has its own disadvantage—namely, the neutralizing of emotion. Because the music serves the text with formulaic motives (described by Einstein as a “minimum of music”), the same sounds are invariably used to transmit texts of varying thematic and emotional content. In this sense, it is the opposite of pathogenic vocalizing.

The third stage is song proper: a short poem or set of words fitted to a metrical tune. By and large, musical considerations, like rhythm and melody, outweigh textual concerns. Although songs often grow from or reflect upon emotional states, the rules of style and form tend to restrain raw feelings. The structure limits the syllables available, and measured phrases and poetic devices reduce word options. The result is filtered sentiment—a contrast to both unfettered pathogenic music and text-first logogenic music.

Without doubt, Einstein’s scheme has its weaknesses. Not only is the evolution of song non-linear (all three forms still exist today), but blending is also not uncommon. For instance, blues singing, which adheres to highly conventional forms, is known for its “pure emotion.” Within a strict melogenic framework, short phrases and repeated words convey rich layers of emotional content. Even so, Einstein’s three-stage outline raises awareness of the potential impediments of the various types of vocal music. Knowledge of these built-in barriers can help the performer or songwriter transcend them in their own musical quests.


Seeking Noise

Jonathan L. Friedmann, Ph.D.

“The twentieth century is, among other things, the Age of Noise.” Aldous Huxley included this statement in The Perennial Philosophy, a comparative study of world mysticisms, published in 1945. Huxley’s complaints centered on organized noise: “indiscriminate talk” and the radio, which he described as “nothing but a conduit through which pre-fabricated din can flow into our homes.” The “assault against silence” has continued unabated as the twentieth century has rolled into the twenty-first. The ubiquity of televisions, personal computers, and mobile phones has only exacerbated the problem. Such technologies present conscious and unconscious barriers to the spiritual ideal of inner calm and clear-minded contemplation.

Arguably more damaging than the intentional sound sources Huxley bemoaned are the byproduct noises of human activities. Especially intrusive are noises fitting naturalist Bernie Krause’s definition: “an acoustic event that clashes with expectation.” The tranquil lake is spoiled by buzzing jet skis and motorboats. The pristine forest is tarnished by chainsaws and overhead airplanes. According to composer and environmentalist R. Murray Schafer, who coined the term “soundscape” to describe the ever-present array of noises in our sonic environment, human beings make such noises, in part, to remind ourselves and others that we are not alone. The absence of overt human-generated sounds is for many a painful signal of solitude. Think of the person who keeps the radio or television on for companionship.

An extreme of this view equates excessive noise with human dominance and modern progress. According to Schafer, Ronald Reagan’s secretary of the interior James G. Watt declared that the more noise Americans make, the more powerful the country will appear. This perception has deep roots: cannon blasts and booming fireworks have long been associated with muscular patriotism. Schafer even remarked to Krause that if the ear-pounding decibels of the U.S. Navy Blue Angels were muted, attendance at their air shows would drop by ninety percent.

Nothing could be further from the quietude desired by mystics, who not only strive to muzzle external sounds, but also to cultivate silence of mind. This is hardly the default mode of modernity. As Huxley put it: “Physical noise, mental noise and noise of desire—we hold history’s record for all of them.” Instead of seeking silence, most people seek its opposite.


Gesture Toward the Infinite

Jonathan L. Friedmann, Ph.D.

The gradual decrease in volume toward silence, known as the fade-out, was once a ubiquitous part of popular music. One of the earliest fade-outs took place during a 1918 concert of Gustav Holst’s The Planets. The women’s choir sang in a room offstage for the concluding “Neptune” movement. As the piece neared its end, a door to the room was slowly closed. The contrivance was effective: the celestial chorus drifted into silence, conjuring the expansiveness of the cosmos and the remoteness of the gas giant—then thought to be the furthest planet from the Sun (an honor Neptune reclaimed in 2006 when Pluto was demoted to a “dwarf planet”).

A similarly “organic” fade-out is heard on an 1894 recording of the “Spirit of ’76,” during which a fife and drum band seem to get closer and then march away. The effect was achieved by carrying the phonograph toward and away from the sound’s source. With the advent of electrical recordings in the 1920s, engineers were able to decrease amplification, a process made easier with magnetic tape recordings beginning in the 1940s. The first pop hit to end with a fade was the R&B crossover song “Open the Door, Richard!” (1946), by saxophonist Jack McVea. The technique became commonplace between the 1950s and 80s. Each of Billboard’s top ten songs from 1985 ended with a fade-out.
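In signal terms, the engineer’s fade is a gain envelope that glides to zero. The sketch below is my own illustration, not a description of any particular studio’s practice; the synthetic track, the two-second duration, and the linear shape are arbitrary stand-ins.

```python
# Illustrative fade-out: multiply the end of a track by a gain envelope
# that falls smoothly from full volume to silence.
import numpy as np

SR = 44100
t = np.arange(4 * SR) / SR                       # four seconds of audio
song = np.sin(2 * np.pi * 440 * t)               # stand-in for a mixed track

n_fade = int(2.0 * SR)                           # fade over the final two seconds
envelope = np.ones_like(song)
envelope[-n_fade:] = np.linspace(1.0, 0.0, n_fade)

faded = song * envelope                          # volume glides toward silence
```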

The fade-out initially served a practical aim. In the 1940s and 50s, engineers often used the device to shorten songs that exceeded radio’s “three-minute rule,” or to fit them on one side of a vinyl single. In the 1960s, the fade-out became a creative avenue, especially in psychedelic and electronic music. The ending of the Beatles’ “Hey Jude” (1968) fades over four minutes of repeated choruses. Other artists, like Stevie Wonder, used fade-outs to cut loose with ad-lib lyrics and extended jam sessions.

David Huron, an expert in music cognition, appreciates the fade-out as something beyond a practical solution or creative outlet. Commenting on Holst’s “Neptune” in his book, Sweet Anticipation: Music and the Psychology of Expectation, Huron notes: “With the fade-out, music manages to delay closure indefinitely. The ‘end’ is predictable, even though the music doesn’t ‘stop.’ The ‘stop’ gesture is replaced by a gesture toward the ‘infinite.’”

The fade-out, with its impression of unresolved infiniteness, fell out of favor during the 1990s. (The only recent hit featuring the device is Robin Thicke’s retro homage “Blurred Lines,” 2013.) Popular music historian William Weir connects the decline to the development of the Need for Closure Scale (1993) and psychology’s wider embrace of the concept of closure—a goal better achieved when a song concludes with a “cold ending.” Weir concedes that this explanation may be a stretch, pointing instead to the rise of iPods and DJs, which have created a “skip culture” (songwriter/producer Itaal Shur’s term), in which we are accustomed to skipping from song to song before they end. Why bother with the last few seconds if nobody ever hears them? Yet, even then, we experience a kind of infinity: the never-ending medley.
