Music Interconnected

Jonathan L. Friedmann, Ph.D.

The study of music is usually approached with a narrow goal or point of view. An instrument is learned, a compositional technique is analyzed, a movement is surveyed, a vocal style is practiced, and so on. These different paths intersect from time to time, such that knowledge of a composer’s chronological and geographical setting informs the interpretation of a piece. Yet, by and large, “specialized studies of this type cut music off from its natural connection with the spiritual and material world, and leave out of consideration the fact that [music] is only one part of general culture.” This reminder, from Hugo Leichtentritt’s introduction to his book of Harvard University lectures, Music, History, and Ideas (1938), urges a recognition of music’s interaction with things and forces outside of it.

Not only does a piece of music reflect a cultural backdrop—which itself is informed by physical setting, political climate, social position, local language(s), etc.—but it also encompasses wide-ranging disciplines: physics, mathematics, acoustics, psychology, anatomy, physiology, literature, poetry, dancing, acting, philosophy, metaphysics—just to name a handful. This is the essence of Leichtentritt’s title Music, History, and Ideas: the three broad categories cannot be separated. When we view music through a microscope—as isolated techniques, pieces, or genres—we neglect the many threads that stitch sound into a complex cultural and scientific fabric.

Of course, interconnectivity is not limited to music. Naturalist John Muir expanded the notion in his reflective tome, My First Summer in the Sierra (1911), about his experiences in Yosemite in 1869. In that all-encompassing environment, surrounded by intricately vibrant meadows and mountain ranges, Muir realized: “When we try to pick out anything by itself, we find it hitched to everything else in the Universe.” (An earlier version, from his journal dated July 27, 1869, records: “When we try to pick out anything by itself we find that it is bound fast by a thousand invisible cords that cannot be broken, to everything in the universe.”)

Muir’s takeaway from that first summer applies equally to music: “the lessons of unity and inter-relation.” Every rock, tree, insect, bird, stream, lake, and flower is at the same time distinct yet inextricable. None of these elements can exist independently of the others, and each invites us “to come and learn something of its history and relationship.” As a generative art form, constantly modified by interactions between musicians and musical ideas, music has a history and genealogy extending far beyond any single note, phrase, pattern, or tune. As a product of human activity and an element of human culture, music is “hitched” to everything that constitutes life itself—physically, intellectually, and spiritually.

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.


Music Everlasting

Jonathan L. Friedmann, Ph.D.

“There is no time like the present,” “once in a lifetime,” and other such clichés highlight an obvious truth: each moment is unrepeatable. At any point in time, we have the ability to do one thing and one thing alone. Nothing that we do, say, think, or feel can, in the strictest sense, be compared to any other. Regrets about missed opportunities are purely theoretical. Judgments and self-inventories can only be based on actual occurrences, not “what ifs.” Jean-Paul Sartre makes this point in his treatise Existentialism: “There is no genius other than one which is expressed in works of art; the genius of Proust is the sum of Proust’s work; the genius of Racine is his series of tragedies. Outside of that, there is nothing. Why say that Racine could have written another tragedy, when he didn’t write it? A man is involved in life, leaves his impress on it, and outside of that there is nothing.”

The deterministic worldview draws a similar conclusion. All facts in the physical universe—including human history—are inescapably dependent upon and conditioned by their causes. The choices we make, big and small, fit in a chain of cause and effect that yields a single outcome. Meteorologist Edward Lorenz offered the classic example with his “butterfly effect,” wherein the distant flapping of a butterfly’s wings influences a tornado several weeks later. Chaos theory, which grew out of such observations, holds that the universe operates by unpredictable determinism: everything happens in an orderly pattern of cause and effect, but we cannot know with certainty how things will turn out until they actually happen.

Live music gives sonic expression to the unrelenting yet unpredictable uniqueness of each passing moment. In his erudite tome, A Composer’s World: Horizons and Limitations, Paul Hindemith muses on the individuality of each performance. Sound, he contends, is music’s least stable quality: “An individual piece of music, being many times reborn and going through ever renewed circles of resonant life, through repeated performances, dies as many deaths at the end of each of its phoenixlike resurrections: no stability here, but a stumbling progression from performance to performance.” Hindemith connects the frailty of sound to the fleetingness of life itself, suggesting that musical moments are just as unrepeatable as other moments. Like the passage of time, each performance is one of a kind, and each iteration evaporates as soon as it occurs.

The impression of permanence is stronger in recorded music. Listening to recordings is, of course, subject to the same forces as live performances: sounds come and go in accordance with time’s progression. The crucial difference is that the same performance can be heard again, creating a sort of conditional eternality. Rather than living, dying, and resurrecting with each performance, recorded music exists in a perpetual present tense.

This semblance of stability is wholly at variance with life’s ephemeral, deterministic trajectory. Recordings allow us to simulate everlasting moments; life pushes ahead but the music remains the same. This psychological gratification, rooted in a desire to obtain the unobtainable, accounts in part for our attraction to recorded music.


Catchiness in Music

Jonathan L. Friedmann, Ph.D.

The term “popular music” originated with songs emanating from Manhattan’s Tin Pan Alley and competing publishers in other major American cities. From the 1880s to the early twentieth century, these mass-produced songs captured the ears and hearts of American consumers. Almost as soon as the tunes penetrated the market, another term was coined: “catchy.” An ad for A. G. Henderson’s “No More Parting, Norah Darling” (1889) hypes the song’s “easy, sweet, and catchy melody, set to pretty and effective words. A very striking and well-arranged chorus. Sing this song ONCE, and the air will haunt you.” A review of “My Jenny’s Shelling Peas” (1892), by Chicago music publisher S. W. Straub, opines: “It has an interesting story, and has a beautiful, catchy melody with a superb chorus. It will become very popular, we predict.”

Tin Pan Alley tunesmiths sold thousands of songs to fast-paced publishers, who, in turn, fed sheet music to hungry household pianos. As John Shepherd writes in his definitive book, Tin Pan Alley, “The faster the songs…could be produced, the more money there was to be made.” One consequence of this assembly-line approach was that “catchiness” became the norm, rather than a quality reserved for especially well-crafted melodies. Originality fell victim to the rapid-fire ethos. For expedience, melodists turned to modifying and piecing together bits of pre-existing melodies. Lyricists returned again and again to well-worn themes and clichés. Of course, a legitimately clever hit occasionally rose above the homogeneous whole. But, for the most part, every song possessed some catchiness by virtue of sharing variations of the same rhythms, verse-chorus forms, melodic phrases, and sentiments. To this day, recycling of this kind is a defining aspect of popular music.

A handful of scientific papers have sought a formula for musical catchiness. These include a study of the UK’s top-ten sing-along songs and an analysis of musical “hooks” (memorable musical fragments). These studies, which investigate why some recordings are seemingly catchier than others, tend to leave out salient factors, such as radio play and promotion from the music industry. Lesser-known or overlooked songs often have the same features, but lack the popularizing platforms.

Rather than tracing catchiness to a unique trait or set of traits, it is perhaps better to think of catchiness as a synonym for “familiar,” or even “familiar before it is ever heard.” Catchy songs trigger musical information already stored in the brain. Other elements, such as a magnetic performance or generational sentiment, certainly play a role. But a truly catchy melody—one that resonates beyond a recording or performer—requires high levels of musical déjà vu. Otherwise, it won’t catch hold of the listener.


Literature as Music

Jonathan L. Friedmann, Ph.D.

Aspects of music can be spatially represented through notation and recording, which freeze moments in time. But, as an experiential medium, which relies on performance and audition, music reveals itself in the present tense. This temporal quality is not only thought to distinguish music from spatial arts, such as illustration, sculpture, jewelry, and ceramics, but also from written language, which cements ideas and oral expression into fixed letters. However, this characterization has its limits.

Author Anthony Burgess restricts the framing of words as concrete objects to informational writing. Scientific texts, legal documents, historical records, and other types of non-fiction primarily appeal to reason rather than imagination. They are written for study, reference, and comparison to other writings in the field. Their words are artifacts to be mulled over, digested, quoted, and critiqued. By contrast, Burgess sees literature as a “twin of music,” which, like music, occurs in real time, transcends physical space, and manifests in the imagination.

Burgess’s interest in the link between music and literature stems from his biography. Best known for his 1962 novel A Clockwork Orange, featuring a deranged gang leader obsessed with Beethoven’s Ninth Symphony, Burgess was also a composer of some 150 works, most of which have been lost. He wished the public would view him as a musician who writes novels, rather than a novelist who composes music on the side. Yet, in his memoir, This Man & Music, Burgess concedes: “I have practiced all my life the arts of literary and musical composition—the latter chiefly as an amateur, since economic need has forced me to spend most of my time producing fiction and literary journalism.”

Burgess’s fiction brims with musical content, from characters who are musicians or music lovers, to writing styles that consciously borrow from sonata form, symphonic form, and the like. Stressing literature’s performative essence, Burgess complains: “We have come to regard the text as the great visual reality because we confuse letters as art with letters as information.” While non-fiction works might be understood as monuments of human thought, literature is a lived experience akin to traveling through a piece of music.

This discussion has more to say about literature than it does about music. Like E. T. A. Hoffmann, another composer who made his living in words, Burgess idealized creative writing as an art approaching music. Central to his argument is the conception of time as the canvas upon which both art forms take shape, and imagination as the invisible realm where their meaning is made.


 

The Semiotics of Music

Jonathan L. Friedmann, Ph.D.

Comparisons between music and language hit a wall when the focus turns to meaning. Although both are innate modes of human expression which, in their vocalized forms, use the same mechanisms of respiration, phonation, resonance, and so on, they function differently. Whereas English speakers would agree about the meaning of a word like “chair,” there is no such consensus about the meaning of a chord or scale. Outside of song, which is basically a form of stylized speech, meaning in music tends to be subjective. As a result, some scholars have taken to limiting—or even dismissing—the possibility of shared musical meaning. However, when we look beyond direct comparisons with language, we see distinct cultural meaning assigned to all sorts of things, ranging from music and food to gestures and facial expressions. “Chair” might not have a musical equivalent, but meaning is discerned in other ways.

An appeal to semiotics, the science of signs, seems most appropriate when evaluating musical meaning. Especially helpful is C. S. Peirce’s formulation of three types of signs: symbols, indexes, and icons.

Of the three, symbols are the least instructive. Language is a system of symbols, wherein each word or phrase has a definite and consistent meaning, albeit often contextually defined. Words are a shortcut for something else; the word “angry” represents an emotional state, but the word itself is not that emotional state. Language is essential for describing and analyzing music, but as ethnomusicologist Thomas Turino explains, such symbols “fall short in the realm of feeling and experience.” Symbols are secondary or after-the-fact, and may distract from the intimacy and immediacy of the musical experience.

Musical signs are more fruitfully viewed as indexes: signs that point to objects or ideas they represent. This applies mainly to music associated with a particular concept or occasion. For example, a national anthem performed at a sporting event becomes an index of patriotism, while a Christmas song heard while shopping becomes an index of the season. Through a combination of personal and shared experiences, these pieces—with or without their lyrics—serve as repositories of cultural meaning. On a smaller scale, music can serve as an index of romantic relationships or peer group affiliations.

Musical icons resemble or imitate the things they represent. These can include naturalistic sounds, such as thunder played on kettledrums, or mental states conveyed through musical conventions, such as ascending lines signaling ascent or exuberance. Icons tend to be culturally specific, such that listeners in a music-culture develop shared understandings, even as individuals add idiosyncratic layers to those understandings.

Precision, directness, and consistency are the lofty goals of language, but these are not the only ways meaning is conveyed. Musical meaning relies on non-linguistic sign systems, such as indexes and icons. While these may not be as steady or specific as language, they communicate shared meaning just the same.


Wrong Notes

Jonathan L. Friedmann, Ph.D.

Fidelity to the score is a defining characteristic of classical music. Pitches, note values, tempi, dynamics, and articulations are clearly written for meticulous enactment. In translating these symbols into sound, the musician ensures the piece’s survival even centuries after the composer’s death. There is, of course, room for (slight) variation. Because elements such as dynamics and tempo markings are at least moderately open to interpretation, no two performances will be exactly the same. Still, the faithful and accurate rendering of notes is key to the integrity—and the very existence—of a classical piece.

The foregoing outlines the nominalist theory of classical music, which defines a work in terms of concrete particulars relating to it, such as scores and performances. Because a musical piece is an audible and experiential phenomenon, which is symbolically represented in the score, it can only truly exist in performance.

This position raises two issues. The first concerns “authentic” performance. Is it enough to simply play the notes as indicated, or do those notes have to be played on the instrument(s) the composer intended? Does a cello suite played on double bass or a reduction of a symphony played on the piano qualify as an instance of the same work? How essential is the use of appropriate period instruments? These questions look for elements beyond the written notes.

The second issue centers on the notes themselves. Most performances of concert works include several wrong notes. However, we generally do not discount these performances for that reason (and we may not register the wrong notes as they are played). If all of the notes are wrong, then the work has not been performed, even if the intention is sincere. But what percentage of the notes can be wrong for the performance to qualify as the work? We might argue that the work is independent of any performance of it; but that does not satisfy the nominalist’s position.

Most discussions of musical ontology—addressing the big question, “Do musical works exist?”—are confined to classical music. Score-dependent arguments do not lend themselves to jazz, for instance, where the improvising performer composes on the spot, or certain kinds of folk music, where embellishments are commonplace and written notation is absent.

Questions about music’s ontological reality do not have easy answers, and the various philosophical camps have their weaknesses: nominalists, Platonists (who view musical works as abstract objects), idealists (who view musical works as mental entities), and so on. Whatever fruits such discourse might bear, it points to the uniquely “other” nature of music, which is both recognizable and ineffable, repeatable and singular.


The Limits of Transmission

Jonathan L. Friedmann, Ph.D.

Since at least the Romantic period, musicians and theorists have argued that musically expressed emotions cannot be fully or adequately conveyed in words or rational concepts. Instead, music is understood as a mode of communication that bypasses ordinary language and speaks directly to the ineffable realm of the “inner life.” This emotional conveyance is typically regarded as both cultural and highly personal: conventions within a music-culture determine the generalized impressions of musical qualities, such as mode, pitch range, and tempo, but specific interactions between those qualities and the listener are not predetermined. A wide and highly variable range of factors, as unique as the listener herself, fundamentally shapes the experience.

Deryck Cooke’s influential treatise, The Language of Music (1959), proposes a more systematic approach. Through an examination of hundreds of examples of Common Practice tonality (Western tonal music since 1400), Cooke developed a lexicon of musical phrases, patterns, and rhythms linked to specific emotional meanings. In his analysis, recurrent devices are used to effect more or less identical emotional arousals, thus yielding a predictable, idiomatic language.

This theory, while helpful in identifying and organizing norms of Western music, has been criticized for omitting the role of syntax. There might be a standard musical vocabulary, but without rules for arranging constituent elements into “sentences,” there can be no consistent or independent meanings. For even the most over-used idiom, the performance and listening contexts ultimately determine the actual response.

This observation casts doubt on another of Cooke’s central claims. If, as Cooke argued, musical elements comprise a precise emotional vocabulary, then a composer can use those elements to excite his or her own emotions in the listener. This is achievable in emotive writing, such as a heartfelt poem or autobiographical account, which uses the syntactic and semantic structures of language to reference ideas, images, and experiences. However, because music lacks these linguistic features, direct emotional transmission is hardly a sure thing.

Philosopher Malcolm Budd adds an aesthetic argument to this criticism. By locating the value of a musical experience in the reception of the composer’s emotions, the piece loses its own aesthetic interest; it becomes a tool for transmitting information, rather than an opening for individually shaped emotional-aesthetic involvement. According to Budd, Cooke’s thesis, which he dubs “expression-transmission theory,” misrepresents the motivation for listening: “It implies that there is an experience which a musical work produces in the listener but which in principle he could undergo even if he were unfamiliar with the work, just as the composer is supposed to have undergone the experience he wishes to communicate before he constructs the musical vehicle which is intended to transmit it to others; and the value of the music, if it is an effective instrument, is determined by the value of this experience. But there is no such experience.”

The enduring appeal of musical language is its multivalence. Idiomatic figures may be commonplace in tonal music, but their appearance and reappearance in different pieces does not carry definite or monolithic information, whether from the composer or the vocabulary employed.


Sound as Object

Jonathan L. Friedmann, Ph.D.

After World War II, audio recordings gradually replaced sheet music as the dominant means of distributing music to consumers. As a result, the musical centerpiece of the family home moved from the piano to the hi-fi system and, consequently, from active music-making to audition and record collecting. The LP (33⅓ rpm, long-playing, microgroove vinyl disc), introduced by Columbia Records in 1948, revolutionized the music industry. Along with changing habits of consumption, records transformed basic perceptions about music. Fleeting sounds became fixed objects.

Recordings had been around since Thomas Edison’s mechanical phonograph cylinder, patented in 1878. Within two decades, commercial recordings and distribution grew into an international industry. Popular titles at the beginning of the twentieth century sold millions of units. Gramophone records, which were easier to manufacture, ship, and store, hit the shelves around 1910, and subsequent advances in technology made audio recordings increasingly accessible. Still, sheet music—and the piano playing it depended on—remained king. The wholesale value of printed sheet music more than tripled between 1890 and 1909, some 25,000 songs were copyrighted in the U.S., and sheet music sales totaled 30 million copies in 1910. The popularity of printed music continued through the 1940s. An article in Variety on October 4, 1944, boasted “Sheet Music Biz at 15-Year Crest.”

Sales declined precipitously as the 1940s moved into the 1950s. The days when hit songs were fueled by a combination of sheet music and, secondarily, record sales gave way to our recording-dominated era. A Variety article from November 21, 1953 captured the turning point: “Publishing Industry Alarmed by Pop Sheet Music Decline.”

The current ubiquity of recordings is the culmination of a centuries-long effort to mechanically reproduce sound—an evolution that began with musical notation and continued with programmable devices (hydro-powered organs, musical clocks, music boxes, player pianos, and the like). However, earlier inventions still required either manual engagement or autonomous devices that produced sound in real time. With recordings, sounds disembodied from their performance could be played back at any time. Music itself became the object.

Michel Chion details seven ways recording technology facilitated the objectification of music: (1) capturing ephemeral sound vibrations and converting them into a permanent medium; (2) facilitating telephony, or the retransmission of sounds at a distance from their original source; (3) enabling new ways of systematic acousmatization, or the ability to hear without seeing; (4) allowing sounds to be amplified and de-amplified through electronic manipulation, as opposed to the crescendo or decrescendo of live instruments; (5) affording phonofixation, or the fixing of sounds and reuse of fixed sounds in the recording studio; (6) paving the path toward phonogeneration, or the creation of sound “out of nothing” by way of synthesizers and computers; (7) giving engineers the ability to reshape sounds through editing, processing, and manipulation.

This last effect, in particular, contributes to Chion’s view of sounds converted into objects: “recording has been—above all from the moment that it enabled editing—the first means ever in history to treat sounds, fleeting things, as objects: that is to say, both in order to grasp them as objects of observation and in order to modify them—to act on their fixed traces.” Likewise, the listener’s control over recordings—through pausing, skipping forward, changing volume, using multiple devices, etc.—furthers the impression of music’s “thing-ness.”


Goal-Directed Movement

Jonathan L. Friedmann, Ph.D.

Music listening is an unfolding experience. Without prompting, the listener naturally follows the direction of a piece, traveling through its curves and contours in a linear progression toward completion. In both the Republic and Laws, Plato comments on the ability of this temporal movement to “charm” the inner life of the listener. Roger Scruton contends that the mind moves sympathetically with motion perceived in music, such that it is felt as physical motion. These and other observations address the goal-directed movement of music. The whole piece is not revealed at once or in an order or manner that the listener chooses. Musical developments, whether simple or complex, lead auditors from beginning to end.

In contrast to print communication, which can be read and reread at any pace the reader wishes, music imposes its own duration and agenda. In pre-recording days, this necessitated formalized repetitions and recapitulations to get certain messages across, hence the use of sonata form (exposition, development, recapitulation), the doubling schema of keyboard partitas (AA/BB), the verse/chorus form of folksongs (and later commercial songs), and so on. Michel Chion notes: “This enormous redundancy—which means that if we buy a recording of Bach’s English Suites that lasts an hour, we only get thirty minutes of ‘pure’ musical information—clearly has no equivalent in the visual arts of the period.” Audio recordings afford greater freedom in terms of playback and repeated listening, but each listening remains a temporal experience.

The situation is not sidestepped with printed notation. Although a score can be read and studied, similar to a book or article, the notes on a page are essentially illusory. The paper is not the music. Jean-Paul Sartre argued in L’Imaginaire, a treatise on imagination and the nature of human consciousness, that music is never located in the silent symbols of a musical score, however detailed. Using Beethoven’s Seventh Symphony as an example, Sartre explained that the inability of written notes to capture music is rooted in the nature of sound itself. Unlike something that is empirically real—defined by Sartre as having a past, present, and future—music evaporates as soon as it is heard. Each performance is basically a new creation, and, we might add, each exposure to a recording is a new experience, due to changes in the listener and her surroundings from one hearing to the next.

Time, not paper, is the fundamental surface upon which music is made. Music involves a linear succession of impulses converging toward an end. Whereas a painting or sculpture conveys completeness in space, music’s totality is gradually divulged, sweeping up the listener—and the listener’s inner life—in the process.


The Original Echo Chamber

Jonathan L. Friedmann, Ph.D.

“A temple is a landscape of the soul. When you walk into a cathedral, you move into a world of spiritual images. It is the mother womb of your spiritual life—mother church.” These words from mythologist Joseph Campbell touch on the primitive spatial and acoustic appeal of Medieval and Renaissance cathedrals. Campbell connects the sensation to that of pictograph-adorned Paleolithic caves, which were also likely used for mystical and spiritual ceremonies. The melodic conventions and vocal techniques adapted to these acoustically active stone-walled spaces—epitomized by the straight, drawn-out, and separated tones of Latin ecclesiastical chant—exploit the echo chamber effect, creating an all-encompassing sonic and physical experience. As I explain in an earlier blog post, these ethereal sounds became synonymous with the cosmic voice.

The impression of safety and repose these spaces provide is captured in Campbell’s phrase, “the mother womb.” This image can be taken a step further. The sonically induced, archaic feelings take us back to the literal womb: the original acoustic envelope where direct and indirect sounds are experienced as an undifferentiated gestalt. Psychoanalyst Didier Anzieu describes it as a “sonorous bath”: a lulling sense of weightlessness, rebirth, and being transported.

The ear awakens during the fourth month of fetal development. By week twenty-five, the cochlea—the ear’s frequency analyzer—reaches adult size. From that point forward, the fetus receives, processes, and responds to a growing array of amalgamated sounds, including pressure variations in the bodily walls, two cycles of heartbeats (the mother’s and her own), and acoustic input from outside the womb. The unfiltered sounds are presumably analogous to those heard in a reverberating space, such as a cave or cathedral.

Only in early childhood does the ear begin to categorize different sounds. Following R. Murray Schafer’s concept of the “soundscape,” or the combination of acoustic signals heard in an immersive environment, normally functioning ears automatically distinguish between background and foreground signals, both natural and human-made. This behavior, which combines innate capacity and cultural conditioning, is not present in the echoing womb. The lively reverberations, so closely associated with sacred spaces, recall that original echo chamber. Indeed, conceptions of God (or gods) as compassionate, protecting, loving, comforting, and so forth may even be rooted in this simulated return to the womb.
