Category Archives: sound

Sound as Object

Jonathan L. Friedmann, Ph.D.

After World War II, audio recordings gradually replaced sheet music as the dominant means of distributing music to consumers. As a result, the musical centerpiece of the family home moved from the piano to the hi-fi system and, consequently, from active music-making to audition and record collecting. The LP (33⅓ rpm long-playing microgroove vinyl disc), introduced by Columbia Records in 1948, revolutionized the music industry. Along with changing habits of consumption, records transformed basic perceptions about music. Fleeting sounds became fixed objects.

Recordings had been around since Thomas Edison’s mechanical phonograph cylinder, patented in 1878. Within two decades, commercial recording and distribution grew into an international industry. Popular titles at the beginning of the twentieth century sold millions of units. Gramophone records, which were easier to manufacture, ship, and store, hit the shelves around 1910, and subsequent advances in technology made audio recordings increasingly accessible. Still, sheet music—and the piano playing it depended on—remained king. The wholesale value of printed sheet music more than tripled between 1890 and 1909. Some 25,000 songs were copyrighted in the U.S., and sheet music sales totaled 30 million copies in 1910. The popularity of printed music continued through the 1940s. An October 4, 1944 article in Variety boasted “Sheet Music Biz at 15-Year Crest.”

Sales declined precipitously as the 1940s moved into the 1950s. The days when hit songs were fueled by a combination of sheet music and, secondarily, record sales gave way to our recording-dominated era. A Variety article from November 21, 1953 captured the turning point: “Publishing Industry Alarmed by Pop Sheet Music Decline.”

The current ubiquity of recordings is the culmination of a centuries-long effort to mechanically reproduce sound—an evolution that began with musical notation and continued with programmable devices (hydro-powered organs, musical clocks, music boxes, player pianos, and the like). However, those earlier inventions still required either manual engagement or a self-operating device playing in real time. With recordings, sounds disembodied from their performance could be played back at any time. Music itself became the object.

Michel Chion details seven ways recording technology facilitated the objectification of music: (1) capturing ephemeral sound vibrations and converting them into a permanent medium; (2) facilitating telephony, or the retransmission of sounds at a distance from their original source; (3) enabling new ways of systematic acousmatization, or the ability to hear without seeing; (4) allowing sounds to be amplified and de-amplified through electronic manipulation, as opposed to the crescendo or decrescendo of live instruments; (5) affording phonofixation, or the fixing of sounds and reuse of fixed sounds in the recording studio; (6) paving the path toward phonogeneration, or the creation of sound “out of nothing” by way of synthesizers and computers; and (7) giving engineers the ability to reshape sounds through editing, processing, and manipulation.

This last effect, in particular, contributes to Chion’s view of sounds converted into objects: “recording has been—above all from the moment that it enabled editing—the first means ever in history to treat sounds, fleeting things, as objects: that is to say, both in order to grasp them as objects of observation and in order to modify them—to act on their fixed traces.” Likewise, the listener’s control over recordings—through pausing, skipping forward, changing volume, using multiple devices, etc.—furthers the impression of music’s “thing-ness.”

Visit Jonathan’s website to keep up on his latest endeavors, browse his book and article archives, and listen to sample compositions.

The Original Echo Chamber

Jonathan L. Friedmann, Ph.D.

“A temple is a landscape of the soul. When you walk into a cathedral, you move into a world of spiritual images. It is the mother womb of your spiritual life—mother church.” These words from mythologist Joseph Campbell touch on the primitive spatial and acoustic appeal of Medieval and Renaissance cathedrals. Campbell connects the sensation to that of pictograph-adorned Paleolithic caves, which were also likely used for mystical and spiritual ceremonies. The melodic conventions and vocal techniques adapted to these acoustically active stone-walled spaces—epitomized by the straight, drawn-out, and separated tones of Latin ecclesiastical chant—exploit the echo chamber effect, creating an all-encompassing sonic and physical experience. As I explain in an earlier blog post, these ethereal sounds became synonymous with the cosmic voice.

The impression of safety and repose these spaces provide is captured in Campbell’s phrase, “the mother womb.” This image can be taken a step further. The sonically induced, archaic feelings take us back to the literal womb: the original acoustic envelope where direct and indirect sounds are experienced as an undifferentiated gestalt. Psychoanalyst Didier Anzieu describes it as a “sonorous bath”: a lulling sense of weightlessness, rebirth, and being transported.

The ear awakens during the fourth month of fetal development. By week twenty-five, the cochlea—the ear’s frequency analyzer—reaches adult size. From that point forward, the fetus receives, processes, and responds to a growing array of amalgamated sounds, including pressure variations in the bodily walls, two cycles of heartbeats (the mother’s and her own), and acoustic input from outside the womb. The unfiltered sounds are presumably analogous to those heard in a reverberating space, such as a cave or cathedral.
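The description of the cochlea as the ear’s “frequency analyzer” has a rough digital analogue: the Fourier transform, which decomposes a single composite pressure wave into the component frequencies mixed within it. The sketch below is purely illustrative (the tone frequencies, sample rate, and detection threshold are invented for the example, and the ear does not literally compute a DFT); it shows a naive discrete Fourier transform picking two mixed tones out of one undifferentiated signal:

```python
import math

def dft_magnitudes(signal, sample_rate):
    """Naive discrete Fourier transform: return (frequency_hz, magnitude)
    pairs for the lower half of the spectrum of a real-valued signal."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        freq = k * sample_rate / n
        spectrum.append((freq, math.hypot(re, im) * 2 / n))
    return spectrum

# An "amalgamated" signal: a 100 Hz tone and a quieter 250 Hz tone,
# mixed into one waveform, sampled at 1000 samples per second.
rate = 1000
signal = [math.sin(2 * math.pi * 100 * t / rate)
          + 0.5 * math.sin(2 * math.pi * 250 * t / rate)
          for t in range(500)]

# Frequencies whose magnitude rises above an arbitrary threshold.
peaks = [f for f, mag in dft_magnitudes(signal, rate) if mag > 0.25]
print(peaks)  # the two component tones stand out: [100.0, 250.0]
```

As in the womb, the input arrives as a single undifferentiated wave; only analysis, cochlear or computational, separates it into parts.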

Only in early childhood does the ear begin to categorize different sounds. Following R. Murray Schafer’s concept of the “soundscape,” or the combination of acoustic signals heard in an immersive environment, normally functioning ears automatically distinguish between background and foreground signals, both natural and human-made. This behavior, which combines innate capacity and cultural conditioning, is not present in the echoing womb. The lively reverberations, so closely associated with sacred spaces, recall that original echo chamber. Indeed, conceptions of God (or gods) as compassionate, protecting, loving, comforting, and so forth may even be rooted in this simulated return to the womb.

Fleeting Effect

Jonathan L. Friedmann, Ph.D.

By the evening of December 30, 1862, Confederate and Union armies were positioned for battle in Murfreesboro, Tennessee. They were so close to one another that bugle calls could be heard from the opposing camp. Just before tattoo—the bugle signal for lights to be extinguished and loud talking and other disturbances to cease—army bands from each side began playing their favorite tunes. The music carried over the wintry air. “Yankee Doodle” from the North was answered by “The Bonnie Blue Flag” from the South. “Dixie” from the South was met with “Hail Columbia” from the North. The back-and-forth culminated with the rival bands joining together in “Home, Sweet Home,” a song dear to soldiers on both sides. Thousands of homesick voices rose above the blaring brass instruments. It was a poignant reminder of their shared American culture and shared humanity. Then the music stopped. The men went to sleep and rose the next morning to slaughter each other. Of the major battles of the Civil War, the Battle of Murfreesboro (a.k.a. the Battle of Stones River) had the highest percentage of casualties on both sides.

This episode is a stark illustration of music’s fleeting effect. Music is rightly called the most emotional of the arts. In a matter of seconds or less, it can transform the listener’s mood and demeanor. The animosities of warring factions can be disarmed, their sentiments united, and their pulse-rates joined as one. But music’s intoxicating potential lasts only as long as the stimulus itself. Once the sounds evaporate, behaviors generally return to their pre-music-influenced state. As Susanne K. Langer observed in her landmark treatise, Philosophy in a New Key, “the behavior of concert audiences after even the most thrilling performances makes the traditional magical influence of music on human actions very dubious. Its somatic effects are transient, and its moral hangovers or uplifts seem to be negligible.”

Langer’s observation, along with the Civil War example, contrasts with claims prominent in the eighteenth century. Books such as Richard Brocklesby’s Reflections on Ancient and Modern Musick (1749) came with bold subtitles, like “Applications to the Cure of Diseases.” Modern thinkers and researchers refrain from claims that music somehow permanently impacts temperament or disposition. This is why, for instance, music therapy (both active and receptive) tends to be periodic and ongoing, and is typically administered in conjunction with other therapeutic and medicinal treatments.

None of this challenges the fact that music is strongly connected to feelings. If anything, the fleetingness of music-induced sensations sustains our attraction to the art form. It is largely why we return to the same music again and again, and long for musical interludes in our busy lives. These brief mood changes and moments of escape play a revitalizing role, temporarily recharging or redirecting our emotions without causing lingering distractions.

Sound and Spirit

Jonathan L. Friedmann, Ph.D.

Music is considered the most spiritual of the arts. The designation refers equally to music’s substance and impact. Music is revelation: it manifests in ethereal air. Music is boundless: it transcends physical constraints. Music is invisible: its essence cannot be seen. Music reaches inward: it communes with the inner life. Music conjures: it stirs vivid memories and associations. Music alters: it changes moods and frames of mind. These observations point to music’s immateriality. Although it abides by the laws of physics and follows a traceable line of causation, it somehow extends beyond them.

Music embodies the fundamental meaning of spirituality: “of, relating to, or affecting the human spirit or soul as opposed to material or physical things.” Unlike the visual arts, which manipulate tangible matter, music lacks a physical presence. It is force without mass.

This is not to suggest supernaturalism, which is often confused with spirituality. The life of the spirit is not dependent upon an otherworldly plane. From a scientific perspective, everything—including sound—is part of the natural world. The separation of music from material existence is more perception than objective fact. Just as science has demystified the once-taken-for-granted duality of soul and body, the perceived disconnect between music and material reality would not pass scientific muster. Yet, insofar as art is expression and impression, the feeling of otherness is enough to sustain the mystery of music.

Musical responses can be attributed to chemical and neurological mechanisms. For example, dopamine release is the primary inducement of musical “highs.” But, just as scientific explanations of why and how we come to believe in the supernatural do not prevent people from doing so, laboratory studies of music’s effect on the brain do not compel us to pause, analyze, and dismiss musical-spiritual sensations as they occur. We are wired to feel and conceive of music the way we do.

How can these rational/scientific and non-rational/spiritual views be reconciled? One way is by appreciating music’s ability to meet incorporeal needs distinct from the material necessities of food, shelter, clothing, possessions, and the like. The fact that music is a natural phenomenon (like everything else) does not make it any less spiritual. What music accomplishes more than the other arts is a sense of going outside the measurable world, even while being a part of it.

Revising the Triangle

Jonathan L. Friedmann, Ph.D.

Music-making is sometimes depicted as a triangle consisting of composer, performer, and listener. It is a triangle in constant motion, with each side responding to the others. The interplay might go something like this: The composer interprets herself, the performer interprets the composer, the listener interprets the performer, the composer reinterprets herself, the performer reinterprets the composer, the listener reinterprets the performer, etc. As this clumsy illustration suggests, there is no one type of triangle or order of interaction that works for all scenarios.

It doesn’t take much to warp the triangle’s dimensions. When the composer is dead or was never identified to begin with (as with most folk music), one corner of the shape is inactive. When the music is improvised, the composer and performer are one and the same. Sound recording can freeze a one-time performance, leaving the listener to interpret an inanimate artifact. Electronic music can eliminate the need for a performer’s mediation.

These and other iterations require a revision of the triangle, the conventional version of which survives solely under strict conditions: a living composer writes music that is performed by living players for a live audience. The only side that remains constant in all cases is the listener—so much so that the model should be redrawn to favor the perceiver’s corner. One possibility is a tetrahedron (a three-dimensional triangle) that funnels sounds toward the listener. At one end is a wide opening, which receives music of all sorts: live, recorded, electronic, manual, composed, improvised. At the other end is a narrow opening, through which the music empties into the ear.

The advantage of this revised triangle is threefold. First, it does not discriminate against performance modalities. An orchestra premiering a new work in a concert hall is on equal footing with a turn-of-the-twentieth-century folksong recording. Second, it emphasizes that music is always heard/interpreted in the moment. This is true whether the performance is live, recorded, or a combination of the two (e.g., someone singing along to a karaoke track). Third, it reminds us that music is fundamentally audience-dependent. Painting, sculpture, and other concrete arts are affairs between artist and tangible materials. Once the work is finished, the creative process is complete; whether anyone sees the work is, in absolute terms, irrelevant. Not so with the immaterial art of music. If nobody hears it, it cannot be said to exist.

Terrestrial Sounds

Jonathan L. Friedmann, Ph.D.

On September 5, 1977, NASA sent a probe to study the outer Solar System and continue on to interstellar space. Named Voyager 1, the sixteen-hundred-pound craft is now approximately twelve billion miles from Earth. An identical spacecraft, Voyager 2, was launched two weeks before its interstellar twin, but Voyager 1 moved faster and eventually passed it. Both probes carry a golden phonograph record containing sounds and images meant to convey the diversity of terrestrial life and human culture. The hope is that, should intelligent extraterrestrials find one of these infinitesimal records in infinite space, they would be able to decipher its contents.

The record includes 116 images and an array of earthly sounds: greetings in fifty-five languages, volcanoes, a chimpanzee, a heartbeat, a train, Morse code, a wild dog, a mother and child, rain, and much more. It also has ninety minutes of music, ranging from a Pygmy girl’s initiation song to Indonesian gamelan music to the first movement of Bach’s Brandenburg Concerto No. 2 to the “Sacrificial Dance” from Stravinsky’s Rite of Spring.

The possibility of an extraterrestrial species obtaining, playing, and comprehending the Golden Record is minuscule. Not only is it a tiny object moving in the vastness of space, but the sounds it includes are utterly earthbound. In striving to portray sundry soundscapes, the record reveals a certain, if subtle, unity: every sound on this planet bears the imprint of this planet. Such earthliness would surely fall on deaf alien ears (if they even have an auditory mechanism). The sounds we make or perceive have an evolutionary history unique to our orb.

In the decades since the Voyager space pods were set in motion, much has come to light about the natural origins of music. Bernie Krause’s groundbreaking work on non-human “musical” proclivities suggests, among other things, the millennia-spanning influence of geophony (Earth sounds) and biophony (non-human animal sounds) on anthrophony (human sounds). Other theories of music’s origins point to environmental imprints in one way or another. A rough amalgamation of these nuanced hypotheses shows music as a combination of the imitation of nature and the exploration of human capacities.

Added to this is mounting evidence of the interconnectedness of Earth’s living creatures. As Neil Shubin explains in his popular book, Your Inner Fish, the close examination of fossils, embryos, genes, and anatomical structures indicates that all animals, prehistoric and modern, are variations of the same blueprint—hence the fish within us all. (Shubin remarked in a lecture that he could have just as easily called the book, Your Inner Fly.) What this means musically is that creaturely sounds of all sorts emanate from the same extended biological family, and are thus shaped by variations of the same constraints. The reason why researchers have been able to explore musical vocabularies of songbirds and bugs, and their probable influence on early humans, is because, despite surface dissimilarities, animals are people too (or, more accurately, humans are animals).

The extraterrestrial species that happens upon the Golden Record will almost certainly be nothing like us. Life on Earth shares an anatomical makeup that could have only developed here; other habitable planets would have other ingredients. This is a major criticism of popular depictions of aliens, which, aside from The Blob (1958) and a few others, invariably appear as insects, reptiles, humanoids, or a combination of the three. Genes on another planet would give rise to species beyond our Earth-born imaginations. And our sounds—musical, linguistic, animal, or otherwise—would be unlike anything they’ve ever heard.

Music Shapes the World

Jonathan L. Friedmann, Ph.D.

As he grew older, Thomas Edison (1847-1931) became increasingly fascinated with the alleged mystical powers of sound and music. Inspired by the spiritualism and paranormal craze of the decades surrounding the turn of the last century, Edison announced in 1920 that he was developing a machine that could communicate with the dead. He reasoned that if a spirit world actually existed, an extremely sensitive device was needed to converse with it. A little closer to reality, Edison conducted a series of Mood Change Parties, in which participants listened to recordings and filled out charts documenting their responses. The goal was to link mood changes—worried to carefree, nervous to composed, etc.—with corresponding musical stimuli.

One of these “parties” took place in a Yale University psychology class. As a newspaper described it, it aimed “toward alleviating neurotic conditions, with a view of discovering psychological antidotes for depressed conditions of mind whether due to fatigue or disappointment.” Similar experiments were conducted at other Ivy League schools, giving an air of legitimacy to the proceedings despite company documents showing little serious interest in the project’s scientific merits or lack thereof. Not surprisingly, both the séance device and the Mood Change Parties were, more than anything, elaborate marketing ploys.

Whatever the motives, the machine designed for the deceased and the parties intended for the living grew from Edison’s awareness that sound could manipulate the psychological atmosphere. Pseudoscientific claims aside, it is clear that certain tone patterns used in certain environments can cause us to feel as if something otherworldly is occurring (hence the effect of science fiction film scores). Likewise, a group of people with common cultural backgrounds (such as Yale students in the 1920s) usually have shared reactions to changes in tone sequences—the differences being only in degree.

In both cases, too, sound-triggered transformations are perceived not just in the internal realm of emotions, but also in the surrounding environment. The room itself is felt to shift from heavy to light, tense to relaxed, sterile to active, etc. But these are really psychological shifts. From a philosophical standpoint, this adds support to the notion that the mind shapes the world around us. Before we can begin to apply rational thought, subconscious processes organize data coming to us through our senses, and largely determine what it is we are experiencing. Musical sounds strike us on such an all-consuming and mind-altering level that the emotions stirred interiorly tend to influence how we perceive the exterior world.

In Edison’s experiments, this was demonstrated both in the presumed way that aural changes could create an ambience conducive to communicating with the dead, and the more realistic idea that the mood of a party—not just those in attendance—could change in accordance with listening selections. In this modest sense, music can be said to shape the world around us.

Sound in Wax

Jonathan L. Friedmann, Ph.D.

The earliest wax cylinder phonographs—the first commercial medium for recording and reproducing sound—were entirely mechanical. They were hand-cranked and needed no electrical power. All that was required was a lathe, a waxy surface, a sharp point for a stylus, and a resonating table. To impress sound waves onto wax, the voice or instrument was positioned close to the large end of a horn. The vibrations moved a needle, which carved a groove on the rotating wax. According to Walter Murch, an acclaimed film editor and sound designer, everything used in these early machines was available to the ancient Greeks and Egyptians. But it took until the late nineteenth century, and the genius of Thomas Edison and his team, to execute the recording process.
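The mechanical chain described above (vibration moves a needle, the needle carves depths into rotating wax, and a stylus later retraces them) can be sketched as a toy simulation. This is an illustrative analogy rather than a model of any real phonograph; the groove’s depth resolution and the test waveform are invented for the example:

```python
import math

def record_to_groove(wave, depth_levels=64):
    """Mechanically 'cut' a waveform into a groove: quantize each pressure
    sample (in -1..1) into one of a fixed number of physical groove depths."""
    groove = []
    for sample in wave:
        clamped = max(-1.0, min(1.0, sample))
        groove.append(round((clamped + 1) / 2 * (depth_levels - 1)))
    return groove

def play_back(groove, depth_levels=64):
    """The stylus retraces the groove, converting depths back into vibration."""
    return [d / (depth_levels - 1) * 2 - 1 for d in groove]

# One period of a sine-wave 'voice', sampled 100 times.
wave = [math.sin(2 * math.pi * t / 100) for t in range(100)]
replayed = play_back(record_to_groove(wave))

# Playback matches the original only within the groove's physical resolution.
max_error = max(abs(a - b) for a, b in zip(wave, replayed))
print(max_error <= 1 / 63)  # True
```

The quantization step here stands in for the wax’s physical resolution: the early machines were “too crude and imprecise” for musical nuance precisely because that resolution was coarse.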

Why did it take so long to capture sound? Musician David Byrne has informally speculated that maybe it didn’t. Perhaps someone in antiquity invented a similar device and later abandoned it; or perhaps the device itself was simply demolished in the ruins of history. While conceivable on a technological level, this hypothesis is unlikely considering the prevailing ethos of the ancient world. The ephemerality of sound was part of its attraction: it was momentary, mysterious, transient and transcendent. As this fleetingness was highly valued, there was little or no inclination to record. Murch puts it this way: “Poetically, the beauty of music and the human voice was used as a symbol of all that’s evanescent. So the idea that you could trap it in any physical medium never occurred to [them] . . .”

This contrasts with the rush to develop written systems that enshrined language. The ancients recognized that certain things should be documented, like governmental records, priestly decrees, royal chronicles, philosophical treatises, etc. What these had in common was a silent beginning: they were soundless thoughts committed to paper (or papyrus or parchment or tablets or wood). Writing gave concrete form to facts and concepts that, while often referencing observable phenomena, had no tangibility of their own. In contrast, sound was understood as being completely formed. It was received sensually, experienced kinesthetically, and processed emotionally. It existed in the moment it was made.

It is worth noting that Edison first thought of wax cylinder recorders as dictation machines. They were to record the owners’ ideas and messages and, ideally, preserve the great speeches of the day. This limited purpose reflected the limitations of the early devices: they were too crude and imprecise to capture the nuances of musical performance. True, music recording and playback were in Edison’s long-term plan, and they became major functions as the machines advanced. But it is feasible to consider that Edison’s initial goal of preserving dictation was—and arguably still is—a worthier and more practical goal than detaining music.

Musicians commonly lament that they are slaves to their own recordings. The version that appears on an album is the version that fans want to hear, and deviations are typically received as imperfections, inaccuracies or unwanted departures from the “authoritative” source. Some improvising musicians even feel obliged to give their audiences note-for-note reproductions of recorded solos. This is not to negate the enormous benefits and incalculable cultural impact of musical recordings. Our understanding of music as a diverse human enterprise owes mightily to the proliferation of recorded sounds, and musical creativity thrives when there is access to other music. But something of music’s temporality is lost in recording. Imprinting sound in wax or digital audio creates the illusion of permanence.

(Not) Defining Music

Jonathan L. Friedmann, Ph.D.

A universally applicable definition of music will never be constructed. As an ever-present and ever-malleable aspect of human life, music, it seems, has taken as many forms, shades and variations as humanity itself. A truly objective view of what music is (or can be) would be so inclusive as to be almost useless. Every aspect of the musical entity is open to challenge and reconfiguration: devices used to produce sounds (instruments, found objects, electronic sampling, vocals, etc.); modes of transmission (oral tradition, written notation, live performance, recordings, etc.); means of reception (speakers, headphones, classroom, concert hall, etc.); the sounds themselves (tones, rhythms, consonances, dissonances, etc.).

Yet, at the same time, sources like the Encyclopædia Britannica remind us that, while no sounds can be described as inherently unmusical, “musicians in each culture have tended to restrict the range of sounds they will admit.” Philosopher Lewis Rowell likewise defers to the role of convention: “let music signify anything that is normally called music.” In both cases, monolithism is discarded in favor of relativism: an awareness that ideas about music depend more on one’s location and exposure than on sonic properties themselves. And now, with the aid of technology and global connectivity, it is possible to cultivate an ever-expanding musical vocabulary that reaches far beyond one’s own cultural milieu.

But, even if we embrace globally diverse musical offerings (or, at minimum, acknowledge that what one culture accepts as music is not the final word), it is still the case that music is a cultural product, and, as such, comes to us through a long and multi-actor process of experimenting, selecting, sculpting, modifying and normalizing. Indeed, while abstract considerations may lead us to abandon hard and fast rules about what constitutes a musical sound, whatever music can be said to be is the result of a cultural process. Music, in other words, is defined for us. (It bears noting that even “rule-breaking” systems like twelve-tone serialism and free jazz draw their raw materials from pre-established tools and conceptions.)

To perhaps state the obvious, we do not begin with the view that music is a loose and inclusive category. Rather, it is the existence of musical variants within and between cultures that forces us to recognize that music is a loose and inclusive category. What we are left with, then, is a formulation that is not entirely satisfactory, but is at least defensible: cultures organize sounds in such a way that they are heard as music.

Before Speech

Jonathan L. Friedmann, Ph.D.

Music and speech are not the same thing. One is abstract and arbitrary; the other is concrete and absolute. One uses sound as its subject matter; the other as a vehicle for logos. The grammar of one is built on pitch, key, rhythm, harmony and technique; the grammar of the other is based on morphemes, phonemes, words, syntax and sentences. One stimulates imprecise affective states; the other imparts precise information. One stems from emotion; the other from reason. Despite these dissimilarities, both music and speech grew from the primal necessity for self-expression.

In the evolution of human communication, wordless vocal music—as distinct from song—is speculated to have preceded structured language. Part of this view is rooted in observation. As anyone familiar with infants knows, our earliest attempts to communicate vocally involve singsong patterns of mostly vowel sounds. Although indefinite, this “naked language” is unmistakable in its desire to relay specific thoughts and needs (often intelligible only to the parent). The result is an emotive sequence of tones approaching, though not identical to, music.

This could lead us to the now-defunct theory of recapitulation (or biogenetic law), popularized by Ernst Haeckel (1834-1919), in which the stages of child development are thought to recapitulate the developmental stages the species as a whole passed through over millennia. In that old theory, the infant’s progress from nonsense vocables to coherent speech is a repetition of what our prehistoric ancestors went through, only in quick time. Modern biology has dumped this idea into the dustbin of mythology. However, the premise that music-speech predated language-speech has been revived, though in a more limited way.

One intriguing example is Steven Mithen’s 2005 book, The Singing Neanderthals: The Origins of Music, Language, Mind and Body. Mithen, a professor of archaeology at the University of Reading, has traced pseudo-singing to Neanderthals, a Middle to Late Pleistocene species closely related to modern humans. According to Mithen, while Neanderthals lacked the neural circuitry for language, they did have a proto-musical form of communication that incorporated sound and gesture, influenced emotional states and behavior, and was rhythmic, melodic and temporally controlled—that is, “a prelinguistic musical mode of thought and action.” He has coined a cumbersome neologism to describe the phenomenon: “Hmmmmm,” for holistic, multi-modal, manipulative, musical, and mimetic.

Although the title of the book suggests that Neanderthals “sang,” Mithen is careful to state that their vocalization was neither language nor music as we know them today. This implies a more nuanced and complex line of evolution than the earlier simplistic formula of song to speech. Of course, it is impossible to know for sure whether a music-like activity evolved prior to and/or gave rise to language. Without the aid of a time machine, we are reliant on the sophisticated, yet ultimately limited, tools of archaeology, anthropology, psychology and neuroscience. But speculate we can.
