
Listening to the Wood

Resonators made of wood enhance overtones in many instruments, offering big auditory rewards.

In our distracted lives we get fewer chances to let any single source of human innovation saturate our senses. To be sure, opera fans, Marvel movie addicts and book readers may stay with a single source for an extended period of time. The rest of us are usually on to several other things in short order, allowing music to turn into sonic wallpaper that fades into the background.

But more recently, with the time made possible by COVID isolation, perhaps more of us have noticed that the right music reproduced on the right sound system can offer new rewards. It can be a thrill to hear what was never noticed before. And what we frequently hear if we listen with care are the amazing sounds of the wood resonators built into most acoustic instruments. As a start, think of oboes, violins, violas, basses, harps, pianos, acoustic guitars and cellos.

There’s no doubt that it is easy to get lost in the weeds when describing how music yields pleasure; we are still offered a range of explanations for where that pleasure comes from. What can be said about a kind of “language” that is all expressive and mostly non-stipulative? It’s anybody’s guess what Beethoven was “saying” in the first four notes of his iconic Fifth Symphony. What it represents is mostly in us.

That is equally true for the mental associations that surface from any single musical phrase. But the wood of an instrument provides some useful hints. What seems clear over a lifetime of listening is that part of the joy of music lies in the ever-unfolding timbres of the singers and instruments themselves: specifically, the aural colors heard in the fundamentals and overtones that emerge from acoustic sources. The overtones that double and triple the frequency of a fundamental are subordinate in volume, but they add a richness of sound that is shaped by the instrument itself. Timbre is frequently what we are describing when we say we “like the sound” of a particular instrument. This quality explains why two different instruments playing the same note can sound so different. The note C2 played on a synthesizer can be pretty uninteresting. But play the same note on the open C string of a cello, and suddenly the instrument fills our ears with multiples of that fundamental, often making the wood itself strangely visible to the ear. The wood amplifies and resonates at complementary frequencies.
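Since these overtones fall at simple integer multiples of the fundamental, the arithmetic is easy to sketch. The short snippet below computes the first few partials above the cello’s open C string; the 65.41 Hz figure is the standard equal-tempered pitch of C2, and the clean integer-multiple model is a textbook idealization, since a real wooden body strengthens some partials and damps others, which is precisely what we hear as timbre.

```python
# A minimal sketch of the harmonic series above a cello's open C string.
# The 65.41 Hz fundamental is standard tuning for C2; the integer-multiple
# model is the textbook idealization, not a measurement of any instrument.

FUNDAMENTAL_C2_HZ = 65.41  # open C string on a cello

def harmonic_series(fundamental_hz: float, count: int = 6) -> list[float]:
    """Return the fundamental plus its first integer-multiple overtones."""
    return [fundamental_hz * n for n in range(1, count + 1)]

for n, freq in enumerate(harmonic_series(FUNDAMENTAL_C2_HZ), start=1):
    label = "fundamental" if n == 1 else f"overtone x{n}"
    print(f"{label:>12}: {freq:7.2f} Hz")
```

Run as-is, it prints 65.41 Hz, then 130.82, 196.23 and so on: the doublings and triplings described above, each quieter than the last on a real instrument.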


Sound From Organic Sources

It’s no coincidence that the bone in the skull is also a resonator, with some of the density and rigidity of wood. It’s a big part of what makes up a singer’s voice. These natural materials put to work in the service of music create a kind of synesthesia, where we “see” the sound within the materials set in motion by a musician. Perhaps we should consider mahogany, spruce, willow, maple and rosewood as our organic twins: trees outside that match the flexing tree that resonates within us.

Again, the cello is a good example. Pluck a string on an unamplified guitar and you get . . . well . . . not much: a thin, flat sound. But strings tied to the resonator of a cello come alive, sounding frequencies in the neighborhood of the human voice, and capable of adding richness when doubled with the music someone is singing. We hear a pairing of two kindred sources that can blossom into expressiveness. It’s little surprise that audio engineers and musicians sometimes describe what they hear as a “bloom” of sound, enhanced by resonances from wooden instruments and sometimes from buildings.

A good example is a Rodgers and Hammerstein ballad from Carousel (1945), captured in a YouTube clip featuring the John Wilson Orchestra at a BBC Proms concert. “If I Loved You” seems nearly perfect as a musical conversation between singers Sierra Boggess and Julian Ovenden. Boggess’ crystalline voice first carries the words of this curious un-love song mostly by herself, with the orchestra quietly backfilling. But at 3:50 Ovenden takes his turn, and the orchestra starts to double his singing, matching his version of the theme note for note. The effect is stunning. Doubling instruments with a voice exists in virtually every musical form, usually adding musical weight and power. And the cello is a special case, because it duplicates the range of a male voice. The resulting synchronicity drives home the peculiar pathos within the song.

All of this is worth mentioning because so much contemporary music has been stripped of its birthright of natural resonance. Harmonics and overtones are easily swamped by synthesizers, amplifiers and audio processing that take the sound of wood out of a recording. As I note in The Sonic Imperative (2021), the electric guitar and the audio effects introduced by Les Paul and others in the early 1950s dealt a blow to the acoustic origins of music. I still like the sounds of Les Paul and Mary Ford. But their constructed sonics also make me want to go back to performances where air and wood are the auditory attractions, as in a modern BBC/Linn recording of Italian Baroque concertos. In that performance, 17th-century wooden instruments fill the room with a glow of natural resonance.

A.I. and the Mastery of Spoken Language

The question isn’t just whether we are capable of making simulations of human speech but, rather, whether bots can replicate the singular mind that gives form to all speech.

In Steven Spielberg’s dystopian film A.I. Artificial Intelligence, a software designer played by William Hurt explains to a group of younger colleagues that it may be possible to make a robot that can love. He imagines a machine that can learn and use the language of “feelings.” The full design would create a “mecha,” a mechanized robot nearly indistinguishable from a person. His goal in the short term was to make a test case: a young boy who could be a replacement for a couple grieving over their own child’s extended coma.

The film throws out a lot to consider. There are the stunning Spielberg effects of New York City drowning in ice and water several decades into the future. But the core focus of the film is the experiment of creating a lifelike robot that could be something more than a “supertoy.” As the story unfolds, it touches on the familiar subject of the Turing Test: the long-standing challenge to make language-based artificial intelligence that is good enough to be indistinguishable from the real thing.

Should we become attached to a machine packaged as one of us? Even without any intent to deceive, can spoken language be refined with algorithms to leap over the usual trip wires of learning a complex grammar, syntax and vocabulary?  It takes humans years to master their own language.

The long first act of the film lets us see an 11-year-old Haley Joel Osment as “David,” effectively ingratiating himself with the Swinton family. In my classes pondering the effects of A.I., this first segment was enough to stop the film and ask students what seemed plausible and what looked like wild science fiction. I always hoped to encourage the view that no “bot” could converse in ordinary language with the ease and fluency of a normal kid. That was my bias, but time has proven me wrong. If anything, David’s reactions were a bit too stiff compared to the loquacious chatterbots around today. Using Siri, Alexa or IBM’s Watson as simple reference points, it is clear that we now have computer-generated language that has mostly mastered the challenges of formulating everyday speech. There’s no question that current examples of synthetic speech are remarkable.

Here’s an example you can try. I routinely have these short essays “read” back to me by Microsoft Word’s “Read Aloud” bot, which comes in the form of a younger male or female voice that can be activated from the “Review” tab in the top ribbon. Since I don’t have an editor, it helps to hear what I’ve written, often exposing garbled prose that my eyes have missed. I recall that the first version of this addition to Word was pretty choppy: words piled on words without much attention to intonation, or to how they might fit within the arc of a complete sentence. Now the application reads with pauses and inflections that mostly sound right, especially within the narrower realm of formal rather than idiomatic English. The second paragraph of this piece, read back to me via this Word function, now sounds surprisingly close to a human reader.
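For anyone who wants a similar read-it-back loop outside of Word, an offline text-to-speech library can approximate the effect. The sketch below assumes the pyttsx3 library and a hypothetical draft file name; it is not how Word’s feature works internally, just a stand-in that drives the speech engine already built into the operating system.

```python
# A minimal sketch of reading a draft aloud with pyttsx3 (pip install pyttsx3).
# This only approximates Word's "Read Aloud"; the library choice and the
# file name "draft_essay.txt" are assumptions for illustration.
import pyttsx3

engine = pyttsx3.init()            # hooks into the OS speech engine
engine.setProperty("rate", 170)    # words per minute; a slower pace exposes garbled prose

with open("draft_essay.txt", encoding="utf-8") as f:
    draft = f.read()

engine.say(draft)                  # queue the text for speech
engine.runAndWait()                # block until the reading finishes
```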

Of course, language “means” only when it is received and interpreted by a person. An individual has what artificial intelligence does not: a personality, likes and dislikes, and a biography tied to a life cycle. Personality develops over time and shapes our intentions. It creates chapters of detail revealing our social and chronological histories as biological creatures. So a key question isn’t whether we are capable of making simulations of human speech; the bigger question is whether bots can replicate the unique mind within each of us that gives form to it.

Even when tied to advanced machine-learning software, chatterbots use similarity to falsely suggest authenticity. And there’s the rub. Generating speech that implies preferences, complex feelings or emotions makes sense only when there is an implied “I.” For lack of a better word, with Siri or Watson there is no kindred soul at home. The language of a bot is a simulacrum: a copy of a natural artifact, but not a natural artifact itself.

Even so, we should celebrate what we have: machines that can verbalize fluently and, with complex algorithms, might speak to our own unique interests.