TIL that Marconi believed, late in life, that no sound ever dies completely. He dreamt of building a device sensitive enough to pick up the actual words of Jesus at the Sermon on the Mount.

Authored by americanscientist.org and submitted by We-are-straw-dogs

Thus begins Perfecting Sound Forever, Greg Milner’s cultural and technological history of the sound-recording industry. As far as I know, the original-cast album of the Sermon on the Mount has not yet been released on CD, but plenty of acoustic waves emitted in our own era have been captured and preserved, to become the golden oldies of future generations. Neil Young said it: Rock and roll will never die.

For those of us living in an age of ubiquitous recorded audio, it can be hard to appreciate that sound was once the most evanescent of sensory experiences. Faces could live on in portraiture (even before photography), and words could be written down, but until Edison dreamed up his phonograph, the human voice never survived except in memory and imagination.

Edison thought he had invented a dictation machine; his business model was to sell recording equipment and blank media on which people would make spoken memos to themselves or perhaps to posterity. The recording of music was an afterthought; almost 25 years passed between the first version of the phonograph and the release of the first commercial music recordings. After that, though, it wasn’t long before the “phonograph” became the “record player.” This was not to be an instrument with which we would record our own voices; instead, a few star performers—from Enrico Caruso to Hannah Montana—would sell millions of copies of recordings, which the rest of us would listen to over and over. The process of creating those sound recordings became an art, a science and an engineering profession.

Edison’s early phonographs recorded on wax-coated cylinders; the rival gramophone machines of the Victor company played shellac-coated discs. The competition between these two recording formats was the first of many contests for market share that occupy much of Milner’s history. Over the years, consumers of recorded music have been confronted with a long series of choices: 78s versus 45s versus 33s, mono versus stereo, tubes versus transistors, tapes versus discs, cassettes versus eight-tracks, CDs versus vinyl, analog versus digital, and now MP3s versus WAVs and a dozen other digital file formats. Behind the scenes, equally contentious issues have divided the community of producers and sound engineers. Should the studio be a performance hall that contributes ambience to the sound, or an anechoic chamber? Do microphones belong out in the auditorium where a listener would sit or close to the voices and instruments? Should a performance be recorded all in one take or assembled from bits and pieces?

My own exposure to recorded music began around the time that the “record player” turned into the “hi-fi.” That term “high fidelity” made the aims of the enterprise seem simple and obvious: A recording should capture the sound of the original performance and reproduce it faithfully in the listening room. When you closed your eyes, the cabinet full of glowing and blinking equipment was supposed to disappear, to be replaced by Leonard Bernstein and the New York Philharmonic, or by Buddy Holly and the Crickets. If this illusion was hard to achieve, that meant you needed to work on your turntable’s rumble and wow and flutter, or suppress your amplifier’s intermodulation distortion, or get yourself some better woofers and tweeters.

Milner traces the idea of the hi-fi illusion back to an invitation-only performance in Montclair, New Jersey, in 1915. Three musicians, including contralto Christine Miller, shared the stage with a new Edison Diamond Disc Phonograph. At one point Miller sang a duet with her own recording of an aria from Mendelssohn’s Elijah.

The record began, and Miller let it play for a while. She began singing along with it, and then stopped. There were audible gasps from the audience. It was uncanny how closely Miller’s recorded voice mirrored the sounds coming from her mouth onstage.

The stunt was so effective that such “tone tests” became a popular road show, with Miller and others playing hundreds of towns across the country over the next 10 years. And the notion has been revived many times since then, as in the long-running advertising slogan, “Is it live, or is it Memorex?”

Can we really believe that a phonograph in 1915 reproduced sound so accurately that it fooled a theater audience? Milner points out that if the tone tests were not exactly fraudulent, they were very carefully staged. The record always played continuously; it was the singer who stopped and started. In effect, what was being tested was not the ability of the phonograph to mimic the live human voice but the ability of the singer to imitate the tonal characteristics of the recording device, “such as the ‘pinched’ quality it lent to voices.”

Today, ironically, musicians are again struggling to imitate their own recordings, often with less success. In modern studio practice, a piece of music is sliced into separate tracks for each instrument and diced into multiple takes for each phrase or even each note. The sounds are digitally processed and enhanced. Software can rescue a vocalist who wanders off key or a drummer who can’t keep the beat. The final product is a seemingly flawless performance, which the musicians may be hard pressed to duplicate on stage without all the technological aids. Hence the recent flurry of controversies over lip-synching—or, in the case of Yo-Yo Ma at the Obama inauguration, bow-synching.
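
To make that pitch-rescue step concrete, here is a minimal sketch of the core idea behind automatic pitch correction: measure how far a sung note sits from the nearest equal-tempered semitone, then snap it there. (This is an illustration in Python, not anything from Milner's book; the reference pitch and the example frequency are assumptions.)

    import math

    A4 = 440.0  # standard reference tuning pitch, in Hz

    def snap_to_semitone(freq_hz: float) -> float:
        """Quantize a detected pitch to the nearest equal-tempered semitone.

        Compute how many semitones the note sits above or below the
        reference pitch, round to the nearest whole semitone, and convert
        the result back to a frequency.
        """
        semitones = 12 * math.log2(freq_hz / A4)
        return A4 * 2 ** (round(semitones) / 12)

    # A vocalist lands slightly sharp at 452 Hz; the corrected note is
    # pulled back to a true A at 440 Hz.
    print(snap_to_semitone(452.0))  # 440.0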

The 1950s quest for perfect audio fidelity—for the illusion of the concert hall in the living room—was doubtless naive, but there are worse alternatives. Milner discusses several of them at length. He gives the overall impression that records have been getting worse and worse even though the tools for making them have become steadily more powerful and more widely available.

ViskerRatio on November 26th, 2020 at 13:42 UTC »

The reason this doesn't work comes down to the noisy-channel coding theorem, which demonstrates, under certain assumptions, that there is a limit to the information content of a signal in any environment with noise (and the path between the millennia-old Sermon on the Mount and a modern listener certainly qualifies).

However, the mathematical basis for this wasn't demonstrated until decades after Marconi's dream.
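
For a rough sense of the math: the Shannon-Hartley form of that limit puts channel capacity at C = B log2(1 + S/N), so as the signal-to-noise ratio decays toward zero, the number of recoverable bits per second does too. A minimal sketch in Python (the bandwidth and SNR figures are made up purely for illustration):

    import math

    def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
        """Shannon-Hartley limit: the maximum error-free bits per second
        any receiver can extract from a noisy channel."""
        return bandwidth_hz * math.log2(1.0 + snr_linear)

    # Telephone-grade speech band with a strong signal: ample capacity.
    print(shannon_capacity(3_000, 1_000))   # ~29,900 bits/s

    # The same band once the signal has faded far below the noise floor:
    # capacity collapses toward zero, and no device, however sensitive,
    # can recover the words.
    print(shannon_capacity(3_000, 1e-12))   # ~4.3e-9 bits/s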

blewsyboy on November 26th, 2020 at 13:00 UTC »

I just watched Devs on FX. I can’t say anything so as not to spoil it, but watch it.

rb6k on November 26th, 2020 at 12:42 UTC »

Imagine if he was right and suddenly everything you’ve ever said could be heard by an app on your phone, just by going back to the place you said it and tuning the volume and frequency to the right spot.

All those times I told my wife I didn’t hear her and she claims I responded/agreed at the time would become very interesting. One of us is wrong! (It’s probably me!)