I’ve always had an interest in the myriad ways art and science intersect: not surprisingly, Leonardo da Vinci is a hero of mine.
Few arts have been altered more by advances in science and technology than music, a point made by New Scientist’s Technology Blog recently when it listed five milestones in music technology (and found all the great links I’ve included below!).
The list is in chronological order, so it might surprise you to know that the first thing on it is the synthesizer. We think of synthesizers as hallmarks of modern pop music, but Elisha Gray invented the first one back in 1876.
It wasn’t what he set out to invent: he was working on a telephone (but lost out to Alexander Graham Bell for the patent). He attached steel reeds of various lengths to a self-vibrating electromagnetic circuit. Each reed produced a distinct note. Gray transmitted the results down a telephone line. He called it a Musical Telegraph.
Next on New Scientist’s list is the 1930 invention of the first electric pickup: magnetic coils mounted under the strings of electric guitars. When the strings vibrate, they disturb the coils’ magnetic fields. This generates small currents that can be sent to an amplifier and converted into sound.
George Beauchamp, a vaudeville musician, is credited with making the first pickup on his dining room table; he wound the coil's wire using a sewing-machine motor. The electric guitar that resulted was marketed as the Rickenbacker.
If synthesizers aren’t new technology, drum machines surely must be, right?
Wrong! Leon Theremin invented the first drum machine, the Rhythmicon, in 1930. It did more than just make drum sounds: it created different notes, which could be repeated at different frequencies and with different timbres, such as “woodblock” and “triangle.”
Next on the New Scientist list is multi-track recording, which allows instruments and voices to be recorded separately and then mixed together. Les Paul (also famous as a guitarist and guitar-inventor) first used multi-track recording in 1947 for “Lover (When You’re Near Me),” working in his garage and recording onto wax disks rather than magnetic tape.
The final item on the list is the distorted guitar sound, which results when amplifiers and loudspeakers are pushed past their limits. The first recording to use it, Link Wray’s 1958 “Rumble,” got the sound in a very direct way: Wray punched holes in his amplifier’s speaker cone with a pencil.
The march of music technology continues: a couple of new items popped up just this week.
The squeal of feedback occurs when sound from the speakers re-enters the microphones and is amplified again; at certain frequencies (which vary with the instruments and the room involved), each pass around the loop comes back louder than the last. The resulting runaway loop produces a painful screech.
Sound checks are aimed, in part, at identifying troublesome frequencies so the sound engineer can keep an eye on them. Existing automatic feedback filters remove some non-feedback sounds along with the feedback, and sometimes fail to catch all of it. But researchers at the University of London have created software that doesn’t try to catch feedback after it happens; instead, it prevents it from even beginning. The software, calibrated during the sound check, lowers the volume of feedback-threatening frequencies when they rise above a critical level, and lowers all the other frequencies’ volumes at the same time so the balance isn’t affected.
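The idea of cutting a hot frequency while preserving the overall balance can be sketched in a few lines of Python. To be clear, this is my own toy illustration, not the researchers’ actual algorithm: the function name, the threshold, and the uniform-cut strategy are all assumptions made for the sake of the example.

```python
def limit_feedback(mags, prone_bins, threshold):
    """Toy preventive limiter.

    mags       -- magnitudes of the frequency bins in the current mix
    prone_bins -- indices of bins the sound check flagged as feedback-prone
    threshold  -- level above which a prone bin risks ringing

    If any feedback-prone bin exceeds the threshold, scale the WHOLE
    spectrum down by the same factor: the hot bin drops back to the
    threshold, and because every other bin is cut by the same ratio,
    the relative balance of the mix is preserved.
    """
    hot = max(mags[i] for i in prone_bins)
    if hot <= threshold:
        return list(mags)          # nothing ringing; leave the mix alone
    gain = threshold / hot         # bring the loudest prone bin to threshold
    return [m * gain for m in mags]
```

A real system would of course work on a live audio stream and react within milliseconds; this sketch only shows the balance-preserving cut applied to one snapshot of the spectrum.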
The other interesting new music technology that popped up this week is something called MySong, created by Microsoft. (Follow that link to see a video of it in action and hear some samples.)
MySong can take a sung vocal melody and generate appropriate chords to accompany it: in other words, it’s an instant backing band. It creates a variety of possible accompaniments which the user can select from by moving a slider that determines the “happy factor” and “jazz factor.”
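To give a feel for the melody-to-chords idea, here is a deliberately naive Python heuristic: score each candidate chord by how many of the melody’s pitch classes it contains, and pick the best. MySong itself uses a trained statistical model, not this; the chord table and scoring below are purely my own illustration.

```python
# Diatonic triads in C major, as sets of pitch classes (0 = C, 4 = E, ...).
TRIADS = {
    "C":  {0, 4, 7},
    "Dm": {2, 5, 9},
    "Em": {4, 7, 11},
    "F":  {5, 9, 0},
    "G":  {7, 11, 2},
    "Am": {9, 0, 4},
}

def best_chord(melody_midi):
    """Return the triad covering the most notes of a melody fragment.

    melody_midi is a list of MIDI note numbers (e.g. 60 = middle C).
    Each note is reduced to its pitch class, and each chord is scored
    by how many of those pitch classes it contains.
    """
    pcs = [n % 12 for n in melody_midi]
    return max(TRIADS, key=lambda ch: sum(pc in TRIADS[ch] for pc in pcs))
```

For example, a sung C–E–G fragment maps to the C chord, while A–C–E maps to A minor. A real accompaniment generator would also weigh chord-to-chord transitions and stylistic parameters, which is roughly where sliders like the “happy factor” and “jazz factor” would come in.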
It’s not on the market yet, but I find it a frightening thing to contemplate, because apparently it requires very little computing power, which means it could even run on a cell phone. That means we’ll not only have to listen to other people’s overly loud cell-phone conversations in public places; we’ll have to listen to them singing into their phones as well.
Brrr. Maybe it’s possible to take this art/science mixing thing too far.