And what rough beast, its hour come round at last, slouches towards pop pickers to be born…
Welcome back my friends, to the flamboyance that never ends, as Fabulously Flamboyant Friday sashays up to the crease to deliver yet another light-loafered, lubed-up googly from the gasworks-end of musical magnificence.
This week, as the nation took to the streets to joyfully celebrate the grand opening of a new Taylor Swift exhibition at the V&A, and as we respectfully mark the now fast-approaching International Gay Uncles Day (honestly), I’m afraid Mr Grumpy has his grumpy pants on, as we go full-on Victor Meldrew for some narrow-eyed scrutiny of a few of the technological developments that many believe are having a deleteriously disruptive impact on the world of popular music.
So, as you might expect, we’ll begin our journey at the end of the last century with the favourite industry of The Blessed Greta (peas be upon her): oil and gas exploration. ’Twas in the autumn of ’98 when an odious but epochal event took place in the music industry and popular music was changed forever – and not for the better, I might add. The event in question was the release of Cher’s single, Believe – a recording that was notable for the distinctive fractured glissandi you can hear in the video below.
This was something new. The polished metallic flutter of Cher’s voice became known as “the Cher effect” and it was eventually revealed to be a cunningly creative misuse of a digital audio processing tool that had been released about 18 months earlier.
The processing tool in question was called Auto-Tune and, as the name suggests, it was designed to automatically keep a musical performance in tune. The creator of this musical monstrosity is a classical flautist, music technology boffin, geophysicist and former oil and gas-exploration expert by the name of Dr. Andy Hildebrand.
Hildebrand had been developing fiendishly cunning mathematical algorithms for processing seismic survey data, which were used to determine the location of prime drilling sites for oil and gas exploration. Being both a musician and an audio boffin, Andy pondered the potential for lucrative spin-offs from his highly successful exploration work.
Inspiration struck (or Satan whispered in his ear) after his wife jokingly asked him to invent something that could improve her singing voice. Hildebrand’s eureka moment came when he realised the algorithms he used for ground mapping (using sound waves and their reflections) could also be used to detect the pitch of audio signals. Accordingly, Hildebrand decided to see if he could design a software-driven system to automatically re-tune digital recordings that were less than perfectly pitched. Sadly for humanity, he succeeded and Hildebrand’s Auto-Tune system was launched in 1997 to the wide-eyed delight of many a studio engineer.
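(For the boffins among you, the core trick is worth a moment’s gawp. What follows is a minimal Python sketch of textbook autocorrelation-based pitch detection – the same family of mathematics used to find repeating reflections in seismic data – and emphatically not Hildebrand’s patented algorithm, just an illustration of the underlying idea.)

```python
import numpy as np

def detect_pitch(signal, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a short audio frame
    using autocorrelation -- conceptually the same trick used to
    find repeating reflections in seismic survey data."""
    # Remove the DC offset so silence doesn't correlate with itself
    frame = signal - np.mean(signal)
    # Correlate the frame with itself at every possible time lag
    corr = np.correlate(frame, frame, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags only
    # Only consider lags corresponding to plausible vocal pitches
    lo = int(sample_rate / fmax)
    hi = int(sample_rate / fmin)
    lag = lo + np.argmax(corr[lo:hi])     # strongest repetition period
    return sample_rate / lag              # period -> frequency in Hz

# A 440 Hz test tone should come back as (roughly) 440
sr = 44100
t = np.arange(2048) / sr
print(detect_pitch(np.sin(2 * np.pi * 440 * t), sr))  # ≈ 440, give or take a lag
```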
Auto-Tune was rapidly and enthusiastically adopted by the music industry. Soon vocal performances with the trademark metallic sheen of Auto-Tune were dominating the market and, within a year or two, Auto-Tune became (as near as makes no difference) ubiquitous – and I hate it. Clearly, however, I’m in a minority. Auto-Tune revolutionised music production, the punters lapped it up and the system eventually landed Hildebrand a Grammy Award for his outstanding contribution to recording technology.
I’m happy to admit that (when used as intended) Auto-Tune is a time-saving godsend for music producers working with less-than-perfect performers. Additionally, when deployed as an artistic tool, it certainly has an interesting range of creative uses. As a perfect example we shall consider a hypothetical singer in a hypothetical recording studio after a few hypothetical takes. Everyone agrees that, for example, take 3 is absolutely sensational – a phenomenal performance, goosebumps, hairs on the back of your neck. Sadly, however, the recording has a couple of slightly duff notes, so the singer in question refuses to use it. No problem – along comes Auto-Tune, a quick fix, the duff notes are corrected and the sensational performance is rescued and saved for posterity – splendid stuff! Thank you, Auto-Tune.
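And the “quick fix” itself is, at heart, just arithmetic: detect the frequency of the duff note, snap it to the nearest semitone of the equal-tempered scale, then shift the audio to match. Here’s a toy sketch of the snapping step (real tools like Auto-Tune also resample the waveform to reach the target, and offer a “retune speed” – and yanking that speed down to zero is precisely what produces the Cher effect):

```python
import math

def nearest_semitone(freq_hz, a4=440.0):
    """Snap a detected frequency to the nearest note of the
    12-tone equal-tempered scale -- the 'correction' half of
    pitch correction, in miniature."""
    # Distance from A4 in semitones (fractional for off-pitch notes)
    semitones = 12 * math.log2(freq_hz / a4)
    # Round to the nearest whole semitone and convert back to Hz
    return a4 * 2 ** (round(semitones) / 12)

# A slightly flat A4 (434 Hz) gets pulled up to 440 Hz
print(nearest_semitone(434.0))  # 440.0
```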
But we are a lazy species, and when you’ve got a bloody big hammer that can pummel almost anything into submission, pretty soon everything begins to look like a nail. And that’s my problem with Auto-Tune: it’s overused, it’s everywhere, all the time, and I hate how it’s used to make vocals completely pitch perfect.
When a recording is devoid of any human error, when it lacks all the natural subtleties of pitch, timing and timbre, it lacks humanity, it lacks soul. And that’s what Auto-Tune does – it sucks the humanity out of a performance. Producers adore it and overuse it, turning singers into robots, no variation, no soul, no joy, no passion – vocal constructs of the machine. We simply don’t need it and A.I. is going to be doing that to the recording industry quite soon enough, thank you very much.
Luddite! I hear you cry; chuck your sabot into the machine! And it’s a fair point. After all, it’s absolutely true that after a suitable period of use, most technological advances become normalized and absorbed into everyday life – and musical endeavours are no different. And, to be fair, musicians have never been overly keen on disruptive change – and why should they? If you’ve devoted decades of sweat and blood to perfecting your art, why on earth would you welcome a technology that can let any talentless cloth-eared dolt replace you?
And it was ever thus: musicians’ unions tried to get synthesizers banned, player pianos were greeted with horror by pianists making a living from tinkling the ivories and – my absolute favourite example – the BBC once tried to ban “crooners” because of their “unnatural practices”. I think you’ll agree, that statement warrants a short diversion at this point.
Much like synths and player pianos, crooners such as Frank Sinatra, Dean Martin and Bing Crosby also relied heavily on a new technological development for their vocal style – microphones. Music hall performers, concert soloists, opera singers and the like reached their audiences with the mighty power of the voice alone. But this newfangled “crooning” style was only made possible by some seriously significant advances in microphone technology (particularly in terms of sensitivity and dynamic range), which now allowed vocalists to sing softly into the microphone and thereby create a sense of *gulp* intimacy with their audience.
These sensitive and devilishly handsome crooners (with their rakish cardigans) were immensely popular with the laydees – particularly the young ladies – and this hormonal wave of bobby-sox popularity was soon lashing the cultural shores of Blighty with considerable force. As you can imagine, although this sort of behaviour might have been perfectly acceptable out in the colonies, it did not go down well at home – particularly with the BBC – and, to be quite frank, eyebrows were raised.
Auntie was not happy with all this rampantly unbridled sexuality filling the airwaves, so in 1936, Cecil Graves, then Controller of Programmes at the BBC, attempted to ban crooning from the radio on the grounds that it was ‘unnatural’ and even went as far as calling into question the sexuality of the chaps involved. Blimey! Of course, he didn’t succeed. Crooners crooned, young girls screamed and Britain took a significant step down the dark and foetid cultural pathway that would eventually lead to the invention of *clutches pearls* teenagers!
Anyway, I digress. Back to Auto-Tune. Some critics of the system have gone as far as suggesting that by blurring the line between real and heavily processed vocals, and in particular by normalizing the experience of listening to artificially produced vocal performances, supporters of Auto-Tune (and its many rivals, such as Melodyne) are hastening the day when A.I.-produced music and video – created with absolutely no input from any human performers – will dominate the music business. After all, entertainment is big business. If the corporate giants can eliminate all that pesky (i.e. expensive) and soon-to-be-unnecessary talent, I’m pretty sure they’ll jump at the chance.
But Auto-Tune isn’t the only audio processing tool in my grumpy sights tonight. There is also the little matter of quantizing. The first warning signs, for me, came in 1980, with the arrival of the Linn LM-1 Drum Computer. This was a pretty nifty piece of kit: it was the first studio-quality drum machine to use decent-quality samples of real acoustic drums and it was also one of the first easily programmable drum computers. It was a huge success, with artists, producers and engineers welcoming it with open arms.
The reason the LM-1 and its many successors were so eagerly adopted by recording studios throughout the world is simple – drummers are a pain in the arse. Sincere apologies to any drummers on the site, but I’m afraid it’s true. I’m a (very bad) drummer myself. I still have an electronic kit that I use to maintain aerobic fitness (it’s great fun and I recommend it highly). But, in a studio, recording drum parts is a tedious and time-consuming business. It takes a good engineer and a lot of expensive equipment to get a good sound, and then you’ve got to deal with idiot drummers who keep speeding up or slowing down and can’t maintain a steady number of beats per minute (bpm).
More than a few drummers (a lot, to be honest) have been unpleasantly surprised to hear a finished track by their band and suddenly realise that they are not on it. The usual reason is that the producer, utterly exasperated by the drummer’s inability to perform as required, eventually admitted defeat and called in a battle-hardened studio professional – a session drummer – to re-record the drum parts the way they should have been played in the first place. As you can imagine, this all takes up a lot of time – and studio time is expensive time. So when the Linn drum machine came galloping over the horizon, many an audio engineer and record producer saw salvation heading their way.
But here’s the thing: it was a disaster (IMHO). Drum machines work really well in genres such as synth pop or electronic dance music (EDM), but in any genre that relies on swing or groove and feel (blues, rock, soul, jazz, reggae, funk, etc.) the metronomically perfect thump of the drum machine could easily destroy the soul of a track, turning it into a lifeless and joyless construction.
To be fair, some artists used drum machines very well indeed (Kate Bush and Peter Gabriel spring immediately to mind) but many did not, and there is still a large body of early 80s albums by artists I admire that remain unlistenable to me because of the dead hand of the Linn drum machine. It seems that – just like the subtle pitch variations Auto-Tune eliminates – all those pesky bpm variations and timing errors are what make a drum track real, what makes it groove and swing – and a great drum track is the solid foundation on which a great recording is built.
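To make the point concrete, here’s a toy Python sketch that does the opposite of a drum machine: it takes metronomically perfect hit times and deliberately roughens them up with a little swing and wobble. The numbers are plucked from thin air purely for illustration – this is nobody’s actual product, just the “feel” I’m banging on about, reduced to arithmetic:

```python
import random

def humanize(onsets_ms, grid_ms=125.0, swing=0.12, jitter_ms=4.0):
    """Take metronomically perfect drum-hit times and roughen them
    up: push every off-beat late (swing) and add a little random
    wobble (the 'feel' a machine lacks)."""
    out = []
    for i, t in enumerate(onsets_ms):
        if i % 2 == 1:                      # off-beats get delayed...
            t += swing * grid_ms            # ...by a fraction of a step
        t += random.gauss(0.0, jitter_ms)   # tiny, human-sized wobble
        out.append(t)
    return out

# Eight perfectly even 16th-notes at 120 bpm (125 ms apart)
machine = [i * 125.0 for i in range(8)]
print(humanize(machine))
```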
Anyway, happily for our tale, drum machines improved enormously and also kinda fell from favour, and by the late 80s popular music was beginning to feel real again. But the warning signs were clear and the LM-1 had been a substantial shot across the bows; and then, in 2001, along came Beat Detective, a digital audio tool bundled with Digidesign’s Pro Tools.
Beat Detective is a software rhythmic toolbox. It can do a lot of interesting things, but one of its primary features is the ability to cut an audio recording into smaller slices which can then be aligned to a time grid for a more “in-time” and rhythmically accurate sounding track. This process is called quantizing and it’s a very popular way to manipulate drum tracks. However, you can use Beat Detective to quantize pretty much any audio source – including vocals – and once you’ve done this to all the instruments on your recording, you can then edit, copy, cut and paste sections of your recording with no more effort than editing an ordinary text document.
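Stripped of the marketing, the arithmetic at the heart of quantizing is brutally simple: find the nearest grid line and drag the hit onto it. A toy sketch follows (the onset times are invented, and the real Beat Detective is considerably cleverer about detecting the hits in the first place – but the grid-snapping itself looks much like this):

```python
def quantize(onsets_ms, grid_ms=125.0, strength=1.0):
    """Pull each detected hit towards the nearest grid line.
    strength=1.0 is full, machine-perfect quantizing; lower
    values leave some of the original 'feel' intact."""
    out = []
    for t in onsets_ms:
        nearest = round(t / grid_ms) * grid_ms    # closest grid position
        out.append(t + strength * (nearest - t))  # move part/all of the way
    return out

# A sloppy drummer's 16th-notes, dragged onto a 125 ms grid
sloppy = [0.0, 131.0, 246.0, 381.0, 498.0]
print(quantize(sloppy))                  # [0.0, 125.0, 250.0, 375.0, 500.0]
print(quantize(sloppy, strength=0.5))    # halfway house: tighter, still human
```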
The results, for me, are as bad as (worse, in fact, than) the dark days of the early 80s and the dead hand of the Linn Drum. Quantizing effectively kills the feel and drive and energy of a track, leaving it lifeless and inhuman. So with Auto-Tune killing the soul and emotion of music and Beat Detective killing the groove, many of this century’s recordings by a whole host of otherwise interesting artists leave me cold, bored and emotionally disengaged. Cut ‘n’ paste music from cut ‘n’ paste artists – a very sad state of affairs indeed.
And while we’re on the subject of cut and paste music, we can’t wrap things up without considering the potential impact of A.I. on the creation and production of music.
I think the industry really began to sit up and pay attention back in 2023 when two Canadian artists, The Weeknd (Abel Tesfaye) and Drake (Aubrey Graham), were somewhat surprised by the appearance of an A.I.-generated track, Heart On My Sleeve, containing reasonably convincing synthetic versions of their vocals.
The track went viral, but was quickly taken down by the major streaming services. However, it did succeed in sending shock waves through the music industry. Since the release of Heart On My Sleeve there has been an explosion of amateur A.I. covers (Frank Sinatra singing Nirvana songs, Dolly Parton singing Fleetwood Mac songs – that sort of thing) that use A.I. technology, in part or in full, to produce familiar sounds and voices.
Many see this as a harmless lark, but it has set alarm bells ringing in the music industry, where there is growing concern about A.I. models learning from copyrighted material and then producing unlimited quantities of (at first) passable and then (almost inevitably) ever more convincing fakes. M’learned friends are sniffing the air, dusting down their tomes on intellectual property rights and copyright law, and undoubtedly detect the sweet and heady aroma of mightily lucrative litigation.
For now, royalty-free A.I. compositions are already common on the major music streaming services, usually credited to artists who have never actually existed, and A.I.-generated music is now being produced for music libraries, jingles, adverts and even film scores, cutting into an already fragile economy for working musicians.
Of course, it’s not all doom and gloom. Record companies can smell serious profits and are currently rushing to make A.I. vocal clones of their most poptastic superstars. One of their immediate plans is to train these A.I. clones to re-sing an artist’s entire back catalogue in as many languages as seems appropriate. The international earnings potential from this type of venture could be enormous.
Others are looking at the potential for significant career extension. The voice can be a fragile instrument and often ages poorly. This might no longer be a problem if you have a decent A.I. vocal clone. You could, for example, keep writing new material into your dotage and your A.I. clone could sing it as though you were still in your leather-lunged, hip-thrusting prime. Inevitably, others are looking to raise the dead, creating vocal clones of artists long since departed. Fancy a new Frank Sinatra album? I suspect you might not have very long to wait, and the glorious reign of Tay Tay may yet last for a thousand years or more.
One thing seems clear: however this plays out for the music industry, for good or ill, things seem to be moving very fast indeed, and I expect poptastically big changes over the next five years. But, for now, we must deal with some far more immediate concerns: the pineapple quiche has been gobbled, the Campari consumed and my unwaxed lederhosen are beginning to chafe. So I think that’s probably quite enough of my decidedly dour and downbeat ramblings for this evening.
So that’s yer lot for this week’s edition of Fabulously Flamboyant Fridays and I think we’ll wrap things up for tonight with a nicely topical tune.
TTFN Puffins – Good night, and may your Frog go with you. Not ‘arf.
Featured Image: Dennis AB from Amsterdam, The Netherlands, CC BY-SA 2.0, via Wikimedia Commons
© Ivory Cutlery 2024