Future Sounds: Cheap Gear and Easy Samples
Does the democratization of electronic music tools change how we hear the sounds themselves?
Like many others in the late 1990s, I decided to exchange my guitar for a synthesizer, but it wasn't clear at the time how to actually acquire one. I dragged my roommate across town in the dead of Boston winter to check out a “Minimoog,” only to learn there was a typo on the website: it was a Minitmoog. I finally acquired my first synth — a Roland Juno-6 — after reading through the classified listings, calling someone on a landline telephone, riding a bus to Providence, Rhode Island, in the middle of a January snowstorm, and handing over $180 in rumpled bills to some mustachioed guy at the “bus station” (i.e. the middle of the street). I didn’t even know if the Juno worked until I got home, and even when I plugged it in, I had to fiddle with the levers for 15 minutes before landing on the right configuration to make a sound.
Today synthesizers are ubiquitous. Amazon can deliver a new Moog to you overnight. There are a ridiculous number of new models from small companies that do extremely specific things, like reproduce '90s sound card FM-synth percussion or Speak-and-Spell glitches. Meanwhile the increased demand for classic Moog, ARP, Roland, and Korg synths has driven prices insanely high. The Juno-6 I bought for $180 now goes for $3,000. (Of course that’s still much cheaper than the original prices. In 1977 Vangelis’ favorite Yamaha CS-80 was priced at the 2022 equivalent of $32,000.)
But the truly new development is the explosion of cheap replicas. Chinese manufacturing and other breakthroughs have allowed companies to make soundalike versions of classic synthesizers for a fraction of the original cost. An original Roland TB-303, the sound behind acid house, will set you back around $4,500 on the used market; the dead-ringer Behringer TD-3 clone is $149. That’s a 97% discount! In 2022 you can acquire the foundational sounds of electronic music — not just samples — without having to make any major financial sacrifices.
Yes, “soft synths” began to solve this in the early 2000s by providing software emulations, but the actual hardware has its own charm — not just for some voodoo notions of “authenticity” but for the ability to twiddle the knobs in real time. More important, hardware is visual, which makes it ideal in an Instagram world. The blinking lights and chaotic colored cables of Eurorack modules make for great capital-C Content: science fiction contraptions generating long streams of bleeps and drones with only minimal human intervention.
Sampling is also about to undergo a similar reduction in costs. Hip-hop and jungle both repurpose short phrases from other songs, and this was a difficult venture for most of the 1980s and 1990s — not just because samplers were expensive and limited in memory, but because it was hard to find the right things to sample. Most songs don’t feature passages with minimal instrumentation that can be mixed and matched. Moreover, producers developed a strict ethical code about what should be sampled. Biting others’ samples was taboo, and producers were always in search of new drum breaks once older ones got overused. Someone once told me that DJ Shadow got into a bar fight in the 1990s when another producer suggested it was okay to sample record re-releases rather than the original disc. Even if this story is apocryphal, it reveals the shared values of the time.
The internet first democratized crate-digging by making nearly everything available for easy browsing on YouTube, but now artificial intelligence is primed to change the entire definition of sampling. The startup Audioshake’s AI pulls the “stems” out of any piece of music, so you can input a fully orchestrated, mixed-down track and, within seconds, get only the drums, or only the vocals, or only the guitar — an advanced technology indistinguishable from magic. While Audioshake is theoretically for your own songs, this augurs a near future when we can all pull every drum track out of the entire James Brown catalog. The entire genre of hip-hop developed from the fact that you couldn’t pull specific sounds out of a song, so DJs were limited to a small set of recordings that featured breakbeats. This created a cult around weirdo pop tunes like Tommy Roe’s “Sweet Pea” and The Turtles’ “I’m Chief Kamanawanalea (We’re The Royal Macadamia Nuts).” But that era is over: Every song can now be chopped up for parts.
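Audioshake keeps its own pipeline proprietary, but the underlying technique, neural source separation, is already available in open-source form. As a rough illustration (not Audioshake's actual method), here is a minimal sketch using Deezer's open-source Spleeter library on a hypothetical audio file; it assumes the package and its pretrained models are installed:

```python
# pip install spleeter  (Deezer's open-source source-separation tool)
from spleeter.separator import Separator

# Load the pretrained four-stem model: vocals, drums, bass, and "other".
separator = Separator('spleeter:4stems')

# Split a mixed-down track into separate stem files, written to
# stems/<track name>/vocals.wav, drums.wav, bass.wav, other.wav.
# 'funky_drummer.mp3' is a hypothetical input file.
separator.separate_to_file('funky_drummer.mp3', 'stems/')
```

A few lines of code, a pretrained model, and the drums come out on their own track: that is the entire crate-digging economy compressed into a function call.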
Technical limitations help define musical genres. Hip-hop, jungle, techno, and house sound like they do because of the barriers producers faced. But for those genres, the germinal limitations are now gone. Likewise, the few working electronic music producers of the time benefited from stiff barriers to participation. While classical music always had inherent means of exclusion — compositional complexity, the need to mobilize a large group of virtuosos with access to concert-grade instruments — electronic music’s deepest moat was material. You had to be able to afford the vintage gear, learn from certain people how to use it, and know the genre’s private rules well enough to win over the insiders who controlled distribution. In the 1990s suburban teens could easily form a grunge band, but few could make jungle, let alone broadcast their tracks.
So what does it mean for these genres now that the door has been opened to full democratization? Anyone can spend $1,000, turn on their gear, and make out-of-the-box sounds that resemble 1990s rave anthems. And when you're ready to go beyond presets, YouTube tutorials teach you how to do nearly everything else. Whether the net outcome of this democratization is good or bad, we have an opportunity to test a long-standing debate about culture: Does the social usage of sounds change how we hear them?
We never evaluate sound in a vacuum. An obvious example would be blistering guitar feedback, which a century ago would have been dismissed as sheer racket or, at best, avant-garde racket. But with the rise of rock 'n' roll, we came to appreciate distortion. This inherent variability in how we take pleasure in sound means social factors have a major influence on whether sounds are beloved or hated; our judgments shift with the dynamics of fads and fashion. We’ve all lived long enough to know that sounds considered “cool” at one time — gated reverb on drums, saxophone solos, dubstep wobble — become “cheesy” with overexposure. The specific rate of exhaustion depends on the individual, but as a general principle, we tend to dislike certain sounds once they are overly associated with loathed compositions or undesirable social groups. Here we find the downside of democratization: it increases the likelihood that we hear certain sounds from amateurs rather than from professionals. When making electronic music had steep barriers, the 303 showed up only in official record label releases and “classic” recordings. Now, with both cheap gear and easy access to broadcasting channels, we’re just as likely to hear the same sounds from hobbyists in IG Reels jams and goofball bedroom tutorials.
At the same time, this kind of democratization can be good for innovation in the long run, because it allows more people to create more things, which creates more competition. As once-innovative sounds become pure kitsch, ambitious artists are forced to seek out new techniques (or re-appropriate the kitsch in new ways). If cheap gear devalues all the classic sounds of electronic music in their conventional form, talented producers will push towards something new. Of course this process has already played out in the past, which is why electronic music changed over time to start with. But what does it mean when the overuse manifests at mass scale?
With synths getting cheaper and sampling about to get incredibly easy, democratization and exhaustion are serious specters haunting music production. But these aren't just issues for electronic music: they're the central cultural questions of our time.