I occasionally come across posts online from earnest young engineers who seem to approach mixing as some kind of competitive sport, one for which you should train. I don’t completely disagree. Good quality education is always a good thing, but I’m not sure about the value of ear training exercises. They do no harm, but the ability to identify a 0.25dB cut at 14kHz won’t on its own help your career. Being a nice person who is reliable and delivers what they say they will at the agreed time will probably help you more!
I’ve got enough teaching experience to know that activities which create measurable data are great for assessment but often miss the wider point to some degree. Beyond repeated practice at both recording and mixing, are there any related disciplines which will help you become a better audio engineer?
One I’ll call out straight away is music theory, whether you play an instrument or not. But I’ll leave that to one side for today. The other skill I’ll call out is a working knowledge of sound synthesis. After all, the best way to understand how sounds behave is to create some from first principles.
Subtractive synthesis is the place to start. Being able to see the waveform created by your settings is so useful that I’d recommend software synths over hardware. Pretty soon you’ll make the connection between corners on the waveform and high frequencies!
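That connection between corners and high frequencies is easy to demonstrate numerically too. Here’s a quick Python sketch (mine, purely illustrative) that builds one cycle of a sawtooth from its Fourier partials: the more partials you include, the steeper the wave can get at its corner, which is another way of saying the corner is made of high frequencies.

```python
import math

def sawtooth_partial_sum(n_partials, n_points=1000):
    """Build one cycle of a sawtooth from its first n_partials harmonics
    (Fourier series: sum of sin(k*x)/k). More partials -> sharper corner."""
    wave = []
    for i in range(n_points):
        x = 2 * math.pi * i / n_points
        wave.append(sum(math.sin(k * x) / k for k in range(1, n_partials + 1)))
    return wave

# The maximum slope between adjacent samples grows as partials are added,
# i.e. a sharp 'corner' needs high-frequency content to stay sharp.
for n in (3, 10, 50):
    w = sawtooth_partial_sum(n)
    max_slope = max(abs(w[i + 1] - w[i]) for i in range(len(w) - 1))
    print(n, round(max_slope, 3))
```

Low-pass filter a sawtooth on your synth and watch the oscilloscope: the corner rounds off, because you’ve removed exactly those upper partials.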
There are loads of free options out there if you don’t take to any of the stock virtual instruments available in your DAW. Just make sure you start with the simple instruments. Don’t try to cut your synthesiser teeth on Falcon or Alchemy, and avoid modulars unless you know what you are doing. Even then, maybe avoid them unless you have a few hours to kill (if you’ve used a modular, you’ll understand!).
Something I’d definitely recommend is setting up a synth with an oscilloscope and a spectrum analyser. Run both at the same time and keep them visible. You can use free plugins for this. A spectrum analyser isn’t hard to find; you probably already have one in one of your EQ plugins. If you need an oscilloscope (some synth plugins already have them), try MOscilloscope from Melda. With these two tools in place downstream of your synth, you’ll be ready to properly explore the first reason why I believe an understanding of synthesis is so useful to an engineer, regardless of the specific work they regularly do.
The Harmonic Series
All sounds contain some combination of pitched and unpitched elements. The majority of sounds in nature are unpitched, being principally noise-based. Think wind, waves, rustling leaves; you get the idea. Blow across the top of a bottle and you hear a distinct pitch, dictated by the dimensions of the bottle. There is still a significant noise element though, giving the characteristically breathy quality. Tap the bottle and you might hear a pure tone (actually, a wine glass is a better bet for this). This sound is pitched, and has this pitched characteristic, because the waveform repeats itself in a more or less consistent fashion for a discernible length of time.
Extremely pure tones are striking when they occur in nature; most sounds are more complex, and the character or ‘timbre’ of pitched sounds comes from their particular combination of harmonics or overtones, whose frequencies are all whole-number multiples of the lowest, or fundamental, frequency. Different proportions of harmonics differentiate sustained sounds of a constant pitch from one another. While we all appreciate this intuitively, a few minutes spent with a subtractive, analogue-style synthesiser does more to illuminate the subject than any number of paragraphs of text.
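To make the multiples concrete, here’s a tiny Python sketch (mine, not anything from a synth manual). For an idealised sawtooth, harmonic n sits at n times the fundamental with an amplitude of roughly 1/n, which is why the peaks on an analyser step downwards as they climb in frequency.

```python
import math

fundamental = 100.0  # Hz; an idealised sawtooth bass note

# Harmonic n of a sawtooth sits at n * fundamental with amplitude ~1/n.
for n in range(1, 9):
    freq = n * fundamental
    level_db = 20 * math.log10(1.0 / n)  # level relative to the fundamental
    print(f"harmonic {n}: {freq:6.1f} Hz, {level_db:6.1f} dB")
```

Note that each harmonic is a discrete line on the spectrum; everything between the lines is empty, which is where the next section picks up.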
Gaps In The Frequency Spectrum
One of the first things you’ll notice if you run a spectrum analyser on a raw oscillator’s waveform (try a low bass note with a sawtooth wave) is a series of narrow peaks, decreasing in level as they increase in frequency. This is the harmonic series of overtones, and the important things to notice are that there is a definite lower limit, the fundamental, and that there are gaps between the harmonics. The lowest harmonics in particular leave relatively significant gaps. Set up a very narrow peaking filter and sweep up this series so you hear one harmonic at a time and you’ll hear a series of intervals. If, for example, you were to boost 150Hz while playing a 100Hz sawtooth bass (slightly sharp of a G2, but we’ll keep the numbers simple…), your filter would be trying to boost the energy in one of these holes in the harmonic series.
If the note playing were to change to a D3 (nearly 150Hz) then the fundamental would sit right in the middle of this static boost, and the difference in level would be severe. It’s appropriate here to mention Surfer EQ from Sound Radix, which follows pitch changes and neatly avoids this issue of timbre changing from note to note. If you were to try this on an unpitched sound you wouldn’t find any of these gaps: noise is inherently chaotic, containing all frequencies to some extent. An understanding of the difference between pitched and unpitched sounds, and how they respond to EQ, is a fundamental skill, but complex sounds (and nearly all sounds are pretty complex, containing both pitched and unpitched elements) are a less successful illustration of this than a simple synth patch.
The point here is that different notes contain energy at different points in the frequency spectrum, and this series of harmonics with gaps between them means that static EQ, particularly very narrow boosts, can dramatically affect the timbre depending on which note is playing.
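You can sanity-check the arithmetic with a few lines of Python. The helper below is hypothetical (it’s not from any plugin); it simply reports how far a static boost frequency sits from the nearest harmonic of whatever note is playing.

```python
def nearest_harmonic(boost_hz, fundamental_hz):
    """Return the harmonic of fundamental_hz closest to boost_hz,
    and the distance to it in Hz."""
    n = max(1, round(boost_hz / fundamental_hz))
    harmonic = n * fundamental_hz
    return harmonic, abs(boost_hz - harmonic)

# A 150 Hz boost against the 100 Hz sawtooth from the example above:
# the surrounding harmonics are 100 and 200 Hz, so the boost sits in a gap.
print(nearest_harmonic(150, 100))     # 50 Hz from the nearest harmonic
# Change the note to D3 (~146.8 Hz) and the fundamental lands in the boost.
print(nearest_harmonic(150, 146.83))  # about 3 Hz away
```

The same boost is alternately inaudible and severe depending on the note, which is exactly what you’ll hear if you try it on the synth.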
Secondly, boosting the bass will only work if there is anything there to be boosted. Use an EQ on a sine wave and you’ll very quickly get the idea!
Sounds Change Over Time
Of course you can experiment with raw waveforms and filters using a signal generator and an EQ in your DAW, but any real acoustic sound changes over time, and understanding how these sounds ‘work’ is beneficial to anyone who wants to work with recordings of them. A great place to start is to create some synthesised drums. By trying to create a kick, a snare and a hi-hat you will have to deconstruct the elements which make up each of these sounds.
This kind of critical listening is, in my opinion, far more useful than identifying boosts and cuts in a listening test. Even though a synthesised snare rarely sounds realistic, it has to work the way a real snare works: a particular combination of noise and pitch. If you understand that combination, you’ll be better placed to understand why your real snare isn’t doing its job.
Doing this will take you into the features of any synth you care to use and you’ll reach for additional oscillators and envelopes, not because they are there, but because you need them to shape your sound in the required direction. Try it. Here’s a video illustrating this using the first version of Krotos’ Concept synth.
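If you’d rather see the deconstruction in code than on a synth, here’s a rough Python sketch of the kick-drum recipe: a sine wave whose pitch drops rapidly, shaped by a fast amplitude decay. All the numbers are illustrative guesses, not settings from the video.

```python
import math

SR = 44100  # sample rate in Hz

def synth_kick(dur=0.4, f_start=120.0, f_end=45.0, pitch_decay=0.05):
    """Toy kick drum: a sine whose pitch sweeps exponentially from
    f_start down to f_end, with an exponential amplitude envelope.
    All parameter values are illustrative."""
    samples, phase = [], 0.0
    for i in range(int(dur * SR)):
        t = i / SR
        # pitch envelope: fast exponential drop gives the 'thump'
        freq = f_end + (f_start - f_end) * math.exp(-t / pitch_decay)
        phase += 2 * math.pi * freq / SR
        amp = math.exp(-t / 0.15)  # amplitude envelope: fast decay
        samples.append(amp * math.sin(phase))
    return samples

kick = synth_kick()
```

Add a short burst of noise on top and you’re most of the way to a snare; shorten and brighten the noise and you’re heading towards a hi-hat.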
Returning to my earlier point about shaping the timbre of sounds by varying the relative levels of the harmonic overtones, nowhere is this more relevant than when creating sounds on a tonewheel organ. This is a very basic form of additive synthesis, which combines overtones to create, in theory, any sound imaginable. My experience is that it makes great pads and good organ sounds.
If you ever thought organs sound like… well, organs, take a few minutes with a tonewheel soft synth. Listen to Green Onions by Booker T. Next, listen to No Woman, No Cry by Bob Marley, then The Cat by Jimmy Smith. Try to match the sounds. There are only seven drawbars. It’s harder than you might think, but this kind of critical listening is the sort which I think will help all of us become better mixers.
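Additive synthesis is simple enough to sketch in a few lines of Python. The function below sums one sine per ‘drawbar’; the harmonic ratios and levels here are made up for illustration and don’t correspond to any particular organ’s registration.

```python
import math

SR = 44100  # sample rate in Hz

def drawbar_tone(fundamental, drawbars, dur=0.5):
    """Additive synthesis in miniature: sum one sine per drawbar.
    `drawbars` maps a harmonic ratio to a level 0..1; real organs fix
    the ratios per drawbar, these are just for illustration."""
    n = int(dur * SR)
    out = [0.0] * n
    for ratio, level in drawbars.items():
        for i in range(n):
            out[i] += level * math.sin(2 * math.pi * fundamental * ratio * i / SR)
    # normalise so the summed partials stay inside [-1, 1]
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]

# A made-up bright registration: sub-octave, fundamental, upper partials.
tone = drawbar_tone(220.0, {0.5: 0.8, 1: 1.0, 2: 0.6, 3: 0.3, 4: 0.2})
```

Change the level of one ratio at a time and listen: that’s the whole drawbar-matching exercise in miniature.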
Photo by Ricardo Abreu on Unsplash