Breaking Down The Four Fundamentals Of Audio & Mixing
Audio and mixing can often seem overwhelming and confusing. That’s because they are. To see things more clearly and get a better overview of how sound works, it helps to break things down to the basics. Everything you do in audio and mixing is based on four relatively simple concepts, but their potential is endless.
In this post, I will break down the four fundamentals of audio and mixing: Levels, frequency response, dynamics, and delay.
Levels
Everything you do in audio starts with levels. If no level is going into a track you want to record, you get no sound. If too much level is going into a track you want to record, you get a distorted, funny-sounding recording. And if you want to make a good mix, you’d probably want to start by setting appropriate levels.
The thing about levels is that they are measured in decibels (dB), a logarithmic scale relative to a set reference point. This matches how the human ear perceives sound, which is roughly logarithmic. For example, a 1 dB gain on a sound source is barely noticeable, a 3 dB gain is clearly noticeable, and a 10 dB gain is usually perceived as a doubling in loudness.
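To get a feel for how unintuitive the logarithmic scale is, here is a minimal sketch (Python, standard library only; the numbers are just illustrations) converting between linear amplitude gain and dB:

```python
import math

def gain_to_db(gain: float) -> float:
    """Convert a linear amplitude gain to decibels (20 * log10)."""
    return 20 * math.log10(gain)

def db_to_gain(db: float) -> float:
    """Convert decibels back to a linear amplitude gain."""
    return 10 ** (db / 20)

# Doubling the amplitude is only a +6 dB change...
print(round(gain_to_db(2.0), 1))   # → 6.0
# ...while the +10 dB we perceive as "twice as loud" actually
# requires roughly 3.16x the amplitude.
print(round(db_to_gain(10.0), 2))  # → 3.16
```

In other words, "twice as loud" to the ear is much more than twice the signal amplitude, which is exactly why meters use dB instead of raw values.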
As you might have noticed in your own DAW, the meters look very different if you use, for example, an RMS meter compared to a true-peak meter. This is because they measure different things. An RMS meter measures the average level of your audio over time, whereas a true-peak meter measures the loudest point in your audio at any given moment (including peaks that fall between samples). There are many more metering systems, and I suggest you get a basic understanding of the metering system(s) relevant to you and your productions.
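The difference between the two meter types can be shown in a few lines. This is a simplified sketch (a real true-peak meter also oversamples to catch inter-sample peaks; this one just reads the highest sample):

```python
import math

def peak(samples):
    """Highest absolute sample value -- roughly what a peak meter reports."""
    return max(abs(s) for s in samples)

def rms(samples):
    """Root-mean-square -- the average energy an RMS meter reports."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A short burst in an otherwise quiet signal: the peak meter jumps,
# while the RMS meter barely moves.
signal = [0.05] * 99 + [0.9]
print(peak(signal))            # → 0.9
print(round(rms(signal), 3))   # → 0.103
```

The same audio reads 0.9 on one meter and about 0.1 on the other, which is why the two meter types in your DAW rarely agree.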
Besides measuring the loudness of your signal, levels affect your entire perception of an audio signal. You may be familiar with the Fletcher-Munson curves, which show how different listening levels make us perceive frequencies differently. Low and high frequencies in particular are perceived drastically differently when played at different levels. This is good to know when mixing and recording, because you can use your levels as a kind of equalizer to shape your sound and mix.
Frequency response
Frequency response is a quantitative measure of how a piece of gear passes or alters the frequency content of an audio signal. In practice, frequency response determines how a sound sounds. How your voice sounds through a microphone is determined by the microphone’s frequency response. What sound you get from your guitar amp is determined by the amp’s frequency response and how you dial in the different knobs.
Frequency response has a huge impact on your audio and is important to keep in the back of your head in any recording or mixing situation. Does your synth sound a little dull and thin? Maybe if you change some of your filter settings, you’ll get a bigger bass response and some extra high-frequency content to make it cut through. Does your guitar sound boomy when you’re trying to record it? Check your mic’s frequency response and see if it’s appropriate for your purpose. Maybe change the placement of your mic, because where and how you place it drastically affects the frequency response you get.
Equalizers, exciters, bass enhancers, distortion, and so on all exist to alter the frequency response of your audio. Standard digital equalizers cut or boost frequencies that are already in your audio; exciters add frequency content to a given (usually high-frequency) area; bass enhancers do the same in the lower frequencies; and distortion adds even- and/or odd-harmonic frequency content to your audio.
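You can see the "distortion adds harmonics" part for yourself with a short sketch. Here a symmetric waveshaper (tanh, a common soft-clipping curve) is applied to a pure sine tone, and a naive DFT checks one harmonic bin; the specific functions and numbers are just an illustration:

```python
import math

def soft_clip(x):
    """Symmetric waveshaper (tanh): adds odd harmonics to the signal."""
    return math.tanh(3 * x)

def bin_magnitude(samples, k):
    """Magnitude of frequency bin k via a naive DFT (fine for a demo)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return math.hypot(re, im) / n

n = 1024
# A pure tone sitting in bin 4 -- only one frequency present.
sine = [math.sin(2 * math.pi * 4 * i / n) for i in range(n)]
clipped = [soft_clip(s) for s in sine]

# Before clipping, bin 12 (the 3rd harmonic) is empty; afterwards it
# clearly has energy -- frequency content that was not there before.
print(bin_magnitude(sine, 12))     # ~0
print(bin_magnitude(clipped, 12))  # clearly non-zero
```

Even harmonics would show up instead if the waveshaper were asymmetric, which is the difference the "even- and/or odd-harmonic" wording points at.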
Being aware of the huge impact frequency response has on your audio can help you demystify things that sometimes seem like a mystery, because most of the time, when something doesn’t sound right, it has something to do with the frequency response.
Dynamics
In music, dynamics refers to the variation between the loudest and quietest parts. This plays a big role both in performing music and in mixing and working with music. Dynamics are what give a song groove and expression. If there are little to no dynamics in a song, it can easily be perceived as dull and boring. Think of it like a movie or a play: you don’t want to see the same static scene all the time; you want variation and different expressions.
Dynamics and frequency response are closely related. The sound of your snare drum is determined by its frequency response but also by its dynamics. How hard the drummer hits, how snappy and poppy the snare feels, are all related to dynamics (and frequency response). If you barely hit the snare, the dynamics will be more tamed and give a softer and more mellow sound. The opposite is true if you hit hard. How harsh or soft your guitar sounds is determined by dynamics (and frequency response). If the strum sounds ten times louder than the pure “ringing” of the strings, you have a lot of dynamics, and it will feel more aggressive.
In other words, dynamics play a huge role in shaping both the timbre of your sound and the groove of it. In fact, every instrument’s sound is determined by a unique combination of frequency response and dynamics. What specific frequencies you hear and how much dynamic variation there is between those frequencies decide what sound you will get.
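One common way to put a number on "how dynamic" a sound is, is the crest factor: the ratio between peak and RMS level, in dB. Here is a minimal sketch with two made-up snare-like envelopes (the sample values are purely illustrative):

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB -- a rough measure of how dynamic audio is."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A snappy snare hit: one loud transient, then quiet ringing.
snappy = [1.0] + [0.1] * 63
# A heavily compressed version: the transient barely pokes out.
flat = [0.5] + [0.4] * 63

print(round(crest_factor_db(snappy), 1))  # high crest factor (~16 dB)
print(round(crest_factor_db(flat), 1))    # low crest factor (~2 dB)
```

The hard-hit snare has a much higher crest factor than the compressed one, which is the softer/more-aggressive difference described above, expressed as a number.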
So, dynamics mean more than just dialing in the right compression settings. Dynamics are the life and foundation of any song, and being aware of their profound effect on your music can help you better understand what you’re dealing with and make better decisions in music and mixing.
Delay
Time delay is more than just a cool effect to put on your tracks. Delay is all around us and is the foundation for things like reverb, flanging, chorus, and stereo. Whatever sound you make in a real room has delay in it: if you clap, the sound waves from your clap hit the walls and ceiling of the room and bounce back to your ears at different times. The “sound of the room” is basically a bunch of delayed sound waves that have been reflected off the room’s surfaces before reaching your ears.
Reverb is made up of thousands of individual delays that, when played together, are perceived as a single reverb effect. Stereo width appears when the left or right channel of an audio signal is slightly delayed relative to the other. Effects like chorus, flanging, and slapback are all made with different degrees of delaying a signal.
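At the core of all of these effects sits the same building block: a delay line, optionally with feedback so each echo spawns a quieter one. Here is a minimal sketch (Python, standard library only; the delay length and feedback amount are arbitrary demo values):

```python
def feedback_delay(samples, delay_samples, feedback, mix=0.5):
    """A minimal delay line: the delayed signal is fed back, so each echo decays."""
    buf = [0.0] * delay_samples            # circular delay buffer
    out = []
    pos = 0
    for s in samples:
        delayed = buf[pos]                 # read the echo from delay_samples ago
        buf[pos] = s + delayed * feedback  # write input + feedback into the line
        pos = (pos + 1) % delay_samples
        out.append(s * (1 - mix) + delayed * mix)
    return out

# A single click produces a train of echoes, each one quieter than the last.
impulse = [1.0] + [0.0] * 30
echoes = feedback_delay(impulse, delay_samples=8, feedback=0.5)
print([round(x, 3) for x in echoes[::8]])  # → [0.5, 0.5, 0.25, 0.125]
```

Stack thousands of these with different, irregular delay times and you get reverb; use a few milliseconds and modulate the delay time and you get chorus or flanging; delay only one channel slightly and you get stereo width.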
The possibilities with delay are enormous, and delay is pretty much everywhere, even when you don’t notice it. One of the few ways to hear a sound without delay is to generate a single sine tone in a synth and listen through headphones. You will quickly hear how dull or “dry” it sounds. But playing around with delay on a completely dry signal can help you understand how effects like the ones mentioned above work.
It’s useful to be aware of delay in producing, mixing, and recording alike. When you are producing, you need delay to breathe life and character into your song. When you are mixing, delay is useful both as an effect and as a tool to imitate different rooms and give elements stereo width. And when you are recording live sounds, you can’t avoid dealing with delay.
Summary
There’s a lot more to the concepts mentioned above, and you can read entire books about each of them. But true knowledge and understanding come from exploring and doing. Hopefully, this article has given you enough information to further develop your knowledge and understanding of the four fundamentals of audio and mixing on your own.
About Gerhard Tinius
Gerhard Tinius is a groovy producer, mixer, and audio engineer from Norway. He works as a mixing and mastering engineer while releasing his own music under the name Tinius. Gerhard also writes an (almost) daily blog about music-making and the creative process in general. Read it here.